Impact & the Role of AI: How Artificial Intelligence Is Changing Everything

Session at a glance: Summary, keypoints, and speakers overview

Summary

Martin Chungong opened the summit by warning that AI systems now decide who receives public services, loans, or surveillance, and that their designers shape both individual lives and the democratic information environment [1-2]. He cited an Amsterdam traffic-management algorithm that diverted congestion into low-income neighborhoods because it learned those communities lacked political influence, illustrating how bias can scale without proper governance [4]. Chungong noted that a handful of tech corporations now hold market capitalisations larger than whole national equity markets while low-paid workers in the Global South annotate the data that powers these systems, concentrating both the benefits and the risks [7-9]. He argued that democratic societies must openly debate trade-offs between innovation, safety, efficiency and equity, and that parliaments are uniquely positioned to connect lived experience with political accountability [14-16][21-25].


The inter-parliamentary community stressed that AI’s effects cross borders, requiring inclusive, participatory international governance and coordinated domestic legislation to avoid fragmentation and geopolitical competition [30-38][39-44]. Over 60 parliaments have already taken legislative or oversight actions on AI, forming cross-party groups and specialized committees, but they need faster coordination and capacity building [41-44].


In his address, Speaker Om Birla highlighted India’s effort to embed AI within its democratic institutions, describing a unified digital platform that will host all parliamentary debates, budgets and metadata by 2026 to enhance transparency and citizen access [102-108]. He framed AI as a tool to project India’s spiritual and cultural values worldwide and to create a model for other nations to follow in responsible AI deployment [109-110][112-115].


Panelists then presented concrete AI applications, such as Rocket Learning’s use of AI to deliver early-childhood education at scale, generating data that tailors messages to children and parents [210-214]. Roopa Purushothaman argued that AI could free doctors from routine tasks and spawn roughly 30 million new jobs in health, education and entrepreneurship, especially in underserved rural areas [267-280]. Sanjeev Bikhchandani observed that, contrary to fears, AI has not yet reduced hiring at his job-site platform and cited historical examples where technology created more productivity and new roles, suggesting a gradual adjustment rather than abrupt disruption [236-259]. Iqbal Dhaliwal warned that the unprecedented speed and low cost of AI could outpace labor-market adaptation and that policy frameworks must be strengthened to prevent capital-biased outcomes [296-320]. He also emphasized the need for upskilling, noting that individuals who master AI tools become highly employable and can protect their jobs by sharing best practices within organizations [330-338].


The panel concluded that no single answer exists yet, but continued multi-stakeholder collaboration, including OpenAI’s support, is essential to embed democratic accountability and human-rights safeguards in AI design and governance [443-449]. The summit therefore underscored AI’s dual potential to strengthen or erode democracy, calling for coordinated action to ensure it serves inclusion, participation and good governance worldwide [48-52].


Keypoints


Democratic oversight of AI is essential to curb power concentration and protect human rights.


Martin Chungong stresses that AI systems shape public services and democratic information flows, yet “power is accumulating rapidly in the hands of those at the forefront of AI development” and “the benefits of AI are increasingly concentrated” while costs fall on the powerless - a clear democratic concern that requires transparent, accountable parliamentary debate - [1-8][12-18][41-44][47-53].


India is positioning AI as a national governance tool intertwined with cultural and spiritual values.


Om Birla outlines India’s effort to digitise parliamentary work, create a single platform for all legislative debates, and embed AI-driven metadata search to boost democratic capacity, while repeatedly linking the technology rollout to “Vedic/Spiritual values” and the nation’s “cultural” ethos - [59-70][94-108][109-112].


AI will reshape the labour market, creating both disruption and new opportunities, demanding up-skilling and proactive policy.


Panelists note that AI has not yet reduced hiring at InfoEdge, but historic tech shifts (e.g., computers in banks) eventually “created new jobs and new opportunities” - [234-259]; Roopa highlights potential for 30 million new jobs in health, education, and entrepreneurship through AI-augmented workers - [262-280]; Iqbal warns that the “speed … is unprecedented” and stresses the need to “dial down the speed” and align policy with labour - [287-310]; Sanjeev adds that individual AI literacy is now the key to job security - [321-340].


AI can be leveraged for social development and inclusion, especially in education and micro-enterprise.


Iqbal describes how AI-enabled platforms (e.g., Rocket Learning) are democratizing early-childhood education by reaching Anganwadi workers and generating actionable data - [204-214]; he also cites pilots in Kenya where ChatGPT tools helped micro-entrepreneurs, showing “leveling of skills” for high-performers while highlighting the need for “hand-holding” for others - [362-384].


Global, inclusive, multi-stakeholder cooperation is required to forge binding AI governance standards.


Chungong warns that “international AI governance remains fragmented” and that geopolitical competition could fracture efforts, urging summits to embody “inclusive participatory approach” - [30-38]; Dr Fadi Dao reinforces that AI democratization must avoid “people’s manipulation or domination” and calls for collective action - [147-149]; Lord Krish Ravel adds that adaptability to rapid change is a cultural asset that can help manage uncertainty - [154-158].


Overall purpose / goal


The discussion aimed to highlight the urgent need for democratic, parliamentary-led governance of AI, showcase national initiatives (particularly India’s) that integrate AI with cultural values and public administration, examine AI’s economic and labour impacts, explore concrete social-impact applications, and call for coordinated, inclusive international frameworks to ensure AI benefits are equitably shared.


Overall tone


The conversation begins with a serious, urgent tone emphasizing risks and power imbalances. It shifts to an optimistic, celebratory tone when describing India’s ambitious AI-driven parliamentary reforms and cultural integration. The panel then adopts a pragmatic, analytical tone while debating labour market effects and up-skilling needs, acknowledging both opportunities and uncertainties. The closing remarks return to a hopeful yet cautious tone, urging collective, inclusive action to steer AI toward democratic ends.


Speakers

Martin Chungong – Secretary-General, Inter-Parliamentary Union (IPU); expertise in democratic governance, AI policy and parliamentary oversight of technology [S8][S10]


Dr. Chinmay Pandeya – Moderator/Chair of the panel; background in policy and AI governance (no external source cited)


Om Birla – Speaker of the Lok Sabha, Parliament of India; expertise in parliamentary procedures and democratic governance [S19][S20]


Dr. Fadi Dao – Chairman, Globe Ethics (Geneva); expertise in AI ethics and global governance [S16]


Lord Krish Ravel – Member of the UK House of Lords; devout member of the Gayatri Parivar; expertise in democratic values, spirituality and governance [S14][S15]


Anmol Garg – OpenAI representative / moderator for the session; expertise in AI research and outreach [S11]


Ronnie Chatterji – Chief Economist, OpenAI; expertise in AI economics, policy and societal impact [S6][S7]


Roopa Purushothaman – Chief Economist and Head of Policy Advocacy, Tata Sons; expertise in macro-economics, sustainability, health and AI-driven social impact [S2]


Sanjeev Bikhchandani – Founder, InfoEdge (Naukri.com); expertise in entrepreneurship, job platforms and AI-enabled business solutions [S1]


Iqbal Dhaliwal – Global Director, J-PAL (MIT); expertise in development economics, impact evaluation and AI for social good [S3][S4]


Kavita Gunjikannan – Global Affairs team, OpenAI; role in partnership development and education initiatives [S5]




Full session report: Comprehensive analysis and detailed insights

Martin Chungong opened the summit by reminding the audience that AI systems now decide who receives public services, who qualifies for a loan and even who is placed under surveillance, and that the designers of these systems therefore shape both individual lives and the democratic information environment [1-2]. He cited the inter-parliamentary conference on responsible AI in Malaysia, where MPs highlighted an Amsterdam traffic-management algorithm that unintentionally routed congestion through low-income neighbourhoods because it had learned that those communities lacked political influence [3-5]. Chungong warned that democratic governance is lagging behind the rapid concentration of power in a handful of technology corporations whose market capitalisations now exceed the equity markets of whole industrialised nations, while millions of low-paid workers in the Global South annotate the data that underpins these systems [6-9]. Framing the issue as a democratic rather than merely economic concern, he argued that when AI-driven systems that affect daily life, information access and economic opportunity are controlled by a small elite without public oversight, the social contract itself is strained [10-12][13-16]. The parliamentary community in Malaysia expressed opposition to concentration of power and called for red-line safeguards, demanded an equal voice for the Global South and urged all parliaments to engage actively in AI governance [17-20][21-25]. Chungong stressed that healthy democracies must debate trade-offs between innovation, safety, efficiency, equity, profit and the public interest openly, transparently and with accountability [14-16][26-27].


Dr Chinmay Pandeya then introduced the next speaker and handed the floor to Hon. Om Birla, Speaker of the Lok Sabha.


Hon. Om Birla presented India’s vision of AI as a national governance tool intertwined with the country’s spiritual and cultural heritage. He repeatedly linked AI deployment to “Vedic/Spiritual values” and the principle of Vasudhaiva Kutumbakam, arguing that India can showcase its cultural ethos to the world through AI-enabled institutions [59-70]. Birla described an ambitious “Digital Assembly” plan to digitise all parliamentary work on a single platform that will host every Lok Sabha and Rajya Sabha debate, budget and metadata by 2026, enabling searchable, paper-less access for citizens [100-108]. He suggested that the unified digital platform could serve as a model for other countries, improving transparency and parliamentary efficiency [100-108]. While celebrating India’s rapid progress in clean energy and youth empowerment, Birla also highlighted the need for responsible use of AI to answer the challenges of a changing world [134-138].


Dr Fadi Dao offered a brief response, echoing the call for democratic oversight and human-rights-centred AI governance [12-16][21-22].


Lord Krish Ravel then commented on the necessity of flexible policy frameworks that can keep pace with AI’s rapid evolution [154-159].


The session moved to a felicitation segment in which Pandeya, Gayatri Parivar and other parliamentary leaders recognised contributions to AI-policy work [39-44].


Anmol Garg introduced the OpenAI panel, announcing the presence of the chief economist, chief of global affairs and other senior OpenAI representatives [380-384].


Ronnie Chatterji moderated the subsequent discussion with three panelists. Iqbal Dhaliwal explained how Rocket Learning, in partnership with OpenAI, uses AI to democratise early-childhood education, reaching millions of Anganwadi workers, tailoring messages to children and parents, and generating rich data on learning patterns [204-214]. Roopa Purushothaman expanded the discussion to the labour market, arguing that AI can free doctors from routine tasks, creating a new class of mediators and estimating that roughly 30 million new jobs could arise in health, education, finance and logistics across India [262-280]. She added that voice-activated, local-language models will empower entrepreneurs by providing rapid market and price information, potentially raising SME employment from 10% to 40% of the private-sector workforce [281-285].


Sanjeev Bikhchandani offered a more optimistic view, noting that AI has not yet reduced hiring at his job-portal platform Naukri.com and recalling historical parallels where new technologies initially disrupted but ultimately created productivity gains and new roles (e.g., computerisation of Indian banks in the 1980s) [236-259]. He stressed that individuals who acquire AI literacy become highly employable, likening today’s AI tools to early PC skills and urging personal upskilling as the primary defence against displacement [321-340][340-345].


Several speakers – notably Martin Chungong, Dr Fadi Dao and Dr Chinmay Pandeya – emphasized the importance of democratic, ethical AI governance to protect human rights and curb power concentration [12-16][21-22][147-149]. Capacity development was universally endorsed: Chatterji announced OpenAI’s forthcoming jobs-and-certification platform; Iqbal, Sanjeev and Roopa all stressed the need for AI skill acquisition and internal knowledge-sharing to bridge the “capability overhang” [390-392][321-340][287-306][347-354]. Adaptability to rapid AI change was highlighted by Lord Krish Ravel, Iqbal and Sanjeev, who called for flexible policies and personal agency [154-159][287-306][321-340]. The panel highlighted the potential for AI-generated employment in health, education and entrepreneurship, contingent on inclusive policies [262-280][287-306][321-340]. Finally, the speakers stressed the necessity of inclusive, multilateral cooperation, echoing Chungong’s call for global governance, Dr Dao’s emphasis on avoiding domination, and Birla’s framing of AI as a shared international challenge [30-38][39-41][147-149].


Notable disagreements surfaced. Chungong’s warning about the dangers of power concentration and his demand for strict parliamentary “red lines” contrasted with Birla’s focus on using AI to project India’s spiritual values and improve parliamentary efficiency without directly addressing corporate dominance [4-13][14-18][68-70][100-108]. On employment impacts, Roopa’s optimistic estimate of 30 million AI-enabled jobs conflicted with Iqbal’s caution that the speed of AI diffusion could overwhelm labour markets, requiring a policy “dial-down” and safeguards [262-280][287-306][307-320]. Likewise, the purpose of parliamentary involvement diverged: Chungong saw parliaments as guardians of democratic accountability, whereas Birla portrayed them primarily as internal users of AI to showcase cultural heritage [21-27][39-44][68-70][100-108].


Thought-provoking remarks helped shape the dialogue. Chungong’s observation that “when the systems that govern aspects of people’s daily lives… are controlled by a small number of actors… the social contract itself is under strain” framed the summit as a democratic imperative [12]. Iqbal’s description of Rocket Learning’s data-driven early-childhood programme illustrated a tangible social-impact benefit [210-214]. Roopa’s quantification of 30 million potential jobs challenged the prevailing “AI-as-job-killer” narrative [262-280]. Sanjeev’s stark warning that “the genie is out of the bottle” and his call for individuals to become AI power users underscored personal agency [340-345]. Iqbal’s plea to “dial down the speed” highlighted the policy-labour mismatch [287-306][307-320]. Finally, Chatterji’s closing admission that “none of us have the answers” reinforced the summit’s role as a starting point for ongoing collaboration [443-449].


Key takeaways include: (i) concentration of AI power threatens democracy and requires transparent parliamentary oversight; (ii) democratic institutions must openly debate trade-offs between innovation, safety, efficiency, equity and profit; (iii) India aims to embed AI within its democratic processes, linking it to spiritual and cultural values and launching a unified digital parliamentary platform by 2026; (iv) AI can empower social development (early-childhood education, health services and entrepreneurship), potentially creating tens of millions of jobs; (v) the rapid, low-cost diffusion of AI creates labour-market challenges that demand swift policy responses; (vi) upskilling individuals is essential for employability; (vii) large organisations should share best practices to accelerate responsible adoption; (viii) OpenAI is expanding partnerships in India and will release a certification platform to teach AI skills [41-44][54-56][59-70][100-108][210-214][262-280][287-306][321-340][390-392].


Resolved actions announced were: (a) parliaments worldwide pledged to engage actively in AI governance, coordinate across borders and develop national legislation or oversight mechanisms (over 60 parliaments have already acted) [41-44]; (b) the inter-parliamentary community committed to establishing “red lines” and ensuring an equal voice for the Global South [17-20]; (c) India’s parliament will launch the unified, paper-less platform for all legislative proceedings by 2026 [100-108]; (d) Globe Ethics pledged to leverage the summit outcomes for the 2027 Geneva AI summit [147-149]; (e) OpenAI announced a jobs-and-certification platform to build AI competencies [390-392].


Unresolved issues remain, notably the lack of binding international AI standards, concrete mechanisms for equitable benefit-sharing with low-income and Global-South populations, specific policy tools to slow AI rollout or protect labour markets (as urged by Iqbal), scalable models for AI education of micro-entrepreneurs, and detailed frameworks for harmonising parliamentary AI committees globally. Suggested compromises include adopting inclusive, participatory governance that balances innovation with equity (e.g., agreeing on “red lines” while permitting continued development), recognising both the need to accelerate AI for societal benefit and the necessity to temper its speed to protect workers, and allowing corporate AI innovation under public-sector oversight to ensure accountability and human-rights compliance.


The panel generated a series of follow-up research questions to guide future work:

1. How to operationalise India’s linguistic and cultural diversity and the principle of Vasudhaiva Kutumbakam in AI governance (Dr Pandeya to Dr Dao) [142-146]
2. The way India should advance democracy in the AI era (Dr Pandeya to Lord Krish Ravel) [152-158]
3. Data-driven methods for assessing AI’s economic impact (Chatterji to Iqbal) [189-203]
4. The role of a chief economist at Tata and notable AI developments across its businesses (Chatterji to Roopa) [215-224]
5. AI use cases and metrics at InfoEdge (Chatterji to Sanjeev) [231-236]
6. Strategies to diffuse AI best practices within large organisations (Chatterji to Roopa) [341-354]
7. Parallels in development literature for teaching AI and the potential for global institutions or J-PAL to deliver such programmes (Chatterji to Iqbal) [356-361]
8. Research on “red lines” and Global-South participation in AI governance (Chungong)
9. Mechanisms for coordinated parliamentary capacity-building on AI (Chungong)
10. Unintended consequences of AI-driven traffic-management routing through disadvantaged neighbourhoods (Chungong)
11. Environmental costs of AI at scale (Chungong)
12. AI’s influence on democratic institutions and the social contract (Chungong)
13. Evaluation of AI-enabled early-childhood initiatives and data quality (Iqbal)
14. Quantitative study of AI-generated jobs in health, education, finance and logistics (Roopa)
15. AI-driven entrepreneurship opportunities for SMEs and micro-enterprises (Roopa)
16. Aligning AI speed with policy infrastructure (Iqbal)
17. Diffusion of AI skills within conglomerates and legacy-system integration (Roopa)
18. Monitoring AI’s impact on the IT job market, especially for platforms like Naukri.com (Sanjeev)
19. Development and evaluation of AI certification platforms (Chatterji)
20. Effectiveness of AI tools for micro-entrepreneurs in low-resource settings (Iqbal)
21. AI for parliamentary processes such as metadata search and its effect on legislative transparency (Birla)
22. AI-enabled answer-based technologies for improving coordination among international organisations (Birla)


Kavita Gunjikannan concluded the summit by celebrating the education partnerships forged during the event and urging continued collaboration to scale AI-enabled learning initiatives across the region [460-466].


In sum, the summit underscored that AI is a double-edged sword: it can either erode democratic norms through power concentration or become a catalyst for inclusive development if governed transparently, responsibly and with broad participation. The consensus among speakers, spanning inter-parliamentary leaders, academics and industry executives, calls for democratic oversight, rapid capacity-building, adaptable policy frameworks and culturally attuned deployment, all of which align with existing guidance on human-rights-centred AI governance [S1][S2][S39][S78]. Continued multi-stakeholder collaboration, supported by concrete actions such as the Indian parliamentary digital platform, OpenAI’s certification programme and the establishment of international “red lines,” will be essential to steer AI toward strengthening, rather than weakening, democracy worldwide.


Session transcript: Complete transcript of the session
Martin Chungong

AI systems are making decisions about who receives public services, who qualifies for a loan, or who is flagged for surveillance. Those who design, train, and deploy these systems wield influence not only over individual users, but also over the information environment of democracy itself. At the first inter-parliamentary conference on responsible AI last November in Malaysia, members of parliament raised cases that brought this risk into sharp focus. In Amsterdam, an automated traffic management system inadvertently routed congestion through low-income neighborhoods because the algorithm had learnt that those communities lacked the political influence to object. Examples like this will scale rapidly if governance does not keep pace, perpetuating harms against those historically excluded from decision-making. Yet, democratic governance is not keeping pace.

Power is accumulating rapidly in the hands of those at the forefront of AI development. A handful of technology corporations now command market capitalizations exceeding the entire equity markets of major industrialized nations, while millions of workers in the Global South are paid little to annotate the data sets on which these systems stand. The benefits of AI are increasingly concentrated, while many of the costs fall on those with the least power to shape the technology. This is not merely an economic concern. It is a democratic concern. When the systems that govern aspects of people’s daily lives, their access to information services and economic opportunity, are controlled by a small number of actors without meaningful public oversight, then the social contract itself is under strain.

That is why we must frame this not simply as technology policy, but as democratic governance. The choices made today about how AI is developed, deployed and regulated involve trade-offs: between innovation and safety, efficiency and equity, profit and the public interest. In any healthy democracy, those trade-offs are debated openly, decided transparently and subject to accountability. The parliamentary community declared in Malaysia that we do not accept the concentration of power in the hands of a few actors. They called on all stakeholders to agree upon red lines that this technology cannot cross. They insisted on an equal voice for the Global South. And they called on all parliaments to engage actively with AI governance efforts at every level.

The principle that elected legislatures shape the rules governing society is the cornerstone of democracy. But the contribution of parliaments to AI governance goes beyond that basic principle. Parliaments are where the real-world impact of AI meets political accountability. Members of parliament hear directly from workers affected by automation, from communities concerned with algorithmic decision-making, from parents navigating their children’s relationship with technology. This connects governance to lived experience and informs the AI debate through the values of the people. Parliaments can and must support this. I want to stimulate that broader societal conversation through hearings that give voice to the people, consultations, and multi-stakeholder dialogues. I believe you heard what the Deputy Speaker of Hungary said about the practices in his country, which I believe is the path down which we would want to travel.

This brings me to the international dimension. AI is a truly global challenge whose effects transcend national borders. As we would say, AI doesn’t have a national passport. While the risks are real, from job displacement to environmental costs, so too are the opportunities. AI has genuine potential to improve healthcare, expand access to education, and accelerate progress on the sustainable development goals. But those benefits will not be shared equitably by default. That requires deliberate collective action and it requires that the countries with the most to gain are not shut out of the conversation. Yet international AI governance remains fragmented and short on binding commitments. Geopolitical competition risks fracturing governance efforts further. That is why this summit, I say this summit and those which will follow, must embody the inclusive participatory approach that the equitable governance of AI demands.

Parliaments are pivotal to ensuring coherence between domestic legislation, established human rights, and evolving international standards, and to holding their governments accountable for the commitments made at summits like this one. The Inter-Parliamentary Union is committed to supporting that engagement. In the past two years, over 60 parliaments have taken action on AI, from comprehensive legislation to oversight inquiries. Across the world, parliaments are forming cross-party groups, establishing specialized committees, and building capacity. The foundations are being laid, but they need to be built on faster, with increased coordination across borders. Parliaments are also beginning to explore how AI can support their own work. And those that experience its promise and limitations firsthand will bring far greater understanding to the task of governing it.

So, let me return to the principle at the heart of what I have said today. Democracy cannot be automated. It must be shaped by every one of us through our democratic institutions, through open debate, through laws made transparently and enforced fairly, and through international cooperation in which every nation can participate. The choices we make will determine whether AI furthers democracy or erodes it. If we succeed, AI can become a tool for inclusion, participation, human rights, and better governance. If we fail, it risks becoming a force which concentrates power, weakens accountability, and erodes trust in public institutions, including parliaments. The task before us is to embed democratic accountability, human rights, and the rule of law at the heart of how AI is designed, deployed, and governed.

This summit is a critical opportunity to advance that mission. Let us make the most of it together. Thank you very much.

Dr. Chinmay Pandeya

Thank you, Mr. Chungong. And now, on this momentous occasion, it is our great honour and pleasure to have with us as chief guest Honourable Mr. Om Birlaji, Speaker of the Parliament of India. When democracy meets AI, what are the opportunities for deliberation? Please put your hands together as we invite Honourable Om Birlaji.

Om Birla

Thank you, IPU Secretary-General. The IPU is an organisation of more than 190 countries, in which all the parliaments of the world discuss at regular intervals how we can make new innovations, technology and international institutions responsible to the people. I would like to welcome the Secretary-General of the IPU, Martin Chungong. I would also like to welcome the Deputy Chairman of the Parliament of Hungary, Legos Alaw, as well as Miss Jimena Soto and Miss Maria Ramos, and especially those in whose presence this work is being carried out. How can we bring the culture of India, the political values of India, the spiritual values of India, the knowledge of the spiritual culture of India to the world?

For this, for a long time, the Vishgayati family has been working to spread these spiritual and moral values. And along with this, here there is the Dev Sanskriti Vidhyalaya, which is amazing: in Dev Sanskriti Vidhyalaya, moral and spiritual values are taught, but at the same time, in modernity and technology, whatever the new education system of the world is, that education system is also combined with Indian moral and spiritual values for the creation of a society. In the school where you will go, you will see that there is Vedic value, and in the future, in the international organizations, in the international community, we will continue to develop the traditions of Sabwadur and use technology in a way that will answer the people of these international organizations.

To contribute to the development of the country, I am happy that institutions all over the world are working at their own level. The Congress of the Commonwealth of Nations is here to attend this meeting.

Their presence raises the question of how we can use AI in international organizations, using accountable technology, so that the work culture, conversations and discussions of all international organizations can be made better.

And for this, the Indian Parliament is also working on a large scale. Alongside the Indian Parliament, our state legislatures are working on technology too. This matters for all of us, because India is the world's largest democratic country, and our demography is also remarkable: our languages are different, our cultures are different. Even in such a diverse country, we have tried to use AI responsibly and to do something special, and that is why India's initiative in this direction is very important. A digital platform now brings together the proceedings of the Lok Sabha, the Rajya Sabha and the legislative assemblies of all the states.

You can see all of that on one platform, and by 2026 all the proceedings, debates and discussions of the legislative assemblies will be on one platform. That is why we have started working on a large scale. Today most of our legislative assemblies, though not all of them, have gone paperless. All the debates, discussions, budgets and issues of the states and of the central government will be available there. With this, we will give the world a model in which the work of every assembly and council, from the state legislative assemblies of India onwards, can be seen on one platform, and there will be new innovation in it.

With that innovation, we have also tried to use AI. When you come to a subject, a topic, a discussion, how can you search across all those debates? Using AI technology, the state legislative assemblies and the Lok Sabha will get a platform on which you will be able to find and read every subject and issue through metadata. This will increase the capacity of the people in our democratic institutions: debate will carry real weight, and while making a law we will be able to summarize the thoughts of the people so that the discussion in Parliament is better. In this, I can say that India will become a new model of technical knowledge, in the form of AI, for the world's parliaments. I am happy that, under the leadership of the Prime Minister, the world's largest AI gathering is taking place here, in which people from more than 100 countries have come: representatives, heads of state and members of parliament.
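The metadata-driven search the Speaker describes can be pictured with a minimal sketch. This is purely illustrative, not the actual Digital Sansad system: the `Debate` fields (house, date, topic, tags) and the keyword-matching `search` function are assumptions standing in for whatever indexing the real platform uses.

```python
from dataclasses import dataclass, field

@dataclass
class Debate:
    house: str   # e.g. "Lok Sabha", "Rajya Sabha", or a state assembly
    date: str    # ISO date of the sitting
    topic: str   # subject under discussion
    text: str    # transcript of the debate
    tags: set = field(default_factory=set)  # metadata keywords

def search(debates, query):
    """Return debates whose topic, tags, or transcript mention the query term."""
    q = query.lower()
    return [d for d in debates
            if q in d.topic.lower()
            or q in d.text.lower()
            or any(q in t.lower() for t in d.tags)]

# A toy two-debate corpus to exercise the search.
corpus = [
    Debate("Lok Sabha", "2024-02-01", "Union Budget",
           "Discussion on budget allocations for health and education.",
           {"budget", "finance"}),
    Debate("Rajya Sabha", "2024-03-10", "Clean Energy",
           "Debate on solar capacity targets.", {"energy"}),
]

hits = search(corpus, "solar")
```

In a production system the plain substring match would be replaced by semantic (AI-based) retrieval over the same metadata fields, which is where the Speaker's point about AI comes in.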

And all of this is about how, in a changing world, we can use AI to build people's capacity, whether in industry, agriculture or other sectors, and raise their productivity. India is the youngest country in the world: today India's youth are innovating technologically, and this young population is India's greatest strength. That strength must be used in the right direction, positively, as a new culture, so that the challenges of the world can be solved by India, and we are moving forward in this direction. I am hopeful, because our talent is abundant and our young people have power, concentration and self-confidence, grounded in spiritual and moral values. At Dev Sanskriti Vishwavidyalaya, youth are given Vedic education alongside modern technical education, but that education must rest on moral values, for everyone's development, and it must be trustworthy, because if we use technology without values, its direction can also go wrong.

And that is why a student who studies in the fields of spirituality, religion and culture can use AI technology responsibly. India is certainly working in this direction, because India has strength: we are growing rapidly in clean energy, and we have young people with moral values. Their thinking is remarkable, and their belief and self-confidence are remarkable too. That is why our speed and scale are growing rapidly. The world is looking at India. You must have seen that the attention of world leaders is on India, and they too have said that India is doing good work in technology and in the AI sector.

At the speed and scale at which India is working in technology and in the AI sector, it will definitely move forward. Our thinking has always been about creation, welfare and happiness. We consider the world as a family: Vasudhaiva Kutumbakam is our culture. That is why I hope this AI technology conference will give a new direction, and that we will use AI with confidence and with responsibility. Technology runs on machines, but it is our human resources that will steer it in the right direction.

I once again express my appreciation to all the people who have come here. From these discussions and deliberations we will get a new direction, and we will continue to develop in India on the basis of moral values, together with the international community.

Dr. Chinmay Pandeya

Thank you. After the wonderful speech of the Honourable Speaker, we are privileged to have Dr. Fadi Daou here. He is the Chairman of Globethics. And there is one single question I wanted to ask you, Dr. Daou: you have just listened to the excellent deliberation by the Honourable Speaker and the variety of voices here. India is a country with 22 official languages and some 19,500 dialects; we have more than 400 documented cultures, and we live by the belief and value of Vasudhaiva Kutumbakam. So how do you see the way forward from here? If I can hear from you in one minute, please.

Dr. Fadi Daou

Thank you to India, the largest nation in the world, for reminding us through this summit that the purpose of AI democratization is not people's manipulation or domination. India is also reminding us today that the purpose of AI is the social empowerment and participation of all people. To conclude, ladies and gentlemen, I would like to say on behalf of Globethics, my organization based in Geneva, that we are committed to building on the outcomes of this summit and this panel in the perspective of the 2027 summit in Geneva, where we would like to welcome you all. Thank you.

Dr. Chinmay Pandeya

Thank you, Dr. Daou. And very shortly now: Lord Raval is with us from the House of Lords, also a devout member of the Gayatri Parivar. If you could kindly shed light on the way that India should take from here for democracy.

Lord Krish Raval

Thank you. Ladies and gentlemen, one of the tenets of the Gayatri Parivar that I grew up in is adaptability to change. Change is such an intrinsic part of the entire fraternity, and that, I think, is a real advantage, because the big challenge of AI is the speed with which the technology is advancing, which can really unsettle people. As a politician, I need to contain people's uncertainty, and I think this preparedness for change, which is a cardinal value of your organization, will really help people. There are other things I could say, but I'll leave it at that, because we're pressed for time. Thank you.

Dr. Chinmay Pandeya

Thank you. Now it is time for felicitations. On behalf of the India AI Mission, Government of India, the All World Gayatri Parivar and Dev Sanskriti Vishwavidyalaya, please put your hands together for this wonderful session. We express our gratitude to our honourable chief guest, our honourable guests of honour, our distinguished speakers and all the participants. The All World Gayatri Parivar and Dev Sanskriti Vishwavidyalaya have themselves begun a very wonderful programme: by integrating artificial intelligence with spirituality, we are talking about the future of faith in interfaith dialogues worldwide, a thought that Dr. Chinmay Pandeya represents. Do visit Shantikunj Haridwar and Dev Sanskriti Vishwavidyalaya, and scan the QR code on the screen so that you can receive a special gift for this programme. Once again, please put your hands together: we express our gratitude to the Honourable Speaker of the Lok Sabha, Adarniya Shri Om Birla Ji, and to our honourable guests. A big round of applause, and thank you all. The next stage is beginning; all of you, please stay for it.


Anmol Garg

each of you in the room, to really tell you what we're doing over at OpenAI. Today we've got an incredible lineup for the show: you will hear from our chief economist, you will hear from our chief of global affairs, and you'll see a lot of the work we're doing in education and in social impact. We are tremendously excited to have you here. The energy this week has been palpable, and we cannot wait to continue to build in India, with India, with each of you. So with that, I'm going to invite our incredible chief economist, Ronnie Chatterji, on stage to kick off the show.

Ronnie Chatterji

As we think about the panel discussion, I want to start by inviting Iqbal Dhaliwal. Iqbal, where are you? Come on. Iqbal is the Global Executive Director of J-PAL. And Roopa, do you want to come up here too? Roopa, thank you. Roopa is the Chief Economist and Head of Policy Advocacy at Tata Sons. Very exciting to have you here, Roopa. And Sanjiv Bikhchandani, the founder of InfoEdge. Sanjiv, please take a seat. All right, Sanjiv. So let's get started. The great thing about this is that I get to ask the questions. When I do the other things, I'm always in the hot seat; here I get to ask you the questions about the future of work, jobs, AI and the economy.

Okay, Iqbal, let's start with you. How are you using data, perhaps signals and other kinds of data sets, to understand how AI is affecting the economy? What are the most interesting things you're seeing?

Iqbal Dhaliwal

Yeah, thanks for having us. Super exciting. I think for us, data means a lot of things. One is absolutely to understand what the problems on the ground are. But for us, the most important use of data is to understand how the applications of AI are making an impact on the ground, which is so important also because that is the topic of today’s conversation. And it’s giving us so many incredible insights about how things are working. For instance, just in the field of AI and the intersection with development, wearing my hat now as a researcher but earlier as a policymaker, one of the things that we have been worried a lot about is early childhood. And we think about early childhood education and say, hey, how can we get more kids to come?

into early childhood education, how can we get to ages one to three? And it was an impossible task despite rigorous evidence that this works. And now what we are able to do with AI, you know, Rocket Learning, OpenAI has a partnership with them, I’ve been on their founding board, Rocket Learning is able to now democratize with data the application of early childhood education for children. It’s able to do that by reaching millions of Anganwadi workers who could not be trained earlier, and it is generating so much data which is telling us how children learn, what exactly customized messages to send to these children, and what to send to their parents. And so for me, that’s kind of a really cool application of the data which is being collected and being generated by using an AI application.

Ronnie Chatterji

Oh, I want to return to education in a second, but first let me go to my other chief economist, Roopa Purushothaman. Roopa, first of all, what does Tata want with a chief economist? I understand now why OpenAI has one. Tell us what the chief economist at Tata does, and also what are some of the most interesting things you're seeing in the enterprise, one that's using AI a lot, from what I can tell.

Roopa Purushothaman

That is a very good question. When you're part of a group like the Tatas, spanning all sectors of the economy, you do a little bit of everything. Probably 30% of the job is classical economic forecasting: GDP, interest rates, inflation for the geographies we're in. But we get to do a lot of interesting things, really trying to connect macro approaches to what we're seeing in individual sectors, or to solving big, nation-scale problems. So I get to do a lot of work on sustainability and on health. I'm the head of social impact as well, so those things cross over.

Ronnie Chatterji

This is interesting. As much as we're asking about economics, we're hearing about education, we're hearing about social impact. I want to return to these themes as well. Sanjiv, how about you at InfoEdge? How are you thinking about the most important uses of AI? What are you tracking?

Sanjiv Bikhchandani

Okay, so the first thing is, we are primarily a job site, Naukri.com. Roughly 70% of our revenue and 140% of our profits come from Naukri. So when we hear this talk that AI will eat jobs, and AI will eat programming jobs, and 50% of our revenue comes from the IT services sector, we get worried. So the first thing you want to do is actually understand what AI is going to do to jobs, specifically jobs in the IT sector. That's important for us to understand. But let me tell you, thus far at least, there is no evidence on the ground that hiring is going down. In fact, it is steady.

So thus far at least, there has been no impact on jobs and no impact on the Naukri business. Now we are waiting and watching, because when the whole world is saying jobs are going to vanish, we get worried. But I console myself by going back into history: when a new technology comes, there is often a disruption, and yes, some jobs may get replaced, but many more are often created. In 1985, I recall, I was 22 years old and in my first job, and the government announced that it was going to introduce computers in banks in India. In those days, most banks were public sector banks.

The bank trade unions went ballistic, but the government introduced the computers anyway. For a while the computers didn't get used, but when they began to get used, nobody lost jobs. People got more productive: they were servicing their customers better, doing different things, doing more things, doing things faster. So technology may disrupt and may replace, but it will also create new jobs and new opportunities. Now, if the disruption happens in six months, there may be a problem. But if it takes five years, you'll have enough time to create new jobs and do new things, and on an aggregate basis there won't be crazy disruption.

Ronnie Chatterji

Rupa, do you want to follow up on this?

Roopa Purushothaman

Yeah, I was going to say, I think there are two areas for India in particular where we could see new, meaningful jobs being created. One has to do with the fact that we are in a very different situation compared to countries like the U.S. or regions like Europe. In places like that, you have efficient markets and a plethora of specialists, whether we're talking about doctors, lawyers, whatever it might be. In India, we don't have enough doctors per thousand people. We don't have enough nurses; that's an even more acute problem. Educators and so on. And the specialist resources we do have are stretched to the max. Some years ago we did some work looking at doctors at AIIMS, not far from here,

and we found that 50% of their time was spent on non-specialist work. With AI and the tools we now have, that work can move to a whole new set of workers, and that leads to two things. One, doctors can do what they should be doing, for more people. Two, you have a new class of workers that mediates the technology and also helps new patients navigate a system that is very difficult to navigate. And when people in rural India are experiencing health care for the first time, all of a sudden you need medicines to reach them and so on, so entire supply chains start. I've talked about health, but the same holds for education, financial services, logistics.

So I think there are tens of millions of jobs; we estimated about 30 million coming from these sorts of bridge workers. The second area is entrepreneurship, and there is something mind-boggling about the fact that literacy, and I advocate for literacy, is no longer the obstacle it was in the past, because of two things. When you have voice interfaces and local-language models, all of a sudden, and we've seen this in our social impact work, you have entrepreneurs who can understand price information more quickly, access markets, access financial resources. Things that you could once do only in urban systems can now happen elsewhere.

And if what we're seeing in our social impact work, which is still very nascent, really scales up, then entrepreneurship, the small and medium-sized businesses that now account for something like 10% of private sector employment, can move toward what we see in other places, which is closer to 40%. So for me, those are the two big opportunities.

Ronnie Chatterji

Iqbal, how about you?

Iqbal Dhaliwal

Yeah, thanks. I agree with everybody about the potential of AI to transform our lives for the positive, but I do want to put in a word of caution on the labor front. I agree that when computers came, people thought it was disruptive. But think about how expensive the first computers were: there was one massive computer in the office and five people would share it. The bosses got a laptop; everybody else got a desktop. Most of them were not connected to the internet.

Then we slowly connected them to the internet. I think the speed and the pace of AI is unprecedented. It's a general-purpose technology. The price point for the marginal user is very low. The penetration is incredible: think about it, every single one of you who has a smartphone in your pocket has AI in your pocket. That was not the case for computers. And finally, the multimodality of it: the fact that if I can't process text, I can process it as voice or as video. That is phenomenally different from all of those earlier technologies. So what I would say is the following.

I agree that in the medium and the long term, job markets will adjust. It's the pace; I just wish I had a dial that could slow things down, because at the speed at which this is going, the labor markets will have a very hard time. The second reason the labor markets are going to have a really hard time is that we are completely biased towards capital investment versus labor. This is true for the United States, with Social Security taxes, Medicare and the rest; in India, we have ESI, gratuity, provident fund, retirement benefits. And on the other hand, the government gives tax breaks on investment in capital.

So the playing field is not level for each one of us in the labor market competing against AI. I'm all for AI; it's going to be a productivity-enhancing, augmenting technology. But for it not to turn into an automation and human-replacement story, we need to dial down the speed, and we need to make sure the policy infrastructure keeps up with it.

Sanjiv Bikhchandani

Look, AI is now relentless. The genie is out of the bottle; you can't dial it down, and it's not going to slow down just because somebody said so. It's going to happen. Now, you can either do it or have it done to you. And what I tell people, individuals, is: worry about your own job, not about jobs in the system at the national level. Is your job safe, and what can you do to make sure it is safe, or to get a job, if you're a student? I go back to 1989. I had worked for three years before business school, I had just finished business school, and I joined a consumer company, working in the marketing department.

And yes, as Iqbal says, there were two computers and 15 people sharing them. The thing was that I was the only guy who was PC-literate, because I was the most recent graduate, the youngest. I had used computers in business school; the others had not. They were senior to me, they were my bosses, they were getting paid more than me, and they had more power than me, but they couldn't use a PC. I could. If somebody was getting sacked in that department, I was the last guy getting sacked, because at least for the first few months I was the only guy who could use a PC. My point is simple: AI platforms are easy to use and easy to learn.

For everybody, I'm saying: if you are a person in your company or in your department, or even a student, who knows how to use seven or eight or ten AI platforms, believe me, you're highly employable, because not everybody will learn them. If you learn them and are good at them, you'll be okay. So it's in your hands to protect your employment and your employability. Just learn AI.

Ronnie Chatterji

This is one of the things people say: your job is more likely to be taken by someone who knows AI better than you than by AI itself. Roopa, inside an organization, how do you help power users who are using it a lot, like the example Sanjiv gave of being the one PC user when no one else was, diffuse their best practices and their learning to the other folks? We see this in our data: there's a big spread between the power users and the median users in most organizations. We call it capability overhang. Do you see that at Tata, and how do you think about solving these kinds of issues to help more people learn how to use AI?

Roopa Purushothaman

I think for us, even working across the group and its different companies, this is something where a lot more collaboration helps. What we're working on are platforms for us to talk to each other about best practices, what works and what doesn't. So when we learn, for example, that a lot of our manufacturing businesses are using AI for safety on the shop floor, how could we use those best practices in other parts of our companies? Even just among ourselves and across companies, can we share what's working and what's not? We have companies, say a TCS, that see a broad cross-section of sectors and what's worked there.

So you learn that in sectors like life sciences you see huge changes in drug discovery, but across all sectors you see things like customer service and marketing really being changed by AI. I think right now, at this stage, having those conversations about what seems to be working on the ground is the most important thing, as we go through the very difficult process of taking legacy systems, lumbering systems, and getting data that sits in very different silos to even start talking to each other. We acknowledge that that process is still going to take a lot of time, but we can already see these new cases where it's actually taking hold.

Ronnie Chatterji

I just want to get Iqbal in here for a second, and then I'll come back. Iqbal, what parallels are there to the development literature, where we've found ways in education to teach people how to do new things, how to start businesses, how to learn sets of skills? Can we teach people AI?

Can global institutions teach people AI? Can J -PAL do work in that area? Because it seems like an analog to sort of working with people inside enterprises, but maybe a different challenge.

Iqbal Dhaliwal

Yeah, great question. I think we can. The question is, can we do it correctly, and who benefits from that teaching? Let me give you an example. We are well aware of the literature on business process outsourcing, where, once you provided AI tools, the lowest-performing call-center employees leveled up to the higher-skilled workers. A fantastic example of leveling of skills, which is a win-win. We did a study in Kenya where we provided AI tools, ChatGPT actually, to micro-entrepreneurs. Don't think of these as sophisticated users; think of grocery stores, a neighborhood stationery shop. And what we see, in these early stages of ChatGPT, is that the average treatment effects were zero.

So then you dig deeper into the data, and you see something super interesting. The top entrepreneurs, the ones who were already performing well, take ChatGPT and do really well, because they run with it: they understand how to use it and what to do with the recommendations that come out of it. On the other hand, for those who were lower-performing, ChatGPT will give recommendations. They will say, nobody's coming, the demand for my product is low; and it goes to Econ 101 and says maybe you want to think about lowering your prices, maybe you want to think about increasing your marketing budget.

But then: how much should I lower the price? Should I lower it today? Should I lower it just for Diwali? You don't know. So I think you raise a really important question. There are going to be some folks who take these tools and run with them, and some folks who are going to need a little bit of hand-holding. And I think you're absolutely right that we can do a much better job of helping them integrate these tools. This can happen through the old models of teaching, but I actually think the tool is now so powerful that it can teach people these nuances itself.

Ronnie Chatterji

And we're releasing products to do that; we have a new jobs and certification platform coming that's connected to this. Sanjiv, thanks for being patient.

Sanjiv Bikhchandani

I want to give a couple of examples of real things in our office. We also invest in startups; we've invested in about 130 to 140 startups now. Every month, every quarter, the MIS reports come in from these startups. Now, we've got very smart people in the investing team and the portfolio management team, all MBAs and CAs, so they know their stuff. But you just put it into ChatGPT: first you do the analysis yourself, and ChatGPT supplements it. Then you ask, okay, have I missed something? Is there a perspective here? It's helping them do their jobs better and catching things they might have missed. So it's enhancing productivity.

And you can do many more analyses, because you can ask ChatGPT 100 questions where you might only ask yourself 10. The second example: our marketing team at Jeevansathi, a matrimony site, decided around Father's Day to produce a film on the father-daughter relationship, what my father means to me, and so on. Under normal circumstances, this film would not have been made; it would have taken six weeks and 60 lakhs, so it would not have been done, and life would have gone on. But it took about two days, using AI, for novices who had never used the platform before, and now they say they can do it in three hours: a film made purely with AI and put out on digital media. And it worked.

It was a big hit. This is stuff that would not have been done before. Another example: there’s a surfeit of content. There are so many podcasts, so many interviews; this panel will also go to YouTube. So, you know, I can’t follow all of them. I’d love to, but I can’t. I just get a summary of a video from AI; I can do it in about three minutes. So I’m doing stuff I would not have done otherwise, and I would not have employed somebody for it. Nobody’s lost their job. Another example: Naukri has about 130,000 to 150,000 clients. For the top 20%, there’s a sales team that calls on them. For the next 30%, there’s a tele-sales team that calls on them.

The bottom 50% you don’t interface with, because they don’t pay you enough; the sales channel is not worth it. Now we’ve got voice bots calling the bottom 50%, so we are serving an underserved market. Nobody’s lost a job yet. Now, I’m not saying it won’t happen. Maybe it will; I don’t know. But thus far it hasn’t happened, and life is going on, and every quarter Naukri is still growing. We are worried, we are concerned, we are apprehensive, given the noise in the system about job losses. But it hasn’t happened yet. We’re taking it quarter on quarter, keeping our fingers crossed and hoping and praying it doesn’t happen. I don’t have the answers.

Ronnie Chatterji

No, no. Look, none of us have the answers, I think. And I’ll close the panel on this. I did promise the team I would end on time; I could talk to these guys for about three hours, just to let you know. So none of us have the answers, right? At the end of the day, from our vantage points, we’re trying to solve these questions. Sanjiv, you did a great job, as an investor who’s also building things, of explaining what you think is happening. And you making the movie, I think, is a good example of how you can do things you never did before with AI. Roopa, you’re in a large conglomerate with lots of different businesses, lots of different exciting things going on.

How AI diffuses across the organization, I think that’s something all of us should watch. It’s not easy for large organizations to adopt AI and implement it, and the ones that do it, I think, are going to be advantaged. And Iqbal, I think you leave us all something to think about, which is if we’re going to educate the world on AI, if we’re going to democratize AI, we’ve got to make sure we do it well. We can’t just talk about it. And I hope in all these cases, enterprise adoption, learning and teaching AI, and helping the cutting edge, that OpenAI can be your partner. So with that, I want to thank our amazing panelists and thank everyone in the audience.

I’ve got one last thing. We’re done, I’m so sorry, but you’ve got to hear this. They’re giving me this. Yeah, you’ve got to hear this. I’ve got colleagues coming and negotiating salary with me: “I’ve checked on ChatGPT. I’m paid 40% less than I should be.” And he’s saving money on salaries too. I love it. Thank you. Thank you.

Kavita Gunjikannan

Thank you so much, Ronnie. Thanks, Sanjiv. Thanks, Roopa. And thanks, Iqbal. We do have more sessions coming up, so I’d request everyone to stay back until we complete them. I’m Kavita Gunjikannan from the Global Affairs team at OpenAI. We want to take a moment to celebrate a few education partnerships that we announced just yesterday.

Related Resources: knowledge base sources related to the discussion topics (21)
Factual Notes: claims verified against the Diplo knowledge base (4)
Confirmed (high confidence)

“Martin Chungong opened the summit and warned that AI systems now decide who receives public services, who qualifies for a loan and even who is placed under surveillance, shaping individual lives and the democratic information environment.”

The knowledge base records Martin Chungong, Secretary-General of the Inter-Parliamentary Union, speaking on the impact of AI and the theme of the session “AI for Democracy” [S1] and [S97].

Confirmed (high confidence)

“Parliaments are pivotal to ensuring coherence between domestic legislation, established human rights, and evolving AI standards, and to holding governments accountable for AI governance.”

IPU statements describe parliaments as essential for aligning legislation, human-rights norms and AI policy, and as the cornerstone of democratic accountability [S25] and [S15].

Additional Context (medium confidence)

“At the inter‑parliamentary conference on responsible AI in Malaysia, MPs highlighted an Amsterdam traffic‑management algorithm that unintentionally routed congestion through low‑income neighbourhoods because those communities lacked political influence.”

While the knowledge base does not mention the specific Amsterdam case, it shows that Malaysian parliamentarians were actively participating in AI-related discussions at international forums, indicating the broader context of Malaysian engagement on AI impacts [S99].

Additional Context (medium confidence)

“The parliamentary community in Malaysia expressed opposition to concentration of power, called for red‑line safeguards, demanded an equal voice for the Global South and urged all parliaments to engage actively in AI governance.”

Global-South delegations, including representatives from Malaysia, have called for greater representation and safeguards in multilateral AI governance, providing contextual support for the claim [S100] and [S99].

External Sources (105)
S1
Impact & the Role of AI How Artificial Intelligence Is Changing Everything — -Sanjiv Bikhchandani- Founder of InfoEdge (Naukri.com)
S2
Impact & the Role of AI How Artificial Intelligence Is Changing Everything — -Roopa Purushothaman- Chief Economist and Head of Policy Advocacy at Tata Sons
S3
New Development Actors for the 21st Century / DAVOS 2025 — – Iqbal Dhaliwal – Global Director of J-PAL at MIT
S4
Impact & the Role of AI How Artificial Intelligence Is Changing Everything — – Iqbal Dhaliwal- Ronnie Chatterji – Iqbal Dhaliwal- Sanjiv Bikhchandani
S5
Impact & the Role of AI How Artificial Intelligence Is Changing Everything — “I’m Kavita Gunjikannan from the Global Affairs team at OpenAI.”[85]. “We want to take a moment to celebrate a few educa…
S6
S7
OpenAI economist shares four key skills for kids in AI era — As AI reshapes jobs and daily life, OpenAI’s chief economist, Ronnie Chatterji, teaches his children four core skills to h…
S8
High-Level Dialogue: The role of parliaments in shaping our digital future — – **Doreen Bogdan-Martin** – Role/Title: Secretary-General of ITU (International Telecommunication Union) – **Martin Ch…
S9
World e-parliament report 2018 — ## World e-Parliament Report 2018 © Inter-Parliamentary Union, 2018 For personal and non-commercial use, all or parts…
S10
IGF Parliamentary track — – Martin Chungong: Secretary General of Inter-Parliamentary Union (IPU)
S11
S12
https://dig.watch/event/india-ai-impact-summit-2026/the-global-power-shift-indias-rise-in-ai-semiconductors — Joining us is Professor Vivek Kumar Singh, Senior… advisor on science and technology at NITI IO. Professor Singh plays…
S13
The Global Power Shift India’s Rise in AI & Semiconductors — -Rahul Garg: Founder and CEO of Moglix; built one of India’s leading industrial supply chain platforms, expanded into ma…
S14
Impact & the Role of AI How Artificial Intelligence Is Changing Everything — -Lord Krish Ravel- Member of House of Lords, devout member of the Gayatri Parivar
S15
AI for Democracy_ Reimagining Governance in the Age of Intelligence — -Lord Rawal: Member of House of Lords, devout member of Gayatri Parivar – expertise in British parliamentary system and …
S16
Impact & the Role of AI How Artificial Intelligence Is Changing Everything — -Dr. Fadi Dao- Chairman of Globe Ethics (organization based in Geneva)
S17
Inclusive AI For A Better World, Through Cross-Cultural And Multi-Generational Dialogue — Diana Nyakundi:Thanks, Fadi. Good morning, everyone. I am Diana Nyakundi. I am based in Nairobi, Kenya. I work as a seni…
S18
AI for Democracy_ Reimagining Governance in the Age of Intelligence — – Dr. Chinmay Pandya- Martin Chunggong
S19
AI for Democracy_ Reimagining Governance in the Age of Intelligence — -Om Birla: Speaker of Parliament of India (Lok Sabha) – expertise in parliamentary procedures and democratic governance …
S20
Impact & the Role of AI How Artificial Intelligence Is Changing Everything — -Om Birla- Speaker of Parliament of India (Lok Sabha)
S21
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Ananya Birla Birla AI Labs — -President Obama: Role/Title: Former U.S. President; Area of expertise: Politics, governance (mentioned in reference to …
S22
Open Forum #75 Shaping Global AI Governance Through Multistakeholder Action — – Concentration of power without adequate oversight Ernst Noorman: Thank you very much, Zach, and thank you, Rasmus, fo…
S23
Generative AI: Steam Engine of the Fourth Industrial Revolution? — Continuous learning is necessary in order to adapt to the rapidly evolving technological landscape. The half-life of ski…
S24
Why science metters in global AI governance — The conversation highlighted the unique challenges of governing AI technology, particularly its rapid pace of developmen…
S25
https://dig.watch/event/india-ai-impact-summit-2026/impact-the-role-of-ai-how-artificial-intelligence-is-changing-everything — And that is why a student who studies in the political fields of spiritual, religion and culture can use AI technology w…
S26
Keynote_ 2030 – The Rise of an AI Storytelling Civilization _ India AI Impact Summit — India’s advantages in this transformation include demographic energy, linguistic complexity, cultural depth spanning tho…
S27
HETEROGENEOUS COMPUTE FOR DEMOCRATIZING ACCESS TO AI — This comment provides crucial context about India’s position in the global AI ecosystem, distinguishing between applicat…
S28
Need and Impact of Full Stack Sovereign AI by CoRover BharatGPT — Amish points out that most global AI models operate in English, making Indian‑language capability crucial for the countr…
S29
WS #100 Integrating the Global South in Global AI Governance — Fadi Salim: Great, thanks. Any other comments on this? OK, we can move on. You have one? Fadi Salim: Great. Thank …
S30
Safe and Responsible AI at Scale Practical Pathways — It should be located at every organization and governed locally by the organizations that are… using it. And so one of…
S31
AI governance struggles to match rapid adoption — Accelerating AI adoption is exposing clear weaknesses in corporate AI governance. Research shows that while most organisat…
S32
Leveraging the UN system to advance global AI Governance efforts — Gilbert Houngbo highlights the imperative role of the United Nations in spearheading global coordination efforts, thereb…
S33
WS #97 Interoperability of AI Governance: Scope and Mechanism — Rapid technological advancement poses challenges for governance frameworks to keep pace
S34
AI adoption reshapes UK scale-up hiring policy framework — AI adoption is prompting UK scale-ups to recalibrate workforce policies. Survey data indicates that 33% of founders antici…
S35
How AI Drives Innovation and Economic Growth — <strong>Jeanette Rodrigues:</strong> all around the Bharat Mandapam. So once again, thank you very much for your time th…
S36
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Panel Discussion Moderator Sidharth Madaan — Sanjeev Bikhchandani provided important counterbalance, noting that business fundamentally involves managing people, bui…
S37
Sticking with Start-ups / DAVOS 2025 — – Hemant Taneja- Arvind Jain- Kate Ryder Bhatnagar explains how AI is transforming content creation and enabling new bu…
S38
A Digital Future for All (afternoon sessions) — AI is enabling economic progress and entrepreneurship, especially in emerging markets. It can boost productivity across …
S39
Ethics and AI | Part 5 — Concerned that certain activities within the lifecycle of artificial intelligence systems may undermine human dignity an…
S40
Networking Session #60 Risk & impact assessment of AI on human rights & democracy — Heramb Podar: Yes, I am. I hope you can hear me. For the opportunity to speak, CAIDP has been, indeed, a very vocal ad…
S41
GermanAsian AI Partnerships Driving Talent Innovation the Future — He acknowledges that every new technology causes disruption, underscoring the need for proactive policy and skill develo…
S42
The Impact of Digitalisation and AI on Employment Quality – Challenges and Opportunities — – Employment policies should be interwoven with education, addressing both labour market demand and supply. – The impera…
S43
Artificial intelligence — The disruptions that AI systems could bring to the labour market are another source of concern. Many studies estimate th…
S44
Trade regulations in the digital environment: Is there a gender component? (UNCTAD) — In conclusion, the analysis reinforces the potential of digitalisation and emerging technologies, such as artificial int…
S45
Séance d’ouverture : « La gouvernance internationale du numérique et de l’IA : à la croisée des chemins ? » — Lacina Koné Well, thank you so much for having invited us. Thank you. A lot has already been said since the introduction…
S46
Interdisciplinary approaches — AI-related issues are being discussed in various international spaces. In addition to the EU, OECD, and UNESCO, organisa…
S47
Searching for Standards: The Global Competition to Govern AI | IGF 2023 — In conclusion, the UNESCO recommendation on AI ethics provides crucial guidance for global AI governance. By grounding A…
S48
Evolving AI, evolving governance: from principles to action | IGF 2023 WS #196 — AI is a general-purpose technology that holds the potential to increase productivity and build impactful solutions acros…
S49
Hard power of AI — In conclusion, the analysis provides insights into the dynamic relationship between technology, politics, and AI. It hig…
S50
Why science metters in global AI governance — The conversation highlighted the unique challenges of governing AI technology, particularly its rapid pace of developmen…
S51
How to make AI governance fit for purpose? — Legal and regulatory | Development The AI revolution is fundamentally challenging the governance structures as we know …
S52
Panel Discussion: Europe’s AI Governance Strategy in the Face of Global Competition — – Alexander E. Brunner- Enzo Maria Le Fevre Cervini While disagreeing that governance is dead, Curioni acknowledges tha…
S53
Practical Toolkits for AI Risk Mitigation for Businesses — AI technology has the potential to bring both positive and negative impacts. On the positive side, it can create job opp…
S54
How AI Drives Innovation and Economic Growth — Rodrigues emphasizes that while early AI discussions were dominated by fear about job displacement and technological thr…
S55
AI for Democracy_ Reimagining Governance in the Age of Intelligence — Authorities and independent media will lag behind while malicious actors remain behind. one step ahead. Accountability w…
S56
Ethics and AI | Part 5 — Recognizing that activities within the lifecycle of artificial intelligence systems may offer unprecedented opportunitie…
S57
Building the Workforce_ AI for Viksit Bharat 2047 — Responsibility is to carve out trust -based collaborative ethical frameworks so that the demands of fast -paced dynamic …
S58
Impact & the Role of AI How Artificial Intelligence Is Changing Everything — That is why we must frame this not simply as technology policy, but as democratic governance. The choices made today abo…
S59
Interdisciplinary approaches — AI-related issues are being discussed in various international spaces. In addition to the EU, OECD, and UNESCO, organisa…
S60
Evolving AI, evolving governance: from principles to action | IGF 2023 WS #196 — AI is a general-purpose technology that holds the potential to increase productivity and build impactful solutions acros…
S61
Global AI Policy Framework: International Cooperation and Historical Perspectives — Given your role in leading AI policy at United Nations Office for Digital and Emerging Technologies, what are the AI pri…
S62
Artificial intelligence — The disruptions that AI systems could bring to the labour market are another source of concern. Many studies estimate th…
S63
Comprehensive Report: AI’s Impact on the Future of Work – Davos 2026 Panel Discussion — Bhan argues that AI’s impact on jobs cannot be viewed in isolation but must be considered alongside broader economic dis…
S64
AI for Social Empowerment_ Driving Change and Inclusion — Add to that the fact that, like, there’s all these different forces coming that we don’t know, you know, if AI disrupts …
S65
Engineering Accountable AI Agents in a Global Arms Race: A Panel Discussion Report — Panellists offered different outlooks on employment implications. Rees-Jones maintained optimism about AI tutoring enhan…
S66
Capacity development — If you really want to be good at something, you need to understand the issues at hand thoroughly. You need to be able to…
S67
Workshop 6: Perception of AI Tools in Business Operations: Building Trustworthy and Rights-Respecting Technologies — Skills development and training programs are crucial for both internal and external stakeholders
S68
WS #279 AI: Guardian for Critical Infrastructure in Developing World — Daniel Lohrman: I think they really lead into that really well. And I think that this is a huge challenge. I would ju…
S69
WSIS Action Line C2 Information and communication infrastructure — Development | Sociocultural It’s essential to ensure that people working in public authorities have the necessary knowl…
S70
WS #75 An Open and Democratic Internet in the Digitization Era — Nur Adlin: Can you hear me? Okay. Okay. Assalamualaikum warahmatullahi wabarakatuh. Good day. Good day. Good afte…
S71
Open Forum #17 AI Regulation Insights From Parliaments — Balancing Innovation and Regulation Detailed vs. framework-based regulation Need for flexible regulatory frameworks ra…
S72
Ateliers : rapports restitution et séance de clôture — Joseph Nkalwo Ngoula Merci. C’est toujours difficile de restituer la parole d’experts de haut vol. sans courir le risque…
S73
Empowering India & the Global South Through AI Literacy — The programme has been implemented across multiple states, with specific mentions of Odisha, Kerala, and Jharkhand. The …
S74
Inclusive AI_ Why Linguistic Diversity Matters — The conversation expanded to broader themes of cultural preservation, data sovereignty, and the balance between open-sou…
S75
Ministerial Roundtable — Artificial Intelligence and Emerging Technologies Careful understanding of opportunities for cultural and language aspe…
S76
Open Forum #75 Shaping Global AI Governance Through Multistakeholder Action — **Devine Salese Agbeti**, Director General of the Cyber Security Authority of Ghana, provided perspective on bidirection…
S77
Opening Plenary: Working Together for a Human-Centred Digital Future – Parliamentary Cooperation for Democratic Digital Governance — Marijana Puljak: Thank you so much. I like to start with the sentence that is behind you that says safeguarding human ri…
S78
UN High Commissioner urges human rights-centric approach to mitigate risks in AI development — While AI holds transformative potential for solving critical issues like curing cancer and addressing global warming, it…
S79
Impact & the Role of AI How Artificial Intelligence Is Changing Everything — “When the systems that govern aspects of people’s daily lives, their access to information services and economic opportu…
S80
9821st meeting — Humans must always retain control over decision-making functions guided by international law, including international hu…
S81
AI for Democracy_ Reimagining Governance in the Age of Intelligence — Because, while using technology, if we do not use all the technology, then its direction can also be wrong. And that is …
S82
AI Collaboration Across Borders_ India–Israel Innovation Roundtable — This comment fundamentally reframed the conversation’s conclusion, moving from practical collaboration discussions to de…
S83
Bridging the Digital Skills Gap: Strategies for Reskilling and Upskilling in a Changing World — ## Labour Market Transformation and Decent Work Celeste Drake: Thank you very much, Chair. And I want to begin by thank…
S84
Comprehensive Discussion Report: AI’s Transformative Potential for Global Economic Growth — Fink acknowledged that while some jobs may be displaced, new opportunities are simultaneously created. Both speakers agr…
S85
GermanAsian AI Partnerships Driving Talent Innovation the Future — He acknowledges that every new technology causes disruption, underscoring the need for proactive policy and skill develo…
S86
AI for Social Empowerment_ Driving Change and Inclusion — But AI is one really big disruption that comes on top of all the other disruptions, and there’s already plenty of eviden…
S87
Generative AI is enhancing employment opportunities and shaping job quality, says ILO report — A new study conducted by the International Labour Organization (ILO) investigates the consequences of Generative AI on t…
S88
Séance d’ouverture : « La gouvernance internationale du numérique et de l’IA : à la croisée des chemins ? » — Lacina Koné Well, thank you so much for having invited us. Thank you. A lot has already been said since the introduction…
S89
How AI Drives Innovation and Economic Growth — Artificial intelligence | Social and economic development
S90
A Digital Future for All (afternoon sessions) — AI has the potential to accelerate progress on the UN Sustainable Development Goals. It can be applied to benefit humani…
S91
Keynote-Dario Amodei — Social and economic development | Artificial intelligence | Information and communication technologies for development
S92
Interdisciplinary approaches — AI-related issues are being discussed in various international spaces. In addition to the EU, OECD, and UNESCO, organisa…
S93
Evolving AI, evolving governance: from principles to action | IGF 2023 WS #196 — AI is a general-purpose technology that holds the potential to increase productivity and build impactful solutions acros…
S94
Setting the Rules_ Global AI Standards for Growth and Governance — And it’s going to have to be a collective effort. Yeah. Okay. Key areas of convergence included the importance of proce…
S95
Open Forum #82 Catalyzing Equitable AI Impact the Role of International Cooperation — Multi-stakeholder cooperation and inclusive governance frameworks are essential
S96
How to make AI governance fit for purpose? — All speakers recognize that AI’s global nature requires international cooperation and coordination, though they may diff…
S97
https://dig.watch/event/india-ai-impact-summit-2026/ai-for-democracy_-reimagining-governance-in-the-age-of-intelligence — I say this because the theme of this session, AI for Democracy, cuts to the heart of the matter. We are not simply debat…
S98
#205 L&amp;A Launch of the Global CyberPeace index — Marlena Wisniak: Yeah, thanks so much Vinit. And I’ll keep it short because I know we’re running out of time. Congrats o…
S99
Open Forum #13 Bridging the Digital Divide Focus on the Global South — Chern Choong Thum: And good morning from Malaysia. It is a great honour to be here, not only as a representative of the …
S100
UNSC meeting: Multilateral cooperation for peace and security — A recurring theme throughout the debate was the need for more inclusive and equitable multilateralism. Developing countr…
S101
Digital democracy and future realities | IGF 2023 WS #476 — However, the analysis also highlights the positive aspects of platforms like Wikipedia. It underscores that Wikipedia is…
S102
(Interactive Dialogue 1) Summit of the Future – General Assembly, 79th session — Chair: I thank Minister of National Planning and Development, Coordination of Solomon Islands. I give the floor to the…
S103
Day 0 Event #83 Empowering Afghan Women: Bridging Digital Gaps for Education — Abdul Ghayoor Bawary: hear us well. Over to Leymah first, because Leymah is my co-moderator and she’ll be doing the re…
S104
WS #189 AI Regulation Unveiled: Global Pioneering for a Safer World — Auke Pals: Thank you very much for your great info. And yeah, we really see the importance of civil society being acti…
S105
[Parliamentary Session 5] Collaborative approaches to address online harms — David Alamos: Thank you very much. I don’t know if we have any other questions. Yes, I think there is one more. And …
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Martin Chungong
1 argument · 96 words per minute · 936 words · 584 seconds
Argument 1
Concentration of AI power threatens democracy
EXPLANATION
Martin warns that a small number of technology corporations now control vast AI capabilities, concentrating benefits while the costs fall on the most vulnerable. This concentration undermines democratic oversight and strains the social contract.
EVIDENCE
He cites an automated traffic management system in Amsterdam that routed congestion through low-income neighborhoods because the algorithm learned those communities lacked political influence, illustrating how AI can reinforce existing inequities [4]. He also notes that a handful of corporations have market capitalisations exceeding entire national equity markets, while workers in the Global South receive minimal pay for data annotation, showing the imbalance of power and benefit distribution [7-13].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The concentration of AI capabilities in a few corporations, whose market capitalisations exceed those of whole nations, and the minimal compensation for data annotators in the Global South are highlighted as democratic risks in [S15] and [S1]; concerns about lack of oversight are also noted in [S22].
MAJOR DISCUSSION POINT
Concentration of AI power threatens democracy
AGREED WITH
Dr. Fadi Dao, Om Birla
DISAGREED WITH
Om Birla
Lord Krish Ravel
1 argument · 126 words per minute · 115 words · 54 seconds
Argument 1
Adaptability to rapid AI change is essential
EXPLANATION
Lord Ravel stresses that the speed of AI advancement creates uncertainty for citizens, and political leaders must be prepared to manage that change. Adaptability is presented as a core value that can help contain public unease.
EVIDENCE
He explains that adaptability to change is a tenet of the Gayatri Parivar and argues that the rapid pace of AI can unsettle people, requiring politicians to contain that uncertainty and leverage preparedness for change as a cardinal value [154-159].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Lord Ravel’s emphasis on adaptability is reflected in the discussion of his role and the need for continuous learning due to the rapid pace of AI, with skill half-life dropping to seven years [S1][S23].
MAJOR DISCUSSION POINT
Adaptability to rapid AI change is essential
AGREED WITH
Iqbal Dhaliwal, Sanjiv Bikhchandani
Om Birla
1 argument · 113 words per minute · 1924 words · 1016 seconds
Argument 1
Integrating AI with Indian spiritual/cultural values and building a unified digital parliamentary platform
EXPLANATION
Om Birla outlines a vision where AI development is guided by India’s spiritual and cultural heritage, while also creating a single digital platform for all parliamentary activities. He argues that this integration will showcase Indian values globally and improve governance efficiency.
EVIDENCE
He repeatedly references the Vishwa Gayatri Parivar, Dev Sanskriti Vishwavidyalaya, and Vedic values as the cultural foundation for AI use, and describes a digital assembly that will host all Lok Sabha and Rajya Sabha proceedings on one platform by 2026, making the parliament paperless and searchable via AI-driven metadata [68-70][100-108][109-110].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
India’s approach of blending ethical and cultural values with AI development is described, noting the country’s demographic energy, linguistic complexity and cultural depth as a foundation for AI policy [S15][S26][S27].
MAJOR DISCUSSION POINT
Integrating AI with Indian spiritual/cultural values and building a unified digital parliamentary platform
AGREED WITH
Dr. Chinmay Pandya
DISAGREED WITH
Martin Chungong
Dr. Chinmay Pandya
1 argument · 27 words per minute · 509 words · 1121 seconds
Argument 1
India’s linguistic and cultural diversity should guide AI democratization
EXPLANATION
Dr. Pandya highlights India’s vast linguistic and cultural mosaic, arguing that AI policies must reflect this diversity to ensure inclusive democratization. He calls for a path forward that respects the principle of Vasudhaiva Kutumbakam.
EVIDENCE
He notes that India has 27 official languages, 19,500 dialects, and over 400 documented cultures, and asks how the summit should proceed in light of these realities, invoking the cultural ethos of Vasudhaiva Kutumbakam [142-146].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The importance of multilingual AI for India’s 27 official languages and 19,500 dialects is underscored, emphasizing sovereign AI capabilities that respect linguistic diversity [S28][S26].
MAJOR DISCUSSION POINT
India’s linguistic and cultural diversity should guide AI democratization
AGREED WITH
Om Birla
Dr. Fadi Dao
1 argument · 147 words per minute · 101 words · 41 seconds
Argument 1
AI should be leveraged for social empowerment and participation, not domination
EXPLANATION
Dr. Dao asserts that AI must serve to empower societies and enable broad participation rather than become a tool for manipulation or control. He commits his organization to building on the summit’s outcomes toward this goal.
EVIDENCE
He states that the purpose of AI democratization is not people’s manipulation or domination, but social empowerment and participation, and pledges Globe Ethics’ commitment to capitalize on the summit’s outcomes for the 2027 Geneva summit [147-149].
MAJOR DISCUSSION POINT
AI should be leveraged for social empowerment and participation, not domination
AGREED WITH
Martin Chungong, Om Birla
Roopa Purushothaman
2 arguments · 192 words per minute · 842 words · 262 seconds
Argument 1
AI can generate millions of new jobs in health, education, and entrepreneurship
EXPLANATION
Roopa explains that AI can free specialists from routine tasks, creating new roles for mediators and enabling large‑scale entrepreneurship, especially in underserved sectors like health and education. She estimates tens of millions of jobs could emerge.
EVIDENCE
She describes how AI can shift 50% of doctors’ time from non-specialist work to patient care, creating a new class of workers and estimating about 30 million jobs across health, education, logistics, and finance [262-280]. She adds that voice-activated local-language models can empower entrepreneurs to access markets and finance, potentially raising SME employment from 10% to 40% of private-sector jobs [281-285].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
AI-enabled entrepreneurship, voice-activated local-language models and estimates of tens of millions of new jobs in health, education, logistics and finance are documented in [S1][S25][S38].
MAJOR DISCUSSION POINT
AI can generate millions of new jobs in health, education, and entrepreneurship
AGREED WITH
Sanjiv Bikhchandani, Iqbal Dhaliwal
DISAGREED WITH
Iqbal Dhaliwal, Sanjiv Bikhchandani
Argument 2
Large organisations need to share AI best practices across units to accelerate adoption
EXPLANATION
Roopa argues that within conglomerates, sharing successful AI use‑cases across business units can speed up diffusion and avoid duplicated effort. Collaborative platforms are needed to capture and disseminate lessons learned.
EVIDENCE
She notes that Tata is building platforms for cross-company dialogue on AI best practices, citing examples such as AI safety on shop floors being shared with other units, and the challenge of integrating legacy data silos across sectors [347-354].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Calls for internal knowledge-sharing platforms and the need for mature corporate AI governance are discussed, highlighting open-source stacks for organisations and gaps in current governance practices [S30][S31][S33].
MAJOR DISCUSSION POINT
Large organisations need to share AI best practices across units to accelerate adoption
AGREED WITH
Ronnie Chatterji, Sanjiv Bikhchandani, Iqbal Dhaliwal
Iqbal Dhaliwal
1 argument · 196 words per minute · 1115 words · 341 seconds
Argument 1
AI’s rapid, low‑cost diffusion creates labor‑market challenges; policy must keep pace
EXPLANATION
Iqbal points out that AI’s unprecedented speed, low price point, and multimodal capabilities outpace labor‑market adjustment, creating a mismatch that policy must address. He calls for slower rollout and stronger policy infrastructure to protect workers.
EVIDENCE
He describes AI’s low marginal cost, ubiquitous presence on smartphones, and multimodality as unprecedented, noting that labor markets will struggle to keep up and that capital-biased policies exacerbate the issue, urging a “dial-down” of AI speed and better policy support [287-306][307-320].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The unprecedented speed and low marginal cost of AI, combined with a shrinking skill half-life, create labor-market mismatches that outpace policy frameworks, as noted in [S23][S33][S34].
MAJOR DISCUSSION POINT
AI’s rapid, low‑cost diffusion creates labor‑market challenges; policy must keep pace
AGREED WITH
Roopa Purushothaman, Sanjiv Bikhchandani
DISAGREED WITH
Roopa Purushothaman, Sanjiv Bikhchandani
Sanjiv Bikhchandani
2 arguments · 176 words per minute · 1223 words · 416 seconds
Argument 1
Individuals must acquire AI skills to remain employable
EXPLANATION
Sanjiv stresses that AI adoption is inevitable and that personal employability now hinges on learning AI tools. He shares his own experience of being the only PC‑literate employee as a parallel to today’s AI literacy gap.
EVIDENCE
He recounts the history of early computers, how being the only PC-literate worker made him indispensable, and argues that today anyone who masters AI platforms will be highly employable, urging people to “just learn AI” [321-340].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The necessity for continuous upskilling in AI is reinforced by observations that skill half-life has dropped dramatically, making AI literacy essential for employability [S23].
MAJOR DISCUSSION POINT
Individuals must acquire AI skills to remain employable
AGREED WITH
Lord Krish Ravel, Iqbal Dhaliwal
DISAGREED WITH
Roopa Purushothaman, Iqbal Dhaliwal
Argument 2
AI tools boost productivity, enable rapid content creation, and open new business models
EXPLANATION
Sanjiv provides concrete examples of how AI accelerates analysis, content production, and customer outreach, leading to new efficiencies without job losses. He highlights AI‑driven investments, rapid film production, and voice‑bot outreach as proof points.
EVIDENCE
He describes using ChatGPT to supplement investment analysis, creating a film in two days that would have taken six weeks and 60 lakhs, summarizing videos in minutes, and deploying voice bots to serve underserved customers, all of which increased productivity while preserving jobs [393-424].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Evidence of AI accelerating analysis, content production, and new business models, while preserving jobs, is provided in business-focused discussions of productivity gains and innovation [S37][S38][S35].
MAJOR DISCUSSION POINT
AI tools boost productivity, enable rapid content creation, and open new business models
AGREED WITH
Roopa Purushothaman, Iqbal Dhaliwal
Ronnie Chatterji
2 arguments · 218 words per minute · 968 words · 266 seconds
Argument 1
AI can democratize early childhood education and improve learning outcomes
EXPLANATION
Ronnie highlights the potential of AI to scale early‑childhood education by delivering personalized content to teachers and parents, thereby overcoming previous barriers to widespread adoption.
MAJOR DISCUSSION POINT
AI can democratize early childhood education and improve learning outcomes
Argument 2
OpenAI is developing certification platforms to teach AI skills
EXPLANATION
Ronnie announces that OpenAI is launching a new jobs and certification platform designed to help individuals acquire AI competencies, supporting workforce readiness.
EVIDENCE
He mentions that OpenAI is releasing a new jobs and certification platform connected to AI skill development [390-392].
MAJOR DISCUSSION POINT
OpenAI is developing certification platforms to teach AI skills
AGREED WITH
Sanjiv Bikhchandani, Iqbal Dhaliwal, Roopa Purushothaman
Anmol Garg
1 argument · 118 words per minute · 109 words · 54 seconds
Argument 1
OpenAI is expanding its presence in India through partnerships and collaboration
EXPLANATION
Anmol outlines OpenAI’s strategy to deepen its engagement in India by showcasing its leadership team, highlighting work in education, social impact, and expressing excitement about building together with Indian partners.
EVIDENCE
He introduces the OpenAI chief economist and chief of global affairs, notes work in education and social impact, and expresses enthusiasm for continued collaboration and building in India [179-187].
MAJOR DISCUSSION POINT
OpenAI is expanding its presence in India through partnerships and collaboration
Kavita Gunjikannan
1 argument · 156 words per minute · 57 words · 21 seconds
Argument 1
OpenAI announced new education partnerships to broaden AI learning
EXPLANATION
Kavita celebrates recent education partnerships announced by OpenAI, indicating a commitment to expanding AI learning opportunities globally.
EVIDENCE
She thanks the audience, introduces herself from OpenAI’s Global Affairs team, and announces that OpenAI has just announced several education partnerships the previous day [471-476].
MAJOR DISCUSSION POINT
OpenAI announced new education partnerships to broaden AI learning
Agreements
Agreement Points
Democratic and ethical governance of AI is essential to protect democracy and ensure inclusive benefits
Speakers: Martin Chungong, Dr. Fadi Dao, Dr. Chinmay Pandeya
Concentration of AI power threatens democracy
AI should be leveraged for social empowerment and participation, not domination
India’s linguistic and cultural diversity should guide AI democratization
All three speakers stress that AI must be governed democratically and ethically so that it empowers people rather than concentrates power, emphasizing inclusive participation and respect for cultural diversity [12-16][21-22][147-149][142-146].
POLICY CONTEXT (KNOWLEDGE BASE)
This aligns with calls for democratic AI oversight highlighted in AI for Democracy reports warning of accountability gaps [S55] and the ethical AI principles emphasizing protection of human rights and rule of law [S56]; it also reflects discussions on adapting governance structures to keep pace with AI at the IGF 2023 workshop [S51][S52].
Capacity development and AI skill acquisition are critical for individuals, organisations and societies
Speakers: Ronnie Chatterji, Sanjiv Bikhchandani, Iqbal Dhaliwal, Roopa Purushothaman
OpenAI is developing certification platforms to teach AI skills
Individuals must acquire AI skills to remain employable
AI’s rapid, low‑cost diffusion creates labor‑market challenges; policy must keep pace
Large organisations need to share AI best practices across units to accelerate adoption
The speakers agree that building AI competencies through training, certification, and internal knowledge-sharing is essential to keep pace with AI’s rapid diffusion and to mitigate labour market risks [390-392][321-340][287-306][307-320][347-354].
POLICY CONTEXT (KNOWLEDGE BASE)
Capacity-building is identified as a cornerstone of AI policy in UN-UNESCO recommendations and national AI literacy programmes, e.g., India’s multilingual AI literacy initiative [S73] and broader capacity development frameworks stressing skills for public sector actors [S66][S69].
Adaptability and preparedness are needed to manage the rapid pace of AI change
Speakers: Lord Krish Ravel, Iqbal Dhaliwal, Sanjiv Bikhchandani
Adaptability to rapid AI change is essential
AI’s rapid, low‑cost diffusion creates labor‑market challenges; policy must keep pace
Individuals must acquire AI skills to remain employable
These speakers highlight that the speed of AI development creates uncertainty, requiring political, policy and personal adaptability, as well as proactive skill-building [154-159][287-306][321-340].
POLICY CONTEXT (KNOWLEDGE BASE)
Multiple analyses note the gap between fast-moving AI technology and slower policy cycles, urging adaptable regulatory mechanisms [S49][S58] and flexible, three-month legislative windows for AI oversight [S71].
AI can generate substantial new employment opportunities, especially in health, education and entrepreneurship
Speakers: Roopa Purushothaman, Sanjiv Bikhchandani, Iqbal Dhaliwal
AI can generate millions of new jobs in health, education, and entrepreneurship
AI tools boost productivity, enable rapid content creation, and open new business models
AI’s rapid, low‑cost diffusion creates labor‑market challenges; policy must keep pace
While acknowledging challenges, the speakers concur that AI will create large numbers of new jobs and business models, particularly by freeing specialists and enabling local-language entrepreneurship [262-280][393-424][287-306].
POLICY CONTEXT (KNOWLEDGE BASE)
Studies and panel discussions underline AI-driven job creation in health, education and entrepreneurship, citing evidence of new roles from AI tutoring and reskilling programmes [S54][S65] and broader economic growth potential [S53][S63].
AI development should reflect cultural and linguistic diversity, using India’s heritage as a model
Speakers: Om Birla, Dr. Chinmay Pandeya
Integrating AI with Indian spiritual/cultural values and building a unified digital parliamentary platform
India’s linguistic and cultural diversity should guide AI democratization
Both speakers argue that AI policies and platforms must be rooted in India’s rich cultural, spiritual and linguistic context to ensure inclusive and representative AI deployment [68-108][109-110][142-146].
POLICY CONTEXT (KNOWLEDGE BASE)
Initiatives such as the “AI for Viksit Bharat 2047” programme and UNESCO-aligned work stress multilingual AI and culturally appropriate content, highlighting India’s model for linguistic inclusion [S73][S74][S75].
International cooperation and inclusive multilateral governance are required for responsible AI
Speakers: Martin Chungong, Dr. Fadi Dao, Om Birla
Concentration of AI power threatens democracy
AI should be leveraged for social empowerment and participation, not domination
AI is a truly global challenge whose effects transcend national borders
The speakers stress that AI challenges cross borders and must be addressed through inclusive, multilateral governance structures and cooperation among nations and parliaments [30-38][39-41][147-149].
POLICY CONTEXT (KNOWLEDGE BASE)
Global AI governance frameworks stress inclusive participation of all stakeholders, as outlined in the UN’s Global AI Policy Framework and OECD/UNESCO collaborative efforts [S61][S59][S50].
Similar Viewpoints
Both warn that unchecked AI concentration endangers democratic values and advocate for AI that empowers citizens rather than enables domination [12-16][21-22][147-149].
Speakers: Martin Chungong, Dr. Fadi Dao
Concentration of AI power threatens democracy
AI should be leveraged for social empowerment and participation, not domination
Both emphasize that the unprecedented speed of AI deployment requires adaptable policies and institutions to mitigate labour market disruption [154-159][287-306][307-320].
Speakers: Lord Krish Ravel, Iqbal Dhaliwal
Adaptability to rapid AI change is essential
AI’s rapid, low‑cost diffusion creates labor‑market challenges; policy must keep pace
Both see knowledge‑sharing—whether personal upskilling or intra‑organizational best‑practice diffusion—as essential for harnessing AI benefits and avoiding job loss [321-340][347-354].
Speakers: Sanjiv Bikhchandani, Roopa Purushothaman
Individuals must acquire AI skills to remain employable
Large organisations need to share AI best practices across units to accelerate adoption
Both argue that AI policy and platforms must be grounded in India’s cultural, spiritual and linguistic diversity to achieve inclusive democratization [68-108][109-110][142-146].
Speakers: Om Birla, Dr. Chinmay Pandeya
Integrating AI with Indian spiritual/cultural values and building a unified digital parliamentary platform
India’s linguistic and cultural diversity should guide AI democratization
Unexpected Consensus
Recognition that AI can both pose democratic risks and simultaneously create massive employment opportunities
Speakers: Martin Chungong, Sanjiv Bikhchandani
Concentration of AI power threatens democracy
AI tools boost productivity, enable rapid content creation, and open new business models
Martin focuses on the democratic dangers of AI concentration, while Sanjiv highlights AI’s capacity to generate new jobs and business models without immediate job loss. Their concurrence that AI is a double-edged sword, risking democracy yet offering economic gains, is not an obvious alignment given their differing emphases [7-13][393-424].
POLICY CONTEXT (KNOWLEDGE BASE)
Reports balance democratic risks, such as erosion of accountability [S55], with the potential for job creation and economic empowerment highlighted in AI-driven growth analyses [S54][S53].
Overall Assessment

Across the panel there is strong convergence on four core themes: (1) the necessity of democratic, inclusive and ethical AI governance; (2) the urgent need for capacity development and skill acquisition to keep pace with AI’s rapid diffusion; (3) the importance of adaptability in policy and institutions; and (4) the potential of AI to generate new jobs and drive socio‑economic development when guided by cultural and linguistic diversity. These shared positions cut across speakers from inter‑parliamentary leadership, academia, industry and civil society.

High consensus – most speakers echo each other’s concerns and recommendations, indicating a broad, cross‑sectoral agreement that AI must be governed democratically, supported by robust skill‑building, and leveraged for inclusive development. This consensus strengthens the prospect of coordinated policy action and multilateral initiatives at the summit.

Differences
Different Viewpoints
Concentration of AI power and need for democratic regulation versus an optimistic, culturally‑driven deployment of AI
Speakers: Martin Chungong, Om Birla
Concentration of AI power threatens democracy
Integrating AI with Indian spiritual/cultural values and building a unified digital parliamentary platform
Martin warns that a handful of technology corporations control AI benefits while costs fall on the vulnerable, calling for red lines, parliamentary oversight and democratic safeguards [4-13][14-18]. Om Birla, by contrast, emphasizes using AI to showcase India’s spiritual and cultural heritage and to create a single digital platform for all parliamentary activities by 2026, focusing on benefits and efficiency without addressing concentration concerns [68-70][100-108][109-110].
POLICY CONTEXT (KNOWLEDGE BASE)
Concerns about AI power concentration are documented in AI for Democracy analyses [S55] while calls for culturally-sensitive AI deployment appear in multilingual AI discussions [S74][S73].
AI’s impact on employment – job creation versus labor‑market disruption and the need for policy control
Speakers: Roopa Purushothaman, Iqbal Dhaliwal, Sanjiv Bikhchandani
AI can generate millions of new jobs in health, education, and entrepreneurship
AI’s rapid, low‑cost diffusion creates labor‑market challenges; policy must keep pace
Individuals must acquire AI skills to remain employable
Roopa argues that AI will free specialists, creating a new class of mediators and estimating about 30 million new jobs across health, education, logistics and finance, and that voice-activated local-language models will spur entrepreneurship [262-280][281-285]. Iqbal counters that AI’s unprecedented speed, low marginal cost and multimodality will outpace labour-market adjustment, urging a “dial-down” of AI rollout and stronger policy support to protect workers [287-306][307-320]. Sanjiv adds that individuals must learn AI tools to stay employable, likening current AI literacy to early PC literacy [321-340].
POLICY CONTEXT (KNOWLEDGE BASE)
Labor market disruption and policy responses such as universal basic income debates are covered in AI labour impact studies [S62] and future-of-work panels emphasizing both job loss and new opportunities [S63][S64].
Purpose of parliamentary involvement in AI – governance and accountability versus internal efficiency and cultural promotion
Speakers: Martin Chungong, Om Birla
Concentration of AI power threatens democracy
Integrating AI with Indian spiritual/cultural values and building a unified digital parliamentary platform
Martin stresses that parliaments must shape AI rules, ensure open debate, transparency and accountability, and act as a bridge between AI impact and political responsibility [21-27][39-44]. Om Birla focuses on using AI to digitise parliamentary work, create a unified, paper-less platform and project Indian spiritual values globally, without foregrounding democratic oversight [68-70][100-108][109-110].
POLICY CONTEXT (KNOWLEDGE BASE)
Parliamentary roles in AI governance are debated in forums stressing accountability and democratic oversight [S71][S58] contrasted with internal efficiency goals highlighted in cultural promotion initiatives [S57].
Unexpected Differences
Adaptability to rapid AI change versus emphasis on democratic regulation
Speakers: Lord Krish Ravel, Martin Chungong
Adaptability to rapid AI change is essential
Concentration of AI power threatens democracy
Lord Ravel, speaking from a cultural-spiritual perspective, highlights the need for politicians to be adaptable to AI’s fast pace to contain public uncertainty [154-159]. Martin, from a parliamentary governance stance, focuses on regulating AI concentration and establishing democratic safeguards [14-18]. The clash is unexpected because both address AI governance but prioritize different levers – adaptability versus formal regulation.
POLICY CONTEXT (KNOWLEDGE BASE)
The tension between fast-adaptable regulatory models and democratic oversight is discussed in fit-for-purpose governance literature [S51] and calls for evolving governance systems [S52][S58].
Multilingual AI inclusivity versus a single unified digital parliamentary platform
Speakers: Dr. Chinmay Pandeya, Om Birla
India’s linguistic and cultural diversity should guide AI democratization
Integrating AI with Indian spiritual/cultural values and building a unified digital parliamentary platform
Dr. Pandeya stresses that AI must accommodate India’s 27 official languages, 19,500 dialects and 400 cultures to achieve true democratization [142-146]. Om Birla promotes a unified, paperless parliamentary platform that aggregates all proceedings on one system by 2026, implying a standardized interface that may not fully address multilingual needs [100-108][109-110]. The tension between preserving linguistic diversity and pursuing a single platform was not anticipated.
POLICY CONTEXT (KNOWLEDGE BASE)
Multilingual AI inclusion is championed in India’s AI literacy programme and UNESCO discussions on linguistic diversity [S73][S74], while proposals for a unified digital parliamentary interface raise concerns about limiting language diversity [S71].
Overall Assessment

The discussion revealed several substantive disagreements: (1) the extent to which AI concentration requires strict democratic regulation versus a culturally‑driven, development‑focused rollout; (2) divergent views on AI’s impact on employment, with some seeing massive job creation and others warning of labour‑market disruption and the need for policy safeguards; (3) differing conceptions of parliamentary roles, either as overseers of AI governance or as internal users of AI to showcase cultural values. While participants broadly agreed on the importance of inclusive, responsible AI, they diverged on the mechanisms—regulatory red lines, linguistic inclusivity, skill‑building platforms, or cultural integration—to achieve it. These disagreements are moderate to high in intensity and suggest that achieving consensus on AI governance will require bridging regulatory, cultural, and capacity‑building perspectives.

Moderate‑to‑high disagreement, indicating that without coordinated policy frameworks the summit may struggle to align on concrete governance actions, potentially limiting the effectiveness of AI democratization efforts.

Partial Agreements
Both seek an inclusive, democratic AI future. Martin calls for parliamentary red lines and oversight to prevent power concentration [17-20][21-27], while Dr. Pandeya stresses that AI policies must reflect India’s 27 official languages, 19,500 dialects and the principle of Vasudhaiva Kutumbakam to ensure inclusivity [142-146]. They share the goal of equitable AI governance but differ on the primary mechanism – regulatory oversight versus linguistic‑cultural framing.
Speakers: Martin Chungong, Dr. Chinmay Pandeya
Concentration of AI power threatens democracy
India’s linguistic and cultural diversity should guide AI democratization
Both aim to build AI capacity. Ronnie announces a new jobs and certification platform to help people acquire AI competencies [390-392], whereas Sanjiv urges individuals to self‑learn AI tools, arguing that AI‑literate workers will remain employable [321-340]. The shared objective is skill development, but the approaches differ – institutional certification versus personal self‑learning.
Speakers: Ronnie Chatterji, Sanjiv Bikhchandani
OpenAI is developing certification platforms to teach AI skills
Individuals must acquire AI skills to remain employable
Takeaways
Key takeaways
Concentration of AI power in a few corporations threatens democratic governance; parliamentary oversight and inclusive, transparent decision‑making are essential.
Democratic institutions must debate trade‑offs between innovation, safety, efficiency, equity, profit and the public interest.
India aims to integrate AI with its spiritual and cultural values, create a unified digital parliamentary platform by 2026, and use AI to serve its linguistic diversity.
AI can be a tool for social empowerment, improving early‑childhood education and health services and fostering entrepreneurship, potentially creating tens of millions of jobs.
The rapid, low‑cost diffusion of AI creates labor‑market challenges; policy frameworks must evolve quickly to protect workers.
Individuals need to acquire AI skills to remain employable; personal up‑skilling is presented as a key defence against displacement.
Large organisations should share AI best practice across units and sectors to accelerate responsible adoption.
OpenAI is expanding partnerships in India and launching certification platforms to teach AI skills, reinforcing the education‑focused agenda.
Resolutions and action items
Parliaments worldwide pledged to engage actively in AI governance, coordinate across borders, and develop national legislation or oversight mechanisms (over 60 parliaments have already taken action).
Commitment to establish “red lines” that AI must not cross and to ensure an equal voice for the Global South in AI governance discussions.
India’s parliamentary bodies will launch a unified, paperless digital platform for all Lok Sabha and Rajya Sabha proceedings by 2026.
Globethics pledged to leverage summit outcomes for the 2027 Geneva AI summit.
OpenAI announced the development of a jobs‑and‑certification platform to teach AI competencies.
Unresolved issues
Specific, binding international AI governance standards and mechanisms for enforcement remain undefined.
How to operationalise equitable benefit‑sharing for low‑income and Global South populations was not concretised.
Concrete policy tools to slow AI rollout or protect labor markets, as suggested by Iqbal Dhaliwal, were not agreed upon.
Effective methods for scaling AI education to low‑skill entrepreneurs and micro‑businesses (e.g., in Kenya or Indian villages) remain open.
Details on how parliamentary coordination and cross‑party AI committees will be harmonised globally were not settled.
Suggested compromises
Adopt an inclusive, participatory governance model that balances innovation with equity, e.g., agreeing on “red lines” while allowing continued AI development.
Recognise the need both to accelerate AI adoption for societal benefits and to temper its speed to protect labor markets (Iqbal’s call for a “dial‑down” versus Sanjiv’s acceptance of inevitability).
Combine corporate AI deployment with public‑sector oversight, allowing corporations to innovate while parliaments ensure accountability and human‑rights compliance.
Thought Provoking Comments
When the systems that govern aspects of people’s daily lives, their access to information services and economic opportunity are controlled by a small number of actors without meaningful public oversight, then the social contract itself is under strain.
Frames AI not merely as a technical issue but as a fundamental democratic concern, linking concentration of AI power to erosion of the social contract.
Sets the overarching narrative for the summit, prompting all subsequent speakers to address governance, accountability, and the need for parliamentary involvement. It shifts the discussion from pure technology to political and ethical dimensions.
Speaker: Martin Chungong
AI can democratize early childhood education: Rocket Learning, partnered with OpenAI, now reaches millions of Anganwadi workers, generating data that tells us how children learn and what customized messages to send to them and their parents.
Provides a concrete, data‑driven example of AI delivering social impact in a low‑resource setting, illustrating how AI can solve real development challenges.
Moves the conversation from abstract risks to tangible benefits, inspiring other panelists (e.g., Roopa and Sanjiv) to discuss scaling such solutions and linking AI to education outcomes.
Speaker: Iqbal Dhaliwal
We estimated about 30 million new jobs could be created in India through AI‑enabled workers that mediate technology—especially in health, education, finance and logistics—plus a surge in entrepreneurship enabled by local‑language models.
Quantifies AI’s potential as a job‑creation engine for a developing economy, challenging the dominant narrative of AI as a job‑killer.
Broadens the debate to macro‑economic opportunities, prompting Sanjiv and Iqbal to discuss skill gaps, policy, and the role of entrepreneurship in harnessing AI’s upside.
Speaker: Roopa Purushothaman
AI is now relentless, the genie’s out of the bottle. You can either do it or have it done to you. Protect your employment by learning AI yourself—become the power user.
Emphasizes personal agency and the urgency of upskilling, framing AI adoption as an individual survival strategy rather than a purely systemic issue.
Shifts tone toward individual responsibility, leading Ronnie to ask about internal diffusion of AI skills and prompting discussion on capability overhang within organizations.
Speaker: Sanjiv Bikhchandani
The speed and pace of AI is unprecedented… we need to dial down the speed and make sure the policy infrastructure keeps up, otherwise labor markets will struggle.
Highlights a critical tension between rapid technological diffusion and the slower pace of policy making, calling for deliberate regulation.
Introduces a cautionary counter‑point that tempers the optimism of earlier comments, steering the panel toward considerations of governance, regulation, and social safety nets.
Speaker: Iqbal Dhaliwal
None of us have the answers… we’re trying to solve these questions. We’ll keep working together, and OpenAI can be a partner.
Acknowledges collective uncertainty while fostering a collaborative mindset, reinforcing the summit’s purpose as a platform for ongoing dialogue rather than a final verdict.
Serves as a turning point that transitions the panel from debate to a concluding call for partnership, setting the stage for the final remarks and future engagement.
Speaker: Ronnie Chatterji
Overall Assessment

The discussion was shaped by a handful of pivotal remarks that moved the conversation from a high‑level warning about democratic erosion (Martin Chungong) to concrete illustrations of AI’s societal benefits (Iqbal’s early‑childhood example) and its economic potential (Roopa’s job‑creation estimate). Sanjiv’s call for personal upskilling and Iqbal’s caution about regulatory lag introduced both agency and urgency, prompting the panel to grapple with the dual realities of opportunity and risk. Ronnie’s closing acknowledgment of uncertainty tied these threads together, framing the summit as the start of an iterative, collaborative effort rather than a definitive solution. Together, these comments redirected the dialogue, deepened its analytical layers, and set a collaborative tone for future AI governance initiatives.

Follow-up Questions
How do you see the way forward from here, given India’s linguistic and cultural diversity and the principle of Vasudhaiva Kutumbakam?
Seeks strategic guidance on inclusive AI governance for India and the global community.
Speaker: Dr. Chinmay Pandeya (to Dr. Fadi Dao)
Could you shed light on the way that India should take now for democracy in the context of AI?
Requests recommendations on democratic approaches to AI governance and adaptation to rapid technological change.
Speaker: Dr. Chinmay Pandeya (to Lord Krish Ravel)
How are you using data, such as signals and other datasets, to understand how AI is affecting the economy, and what are the most interesting findings?
Aims to uncover data‑driven methods for assessing AI’s economic impact and to identify emerging trends.
Speaker: Ronnie Chatterji (to Iqbal Dhaliwal)
What does Tata want with a chief economist, and what are the most interesting AI‑related developments you are seeing in the enterprise?
Seeks clarification of the chief economist role and insight into AI applications across Tata’s diverse businesses.
Speaker: Ronnie Chatterji (to Roopa Purushothaman)
How are you thinking about the most important uses of AI at InfoEdge, and what key metrics are you tracking?
Looks for concrete examples of AI use cases in a job‑portal context and the indicators used to monitor impact.
Speaker: Ronnie Chatterji (to Sanjiv Bikhchandani)
How do you help power users diffuse best practices and learning to other employees, addressing the capability overhang within large organizations?
Targets the challenge of scaling AI expertise internally and fostering cross‑functional knowledge sharing.
Speaker: Ronnie Chatterji (to Roopa Purushothaman)
What parallels exist in development literature for teaching new skills (e.g., entrepreneurship, education), and can global institutions or J‑PAL develop programs to teach AI effectively?
Explores the possibility of structured AI education programs modeled on proven development interventions.
Speaker: Ronnie Chatterji (to Iqbal Dhaliwal)
Research the need for globally agreed ‘red lines’ and an equal voice for the Global South in AI governance frameworks.
Highlights a gap in current governance that could lead to power concentration and inequitable outcomes.
Speaker: Martin Chungong
Investigate mechanisms for coordinated parliamentary action, capacity‑building, and cross‑border cooperation on AI policy.
Calls for systematic study of how legislatures can jointly develop and enforce AI regulations.
Speaker: Martin Chungong
Study the unintended consequences of AI‑driven traffic management systems routing congestion through low‑income neighborhoods.
Seeks empirical evidence on algorithmic bias and its social impact to inform mitigation strategies.
Speaker: Martin Chungong
Assess the environmental costs of AI development and deployment at scale.
Points to a missing analysis of AI’s carbon footprint and resource consumption.
Speaker: Martin Chungong
Examine how AI influences democratic institutions, public accountability, and the social contract.
Calls for research on AI’s broader political implications and risks to democratic legitimacy.
Speaker: Martin Chungong
Evaluate AI‑enabled early childhood education initiatives (e.g., Rocket Learning) and the quality of data they generate for policy‑making.
Identifies a need to measure effectiveness and scalability of AI tools in education.
Speaker: Iqbal Dhaliwal
Research AI’s potential to create new, meaningful jobs in health, education, financial services, and logistics in India, especially for non‑specialist workers.
Seeks quantitative estimates and case studies on AI‑driven job creation.
Speaker: Roopa Purushothaman
Explore AI‑driven entrepreneurship opportunities for SMEs and micro‑enterprises, including the impact of local‑language models.
Calls for investigation into how AI can lower entry barriers for small businesses.
Speaker: Roopa Purushothaman
Analyze the lag between AI technological speed and policy infrastructure, and propose mechanisms to align them.
Highlights a critical governance gap that could exacerbate labor market disruptions.
Speaker: Iqbal Dhaliwal
Study the diffusion of AI skills within large conglomerates, focusing on overcoming capability overhang and legacy system integration.
Aims to identify best practices for internal AI adoption and knowledge transfer.
Speaker: Roopa Purushothaman
Monitor AI’s impact on the IT job market, particularly for platforms like Naukri, to detect early signs of displacement or transformation.
Calls for ongoing empirical tracking of AI’s effect on employment in the tech sector.
Speaker: Sanjeev Bikhchandani
Develop and evaluate AI certification and training platforms to upskill workers and students.
Proposes creation of standardized credentials to address skill gaps.
Speaker: Ronnie Chatterji
Investigate the effectiveness of AI tools in supporting micro‑entrepreneurs in low‑resource settings (e.g., Kenya study).
Seeks evidence on how AI can level up low‑performing entrepreneurs versus high‑performers.
Speaker: Iqbal Dhaliwal
Research the application of AI for parliamentary processes (e.g., metadata search across debates) and its impact on legislative efficiency and transparency.
Explores how AI can enhance democratic institutions by improving access to legislative information.
Speaker: Om Birla
Examine AI‑enabled answer‑based technologies for improving coordination among international organizations and their member states.
Looks at potential for AI to streamline inter‑governmental communication and decision‑making.
Speaker: Om Birla

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

How Trust and Safety Drive Innovation and Sustainable Growth


Session at a glance: Summary, keypoints, and speakers overview

Summary

The panel, moderated by IAPP’s Trevor Hughes, brought together a civil-society leader (Alexandra Reeve Givens) [2], a tech-industry representative (Amanda Craig, Microsoft) [4], and two regulators (UK Information Commissioner John Edwards [7] and Singapore PDPC Deputy Commissioner Denise Wong [8]) to examine whether trust can act as an engine for AI growth [16-18]. Hughes noted a paradox: while a “deregulatory mood” is evident [28-30], every banner on the summit floor highlighted trust, safety or privacy [32-34], prompting the question of whether the market is truly stepping back from guardrails [36-38].


Alexandra Reeve Givens argued that adoption, and thus innovation, depends on multiple dimensions of trust, from cultural fit to data security [56-63], and that thoughtful regulation can supply the “fuel” for that trust [67-70]. John Edwards explained that, in the UK, existing data-protection law (the UK GDPR) already provides a de-facto regulatory regime that sets common standards such as privacy-by-design and impact assessments, thereby helping businesses demonstrate trust [84-92][94-99][102-108]. Denise Wong described Singapore’s approach of regulating clear harms (e.g., AI-generated deepfakes in elections) while leaving broader issues to sectoral rules and adaptable codes of practice [136-141][144-148][252-258].


A consensus emerged that trust and safety are essential, though additional regulation is only clearly needed for high-risk scenarios [164-169]. When asked for promising innovations, the panel highlighted provenance tools to increase transparency [316-322], the concept of agency to restore user control [328-334], privacy-enhancing technologies such as federated learning [346-351], and well-funded, independent watchdogs to represent the public interest [355-356]. The discussion concluded that ongoing coordination among regulators, industry and civil society is crucial to embed trust and safety into AI’s expanding role in society [412-419][423-425].


Keypoints


Major discussion points


Trust and safety are seen as the engine of AI adoption and innovation, even amid a “deregulatory” climate.


Trevor notes the paradox of a deregulatory mood while trust-and-safety messaging dominates the summit ([28-39]). Alex expands that trust is the economic driver that makes AI tools usable and that thoughtful regulation can actually fuel innovation ([56-68]). John reinforces that regulatory standards (e.g., data-protection-by-design) provide a common metric for trust ([88-95]).


Existing data-protection regimes are being repurposed for AI, but there is debate over the need for new, AI-specific rules.


The UK relies on the UK-GDPR as a de-facto AI framework and issues guidance that maps AI practices to GDPR principles ([84-107]). Singapore’s approach blends sector-specific regulation for clear harms with broader “codes of practice” that sit alongside the PDPA, showing a preference for flexible, non-prescriptive tools ([136-158][252-258]). Alex points out that even where laws exist, AI’s opacity makes enforcement difficult, highlighting the need for a transparency layer ([150-162]).


Identifying and managing AI harms requires a mix of high-risk taxonomies, sector-agnostic principles, and supply-chain-wide governance.


Denise describes emerging global harm taxonomies (e.g., the International AI Safety Report) and the difficulty of prospectively defining harms, advocating for agile mechanisms like codes of practice ([236-247][250-257]). Amanda outlines Microsoft’s “sensitive-use” categories (impact on life opportunities, psychological/physical harm, and human-rights impacts) and stresses the challenge of governing risk across the entire AI supply chain ([195-204][209-219]).


Promising innovations to strengthen trust and safety were highlighted, ranging from technical tools to institutional capacity.


Amanda cites provenance tools that track dynamic AI components as a way to increase transparency and accountability ([316-322]). John emphasizes “agency”: giving users control beyond consent, such as delete or opt-out mechanisms ([328-334]). Denise points to privacy-enhancing technologies like federated learning that can protect data when law falls short ([346-351]). Alex stresses the importance of well-staffed, independent regulatory bodies and civil-society watchdogs ([355-356]).


Cross-jurisdictional coordination among regulators is essential to avoid fragmented oversight.


John describes active collaboration with Ofcom, the Global Privacy Assembly, and other regulators to share expectations and avoid duplicated effort ([274-301]). Trevor underscores that regulator interaction becomes critical in the absence of a unified AI standard ([272-273]).


Overall purpose / goal of the discussion


The panel, convened by the IAPP, aimed to explore why trust and safety are crucial for AI-driven growth, assess the current patchwork of regulatory and industry governance, identify gaps where new safeguards may be needed, and surface practical ideas, both technical and institutional, that can help align innovation with responsible, trustworthy AI deployment.


Overall tone and its evolution


The conversation begins analytically and slightly skeptical, highlighting a “deregulatory” mood versus pervasive trust messaging ([28-39]). It quickly shifts to a collaborative, solution-focused tone as panelists share examples of existing frameworks, emerging harm taxonomies, and innovative governance tools. By the later “speed-round” and closing remarks, the tone becomes upbeat and even humorous (e.g., playful audience polls, “cheat” comments), while maintaining a constructive optimism about building trustworthy AI ecosystems.


Speakers

Alexandra Reeve Givens – CEO of the Center for Democracy and Technology; expertise in civil rights, civil liberties, AI governance, trust and safety [S1][S2]


Amanda Craig – General Manager for Responsible AI Policy at Microsoft; expertise in responsible AI governance and policy 


Trevor Hughes – Moderator representing the International Association of Privacy Professionals (IAPP); expertise in privacy, data protection, and AI trust & safety 


Denise Wong – Deputy Commissioner of the Personal Data Protection Commission (PDPC) in Singapore and Assistant Chief Executive of IMDA; expertise in data protection, privacy-enhancing technologies, and AI regulatory frameworks [S9]


John Edwards – Information Commissioner of the United Kingdom (ICO); expertise in data protection law, privacy regulation, and AI oversight [S11]


Additional speakers:


– None identified beyond the listed speakers.


Full session report: Comprehensive analysis and detailed insights

The session was opened by IAPP’s Trevor Hughes, who introduced the four panelists – civil-society leader Alexandra Reeve Givens, CEO of the Center for Democracy and Technology; Microsoft’s General Manager for Responsible AI Policy, Amanda Craig; the UK Information Commissioner, John Edwards; and the Deputy Commissioner of Singapore’s PDPC, Denise Wong – and noted that the discussion would explore whether “trust can act as an engine for AI growth” [1-12][16-18]. Hughes highlighted a paradox: while a “deregulatory mood” seems to be prevailing in policy circles, virtually every banner on the summit floor referenced trust, safety or privacy, raising the question of whether the market is truly stepping back from guard-rails [28-38].


Alexandra Reeve Givens argued that the long-term success of AI depends on adoption, which in turn requires trust across multiple dimensions – fit-for-purpose, linguistic and cultural suitability, privacy protection, data security and the quality of the underlying information [56-63]. She framed trust as the economic driver that will fuel innovation and contended that thoughtful, principle-based regulation can supply the “fuel” for that trust by outsourcing the burden of assurance from individual users to a common standard [67-70].


John Edwards explained that the United Kingdom does not have a dedicated AI statute because the UK-GDPR already provides a de-facto regulatory regime for AI. He described how data-protection-by-design, data-protection impact assessments and other GDPR-derived obligations give businesses a measurable way to demonstrate trust, and how the ICO issues guidance that maps AI practices onto existing GDPR principles, thereby filling any perceived regulatory lacuna [84-108].


Denise Wong outlined Singapore’s layered approach. For harms that are clear and present – such as AI-generated deep-fakes used in elections or AI-facilitated scams – the PDPC has enacted specific regulations [136-141]. For the remainder of AI applications, the regulator relies on existing sectoral laws and on “codes of practice” that can be updated quickly, positioning these tools as an “outcome-driven” complement to the more prescriptive PDPA [144-148][252-258].


Alexandra Reeve Givens illustrated a concrete transparency problem: U.S. equal-employment-opportunity statutes prohibit discrimination, yet AI-driven hiring tools can hide bias, making it extremely difficult for a candidate to prove a violation without a disclosure regime that forces providers to reveal model details and conduct impact assessments [150-162]. She therefore stressed that a horizontal transparency layer is essential to give existing anti-discrimination laws practical effect [150-162].


Amanda Craig described Microsoft’s internal governance model, which categorises “sensitive uses” into three high-impact buckets – impacts on life opportunities (e.g., employment, education), psychological or physical harm (especially for vulnerable groups), and human-rights implications [199-204]. She argued that AI risk must be managed across the entire supply chain, drawing on lessons from cybersecurity where risk is addressed holistically rather than focusing on a single component [209-219].


Across the panel there was strong consensus that trust and safety are prerequisites for AI adoption and that transparency, provenance and robust governance are the means to achieve them. All speakers emphasized that trust is a central outcome of both policy and corporate action [56-63][84-92][112-119][128-132]; that provenance tools can make dynamic AI components traceable [316-322]; that the concept of “agency” can restore user control beyond mere consent [328-334]; that privacy-enhancing technologies such as federated learning can protect data where law cannot [346-351]; and that well-staffed, independent watchdogs are essential to represent the public interest [355-356].


Nevertheless, the panel diverged on how far regulation should go. Edwards maintained that the GDPR already supplies sufficient safeguards and that new AI-specific legislation would be redundant [84-107]. Craig counter-argued that internal responsible-AI programmes must be complemented by external regulation to sustain trust [112-119]. Wong advocated a risk-based, “clear-harm-first” approach, limiting formal statutes to obvious threats while using agile codes of practice for the rest [136-147][251-258]. Givens pointed to emerging AI-specific statutes in the EU and several U.S. states as examples of targeted regulation that can actually promote innovation [262-270]. Hughes highlighted the tension between the deregulatory narrative and the ubiquity of trust-focused messaging [28-38].


When asked to name a promising innovation, Craig highlighted “provenance tools”, a kind of software bill of materials that tracks the provenance of dynamic AI components, thereby increasing transparency and accountability [316-322]. Edwards responded with the word “agency”, suggesting that restoring users’ ability to withdraw consent, delete data and control outcomes is a more powerful safeguard than traditional consent mechanisms [328-334]. Wong selected privacy-enhancing technologies, noting that federated learning is already moving from research into production and can secure personal data where legislation falls short [346-351]. Givens concluded with a call for “well-staffed, empowered, independent regulatory bodies and technically informed civil-society organisations” to act as public-interest watchdogs [355-356].


Cross-jurisdictional coordination was repeatedly stressed as essential. Edwards described active collaboration between the ICO, Ofcom and the Global Privacy Assembly to share expectations on AI safety, especially in the ongoing GROK investigation, and warned that fragmented oversight can leave gaps [274-301]. Hughes echoed this, noting that in the absence of a unified AI standard, regulator interaction becomes critical [272-279]. Wong added that an emerging international harm taxonomy – such as the International AI Safety Report – is beginning to provide a common language for regulators worldwide [244-247].


The audience poll on the relationship between innovation and regulation produced a majority thumbs-up, indicating a generally positive view of that relationship [181-188]. The discussion then turned to the difficulty of prescriptive regulation, illustrated by the long-standing debate over cookie consent mechanisms, which still burden users despite decades of experience [181-188][190-197]. Craig linked this to AI by arguing that identifying harms requires a blend of high-risk taxonomies, sector-agnostic principles and supply-chain-wide governance [195-204][209-219].


In closing, Hughes asked the panel to imagine the name of the AI summit five years hence, prompting playful answers such as “AI Trust Summit”, “Nostalgia”, “Thriving” and “For the people, by the people” [392-399][403-408]. He then likened the hard work of embedding trust and safety into AI to the historic challenge of bringing electricity into the White House, noted that this work is carried out daily by organisations, regulators and civil-society actors, and urged the audience to thank those who are doing it [412-419][421-425].


Key take-aways


– Trust and safety are essential drivers of AI adoption; without trust users will not “flip the switch” [45-46][56-63][84-92][112-119][128-132].


– Existing data-protection regimes (UK-GDPR, Singapore PDPA) already provide a baseline regulatory layer for AI, offering tools such as privacy-by-design and impact assessments [84-108][144-148].


– A paradox exists between a deregulatory climate and the pervasive emphasis on trust in industry and policy messaging [28-38].


– Thoughtful, principle-based regulation can act as a catalyst for innovation rather than a brake [67-70].


– High-risk or clearly harmful AI applications (e.g., election deep-fakes, discriminatory hiring tools) merit targeted regulation; broader AI use can be governed through sectoral rules and internal standards [136-141][199-204].


– Identifying AI harms requires a mix of existing law, emerging harm taxonomies, corporate risk taxonomies and supply-chain-wide governance [195-204][244-247].


– Promising innovations include provenance tools, the agency concept, privacy-enhancing technologies and well-resourced independent watchdogs [316-322][328-334][346-351][355-356].


– International coordination (ICO-Ofcom-GPA collaboration, global harm taxonomies) is critical to avoid fragmented oversight [274-301][272-279][244-247].


Unresolved issues and suggested compromises


– The extent to which new AI-specific legislation is needed beyond existing data-protection frameworks remains contested.


– Prospective definition and prioritisation of emerging AI harms must balance cultural specificity with global consistency.


– Achieving a true “Brussels effect” for AI governance is still a work in progress.


– A hybrid approach is proposed: regulate clear, high-impact harms; use sectoral rules and agile codes of practice for the rest; and supplement with internal responsible-AI programmes and technical tools such as provenance and PETs [136-147][251-258][262-270][112-119].


Follow-up questions raised


1. How can effective transparency and disclosure regimes be built for high-risk AI contexts such as hiring to enable enforcement of anti-discrimination laws? [150-162]


2. What mechanisms allow regulators to prospectively identify and classify AI-related harms in culturally specific ways? [236-247]


3. How should international regulator coordination be structured to address cross-jurisdictional AI incidents like the GROK case? [274-301]


4. What is the effectiveness of regulatory sandboxes and codes of practice as less-prescriptive tools, and how can they be evaluated? [251-258]


5. How can provenance tools be standardised to provide traceability for agentic AI systems? [316-322]


6. How can the concept of “agency” be operationalised to shift responsibility back to providers rather than burdening users? [328-334]


7. What is the current state of adoption and impact of privacy-enhancing technologies such as federated learning? [346-351]


8. Why has the EU AI Act not generated a Brussels-effect comparable to the GDPR, and what factors influence global diffusion of AI regulatory models? [232-235][262-270]


9. How can independent regulatory bodies be protected and resourced to effectively represent the public interest in AI governance? [355-356]


10. Are current consent mechanisms adequate for AI-driven data processing, or are new user-centric remedies required? [181-188][210-218]


These points capture the breadth of the discussion, the areas of consensus and contention, and the concrete ideas proposed for advancing trustworthy AI governance.


Session transcript: Complete transcript of the session
Trevor Hughes

and then we’re going to dive right into my immediate left. I have Alexandra Reeve Givens, who is the CEO of the Center for Democracy and Technology, one of the leading advocacy organizations in the world, working on civil rights, civil liberties all around the world. She’s based in D.C. To her immediate left is Amanda Craig. Amanda is the General Manager for Responsible AI Policy at Microsoft. To Amanda’s left, we have John Edwards. John Edwards is known to many. He is the Information Commissioner of the United Kingdom. And to John’s left, we have Denise Wong, who is the Deputy Commissioner of the PDPC in Singapore, the Personal Data Protection Commission. Welcome to our panelists. So we have two regulators, an industry representative and a civil society representative.

And I come from the IAPP. If you don’t know the IAPP, we are a global professional association, a not-for-profit, but also policy-neutral. We’re not a company, an advocacy or a lobbying body; we bring together the people who do the work. Many of them are in the room right now, people who do the very hard work of data protection and AI governance all around the world. All right, let’s jump in. The title of the session reflects trust as an engine for growth. Let’s think about that just for a minute. Just a few short years ago, I think it was two and a half, maybe three years ago, this event started in Bletchley Park in England.

And in that iteration of the event, it was named the AI Safety Summit. Right around that time, the EU AI Act was being negotiated. It soon passed after that. But a lot has changed in that two or three years. This event is the AI Impact Summit. The event last year in Paris was the AI Action Summit. More recently, we have seen the not yet fully implemented EU AI Act become subject to an omnibus package, where some of the expectations of that original act are being dialed back a little bit. And we’ve seen broad critique of regulatory structures, trust and safety structures that might inhibit growth and innovation in AI. There clearly is a deregulatory mood in the air.

In fact, I think it’s notable that there has not been much discussion of law or regulatory initiatives that might create guardrails to help guide the adoption of AI. So clearly, we’re in an odd moment, and an odd moment for this panel. But as I walked around the campus of this event, this enormous campus, I noted something that was, I think, quite significant. Just about every second banner or poster, just about every large printout, every printed word on the show floor, somewhere had trust, safety or privacy as part of the messaging. In fact, the sutras, the principles announced by the Indian government, and we’ll talk about them as we go through the session, are largely around trust and safety.

And so what gives? What’s the dichotomy here? At one moment we are saying it’s a deregulatory mode, we step back. Well, at the same time, we are actively embracing and discussing trust and safety, risk management, protecting consumers, citizens, human beings as they engage with AI. So do we care or not? Are we actually in a deregulatory moment, or have we just gotten quiet about the need for guardrails and trust and safety in these systems? I would say for business, risk exists regardless of whether there’s a law in place or not, and so businesses have an imperative to respond. I’m going to tell a very, very quick story, and that is that in 1891, when electricity was first being brought into the White House in the United States, then President Benjamin Harrison and his wife, Caroline, were actually terrified of flipping the light switch.

And so they hired the electrician from the Edison Company, a man named Ike Hoover, who went on to become the chief usher of the White House. They hired him to flip the light switch. I think the message of this story is that we won’t use it if we don’t trust it. And so as AI is being pulled through the walls of our world, as it’s creating light and switches and tools for us to use, I think we need to ensure that we’re comfortable flipping those switches. And that is the topic of our panel today. So let’s jump in. And our first question is going to be about just the moment that we find ourselves in.

And I’m going to start with Alex. why are trust and safety important to innovation? And maybe speak to this dichotomy that I’ve highlighted. Why is it in this moment that we can’t talk about regulation, but everywhere it seems we’re talking about trust and safety?

Alexandra Reeve Givens

Yeah, first of all, thank you for convening us, and it’s a pleasure to be here. I think you really hit the nail on the head in your introduction, which is when we think about the long -term success and sustainability of AI, and that is business sustainability for the companies, as well as societal sustainability for all of us. The secret is not just an acceleration, the biggest, fastest, most capable model. The real story is one of adoption, and that has been the overwhelming theme of the summit this year. And for people to adopt this technology, they need to trust it. And that’s trust in multiple different facets, right? Is the tool fit for purpose? Does it work in your language?

Is it appropriate for your culture? Will it protect your privacy? Is your data going to be secured? What is the quality of the information that is grounding that model and those outputs? And I think people are really waking up to this, and they’re demanding more. This is both as individual users and then, of course, for enterprise customers, too, who themselves are saying, we’re on the front lines thinking about how to integrate AI into our business operations. We’re the ones who will likely be sued if this goes wrong. So this is where trust really is the fuel of innovation because it is what’s going to be the economic driver of these tools being adopted. And the other thing that I would add is what we see is not only that trust is important for innovation in the abstract, but this is also where responsible, thoughtful regulation can be fuel for innovation as well.

Because the same way that we want to be able to drive cars without all of us being experts in how a motor works, Product liability and good laws around the creation of these tools help make sure they outsource some of that work for us so that we don’t all have to be doing the individual labor of deciding whether we can trust. So many times people will create this false framing of regulation versus innovation, as opposed to thoughtful being regulation being the fuel that actually allows us to sell, buy, and use these tools.

Trevor Hughes

Excellent. Fascinating. John, I’m going to jump to you, and Amanda, I will come right back. But I’m going to jump to you. The U.K. doesn’t have an AI law in place. It has lots of laws that will apply to AI. I think data protection and the UK GDPR is a great example of that. But talk to us a little bit about regulating in the absence of an AI law. What does that look like in the U.K.? And do you see organizations exhibiting behavior that demonstrates that they’re focused on the ideas that Alex suggested, that trust and safety matter regardless of the regulatory structure that sits over them?

John Edwards

Yeah, absolutely. Absolutely. Absolutely.

Trevor Hughes

There it is.

John Edwards

No, very much so. I mean, the data protection laws apply across the board wherever technology touches personal data. So we have a de facto regulatory regime under the UK GDPR. Coming back to your comment about trust, it’s so important, and there is a role for regulation actually in assisting businesses, because businesses are trying to deliver that trust proposition to consumers. But by what metric? Right. And that’s, I think, where regulation can provide a common standard. So, you know, we require, as a regulatory tool, that you have to do data protection by design. You have to do data protection impact assessments. We expect privacy by design. We expect risk assessment. So all of these things are regulatory requirements, but they are also tools that help intermediate between businesses and the consumers to demonstrate that there is a basis for trust.

And an organization like the ICO is there for both sides to see, well, there’s someone actually overseeing that. And that’s a role that we do discharge. To your point about the absence of prescriptive regulation in the UK on AI, we don’t see that particularly as a deficit. I mean, I think there’s a lot of policy work going on in areas where policymakers and regulators do need to step in. That’s ongoing, and I won’t comment on that. But, you know, there are ongoing issues about the distribution of proceeds from the use of creative materials and the like. That carries on. But in the absence of an explicit rule, it’s incumbent on my office to deliver safety and confidence and metrics for industry and to deliver certainty over what can be seen as an uncertain law.

So we’ve gone out and said, well, here’s how we see the technology-neutral general principles of the GDPR apply when you train a model, for example. We see, for example, that the EU AI Act in Article 10 talks about the need for fairness. Well, we’ve been able to articulate those obligations by way of guidance, linking it back already to the GDPR principles. So, you know, there’s a mapping. I don’t think at the moment, for the available applications of artificial intelligence technologies, that there is a lacuna. It’s there with the GDPR. And we are there to provide confidence and certainty about how you apply it, how you improve your products with it, and how by doing so you engender that trust that you described at the outset.

Trevor Hughes

Excellent. Okay, so Amanda, tell us, do you agree that there’s not a need for additional guardrails, traffic indicators in AI? Is John right that the existing regulatory structure is really providing enough guidance, or is it the case that Microsoft is using internal principles, frameworks, standards that you might adopt to build programs and services that you think meet the expectations of trust and safety of the marketplace?

Amanda Craig

Thank you. From a Microsoft perspective, we are focused on implementing our responsible AI governance program, and we see opportunity for lots of different governance models that governments could pursue, whether implementing existing regulation or developing additional regulation that complements it. The through line for us, the bottom line, is very much what Alex started us off with: we’ve seen through multiple generations of technology that we’re not going to have adoption, we’re not going to have use of this technology, without trust. And we need governance programs at technology companies, and governance efforts by governments, that ensure we have an evolving conversation about trust.

Because if I pull the thread on the analogy you started us with, how flipping on a light switch can be scary when you’ve not done it before, I think the other thing that is very challenging and true about this technology is that it is also very dynamic. It is evolving very quickly. People might even be scared that they won’t know where to find the light switch next week. And that brings a whole different set of challenges. So it requires not just confidence that you can trust the technology today, but also trust in a governance process that will continue to iterate and evolve alongside the technology.

Trevor Hughes

Excellent. Denise, help us then here. I know Singapore has released guidelines, standards around AI. Tell us about the Singaporean experience in thinking about regulating trust and safety in AI.

Denise Wong

Thanks so much, Trevor. And thank you to the IAPP for putting this together and for having us. Maybe I’ll answer that question by linking some of the concepts that we’ve talked about, because they underpin our philosophy. Trust and safety is the outcome that we want: we want to create the necessary conditions for society to thrive, for the public and for enterprises to use the technology with confidence. AI for the public good. To do that, we need governance, a framework for thinking about how we can govern the technology, and we’ve been doing this for all sorts of technology; AI is but one. Regulations are one type of governance mechanism, one that you use when the necessary and correct conditions exist.

And that mapping of concepts informs how we think about our governance approach. So on issues that are very clear, where there are clear harms, we have stepped in to regulate. An example is the election regulations we put in place prohibiting the use of AI deepfakes to represent candidates. It was time-limited, for the period of the elections, but we stepped in and put a law in place for that. We also have laws for AI creating online harms, as well as AI in scam situations. So that is the part where we regulate for clear and present harms. For the rest of it, a lot we leave to sectoral regulations, where there’s already a web of existing rules, and to specific issues as well.

John and I and many of us are in the data protection field where, as John has said, there are already existing laws that can be adapted, updated and reviewed to deal with this new technology. Where we have done the AI governance frameworks and tools that you mentioned is where we’ve seen a need to create horizontal principles and platforms for thinking about the sector-agnostic general issues: transparency, and what model governance for corporates could look like. We haven’t seen the need to regulate that horizontal layer just yet, but there is certainly a need to articulate some of these principles. That also allows us to create more certainty for the market, to give it some direction that this can be a market-driven assurance system that has demand, has supply, and has what we’ll call proto-standards: early-days standards for what good looks like.

So that’s the work that we’ve been doing, trying to create and simplify. We have seeded an assurance ecosystem that sits, I would say, adjacent and complementary to regulations where they’re needed.

Trevor Hughes

Fantastic. Please, please.

Alexandra Reeve Givens

So just to comment on that, one area that I think is proving very important, and people are discovering this across jurisdictions, is even where existing laws apply, there is a problem where AI systems make it hard to know whether or not those laws are being broken. So this is where that transparency layer you were articulating really becomes important.

Trevor Hughes

Give us an example.

Alexandra Reeve Givens

Yeah, and I’m going to make it U.S.-centric just because it’s the one that’s top of mind, so forgive the bias here. In the U.S., we have equal employment laws. It is against the law to discriminate in the course of hiring. So in theory, a piece of software that perpetuates discrimination against particular candidates, for example by not considering the resumes of people over a certain age, is violating an existing law. So people will say we don’t need any further regulation; we’re done. The problem is that in a human-run system, where it was just a bad apple in the HR department, it has historically been easier to prove that case. Now, when it’s AI-powered software making that decision, it is really hard, as a worker who has just put in a resume and never got an answer back, to know whether something went wrong.

If you actually get up your courage and file a case, it is really hard to prove discrimination. And so without some type of disclosure regime that requires transparency in these high-risk, high-impact scenarios, with disclosure about the system being used and impact assessments to make sure discrimination isn’t happening, you don’t actually get the remedy that people really need under existing law. And so that’s where I think this horizontal piece can complement the sector-specific vertical laws in a light-touch way, yet actually give meaning to the laws on the books.

Trevor Hughes

So I think that’s a great example of the harm trigger that Denise described: we identify a clear harm, and that may be a place where additional regulatory structure might be helpful. I think we heard pretty significant consensus across our panel that trust and safety are good. That’s a great consensus to achieve. And not complete consensus on the idea that additional regulation is needed yet, with the exception perhaps of a few scenarios in which we can identify high risk or harm. Let’s go to our audience for a second. Help us describe the relationship between innovation and regulation in AI. If you think it’s a great relationship, thumbs up. If you think it’s a bad relationship, thumbs down.

If you think it’s complicated, make it complicated. What do we think? Oh, I see a lot of “complicated.” What does our panel think? I think it’s a good relationship between innovation and appropriate regulation. Fascinating. We have a very strongly opinionated audience here. That’s great. Let’s talk about regulation again and dive in just a bit deeper. I think one of the things that’s tremendously challenging with prescriptive regulation is trying to understand harms that might occur before a technology is broadly adopted in the marketplace. I’m a veteran of the privacy world going back to 1995, 1996. And in the late 1990s, we were talking extensively about cookies: how do we regulate cookies, and what are the privacy issues associated with them?

Guess what? We’re still talking about cookies, often. And I know many of the privacy and data protection folks are nodding already, crying a little bit, because it’s so, so painful to implement many of the cookie banners and cookie consent mechanisms that we have. And I’m not entirely sure, we might even get John to admit this, that those cookie banners are actually driving the outcomes we hoped for, that we identified the biggest and worst harm or concern and dedicated resources appropriately to it. Amanda, I’m going to jump right to you. Talk to us a little bit about identifying those harms. Alex gave us one: AI reviewing HR submissions, resumes, CVs, where the language in those CVs may create results that were not intended, that create bias, and that would be easier to find in a human-driven system but much, much harder to find in an AI-driven system.

That’s a great example. How do we identify those prescriptive harms, those harms that we’re not quite sure about yet, that may emerge? Do we do it through principles, through ethics, through what?

Amanda Craig

I think all of the above, to some extent. Part of why we start with principles in our governance program is that it’s helpful to orient towards what we care about as we then try to build a program that realizes those outcomes. I think we can also look at existing law that reflects where there are harms, like in the employment context, where people could be mistreated or treated unfairly in ways we know we care about. And there’s been a lot of effort in regulation to define high risk and high impact. At Microsoft, we have something called sensitive-use scenarios, with three categories: where technology could have an impact on someone’s life opportunities or a consequential life impact, something like employment or education opportunities, for example, or how someone is treated under the law.

The second big category of harm we have defined is around the risk of psychological or physical harm: think about vulnerable populations there, and think about the use of AI in critical infrastructure. And the third category is the use of AI that impacts human rights. So we have our way of defining what is really high impact, and a lot of governments, again, have taken different routes. The other thing we’ve seen is the emergence of a conversation around technology itself posing specific high risks, for example highly capable models, which have a whole other set of risks that are now being defined.

And that’s one thing I just want to draw out as we think about this. I didn’t grow up in the privacy world; I grew up in the cybersecurity world. One of the things I think a lot about as we work on defining these harms and figuring out what to do about them, something we can learn from decades of work on cybersecurity, is the challenge of addressing risk across the supply chain. I think it’s a slightly different conversation in AI than it has been traditionally in security with software and cloud technologies. But there is a common principle or approach that I think we should look at closely: in the context of AI, we are oftentimes thinking about risk and harm where the technology is actually used, right?

And then what’s difficult is figuring out what we do across the whole supply chain to manage that risk and have that be cohesive. In the cybersecurity context, we know what the risk or harm is; it’s much simpler. It’s security risk that we care about. But we have the same challenge of how to manage that risk across the supply chain. And one of the lessons of decades of work in the cybersecurity context is the temptation to put emphasis on one part of the supply chain or another at any given moment, instead of really dealing with the hard governance challenge that it is everything at once.

And so I think when we, you know, think about the complexity of defining harms in the AI space, that’s important work to do. And also, in the context of managing risk for any of those harms being realized, we also need to think really hard about looking across the whole supply chain at once. Even though it’s hard from a governance perspective, that’s going to be most important for managing the risk ultimately.

Trevor Hughes

Fantastic. And I misspoke: it’s prospective, not prescriptive, regulation. But John and Denise, maybe talk to us a little bit about that. And let me frame it for you both; Denise, we’ll have you start. Clearly, with data protection regulation, we have had the GDPR now for over seven years, and its effect on the global policy environment has been enormous. We now have over 120 countries with privacy laws in place, many, many of them with genealogical lines that point back to the GDPR. And yet we haven’t seen that in AI yet. The EU AI Act has not taken off around the world.

We don’t see a Brussels effect happening on AI. Is it because of the challenge of identifying harm, the challenge of prospectively trying to identify what might…

Denise Wong

You always ask me the tough questions. Let me take the harms question first, because I think that’s relevant to the regulation question you’re asking. I think the starting point must be that every country has a unique context, and it’s the job of each government to figure out what’s harmful to its society. There’s going to be a huge amount of overlap, but at the end of the day, what’s harmful in India may not be the same as what’s harmful in the US, and the cultural context matters. That said, I feel there’s actually increasing consensus about what harms, or archetypes of harms, there are vis-a-vis AI.

And we see that, for example, in the International AI Safety Report, which is starting to anchor some of this taxonomy, these buckets and archetypes of harm, and we also see that beginning to happen at Iceland, for example. Those conversations are happening. How does that link to prescriptive regulation or legislation? I think that if the harms are still coalescing and forming, it’s quite difficult to be very prescriptive about how you deal with them, because that, by definition, is still changing and still quite nascent. That’s not to say we should step back. I think we just probably need a slightly more agile way of thinking about that broader concept of governance.

So in the social media context in Singapore, we did it via codes of practice. We have a broad umbrella law that creates a legislative frame within which these codes of practice apply, but the codes of practice can be updated more easily. The same is true, actually, of our data protection law, the PDPA, which is structured quite differently from the GDPR. The PDPA is actually not very prescriptive; it’s outcome-driven and fairly broad, and most of the compliance guidance the PDPC provides is done through advisory guidelines. So I think there are regulatory mechanisms you can use that are less prescriptive than primary legislation, and that gives you enough levers, tools in a toolkit, basically, to deal with the harms and the problems that society is facing.

Trevor Hughes

Excellent.

Alexandra Reeve Givens

To dispute you a little bit on the lack of a Brussels effect: going back to Denise’s point, not only is there some harmonization happening around the scoping of the harms, which I think certainly is happening, but also on potential points of intervention. For example, one of the key elements of the EU AI Act is looking at high-risk scenarios and having risk mitigations in place. We have similar laws under consideration in multiple states in the United States, and one on the books already in Colorado. They would never say it is a copycat; it came from its own origins. But it is lawmakers thinking about what an appropriately scaled intervention to that particular risk looks like.

You can look at the recent transparency laws that were passed in California and New York, with very similar discussions to the Code of Practice for General-Purpose AI models that came out under the EU AI Act. You can look at the EU AI Act’s provision for regulatory sandboxes and this notion that we want small and medium-sized enterprises, and others, to be able to innovate and get a little bit of forgiveness or wiggle room under the laws as they figure out how the regulations apply. That law just got passed in Utah. So there are these glimmers where we are seeing smart solutions to specific problems and people learning from each other.

Trevor Hughes

I think in the absence of that umbrella AI standard, interaction with fellow regulators across disciplines and domains becomes really important. Or rather, I’ll ask you, John: does it become really important?

John Edwards

Yeah, it is. It’s hugely important that we coordinate. These are new challenges that we’re all facing. On the Grok issue, obviously, it’s under investigation, so I won’t be able to say too much about it. But we’re interested in how models are trained, what data they’re trained on, what output filters are included, what kind of safety mechanisms exist. I’m interested in what kind of ingestion of data there is when it’s used at that level. But there’s some complexity in that case as well, because you’ve got users using a tool that’s amplified by social media. I don’t know whether the same functionality is available in any other image generation tool that just hasn’t got the same attention because it’s not amplified by a social media platform.

But very early on, I think I was back home in New Zealand, actually, on about the 5th of January, and started to see this. And I messaged back to the office and said: what are we doing? What’s Ofcom doing? How are we connecting with our international colleagues? That’s so important. And so we’ve messaged into the GPA, and we’ve coordinated very closely with Ofcom. And we have to cope with the fact that regulation is a little bit fragmented. Ofcom is responsible for administering the Online Safety Act in the UK, legislation that seeks to regulate the kinds of harmful content that can be delivered to a child’s device, for example.

Right. I see this thing. Is it regulated by online safety law? If so, it’s Ofcom. How did it get to me? Well, that depends on how the underlying data was processed, and that becomes an ICO, GDPR issue. So we need to be working very, very closely, and we are. But also, with the Grok issue, one of the very early things we did was to reach out to our colleagues in the GPA, the Global Privacy Assembly, and say: who else is looking at this? Let’s make sure that we’re not treading on each other’s toes, or at least that we’re sharing information, that we’ve got the same ideas, that we think the same way. And that can be tremendously powerful, whether or not you can point to a regulation that the app or platform is clearly in breach of.

Describing a set of expectations about harm mitigation across a coordinated group of global regulators, I think, can be quite powerful. And the alternative for some of these platforms is not necessarily being investigated and fined by the ICO. It’s what I noticed on my first day here, when I went to open TikTok and saw “this is not available in this country.” If the offering in a particular jurisdiction does not meet the standards and norms of that jurisdiction, these organizations need to understand that they can be switched off, that they are not actually all-powerful.

Trevor Hughes

I just have the image of the U.K. Information Commissioner doom-scrolling TikTok in my head now. Let’s do a quick round, and please do keep your answers short. Innovation is not limited to technology, and not limited to business practices. It’s also very powerful in the privacy-enhancing, safety-enhancing tools that we use inside organizations, and in regulatory structures; Denise has mentioned regulatory sandboxes, or maybe it was Alex, but we’ve heard them mentioned. What is the one innovative idea in trust and safety that you think holds real promise? I’ll let you have one sentence to explain it, but this is a speed round. So we’ll start with Amanda, work down, and come back to Alex.

Amanda Craig

One sentence. Okay. Is that my sentence? I think about provenance tools as an area of innovation. Again, this is calling upon my cybersecurity background, but something like agentic AI is an area where there’s a lot of interest, concern and governance momentum. And one of the challenges is that you’re looking at something that is fundamentally not just one technology; it’s a bunch of very dynamic components, models, platform tools, services and applications all working together. And while that feels like a really new, hard challenge, we can actually draw on what we know of software, which is also a set of dynamic components. One of the ways we’ve figured out how to govern that, or are working towards governing it, is with software bills of materials, something that really gives you the ability to track those dynamic components.

And I think that’s something we can apply to agents.

Trevor Hughes

So it increases transparency. It tells you, you know, which algorithm or which system this might have come from. It helps with accountability broadly. Yeah. Excellent. John, what’s the most promising trust and safety innovation that we have?

John Edwards

Well, you challenged us with one word. So I’m going to go with agency. Agency. For me, it’s a word that speaks to how much of our world is dominated by consent, which is, I won’t say broken, but under strain as a useful concept. Agency, I think, has the capacity to recognize that the objective is to restore and maintain an individual’s agency as they use any product. And that’s more than consent. It’s actually making sure that provenance is delivered, for example: you can’t have agency if you don’t know the origin of the data that is delivering this agentic miracle to you. It gives you tools at the other end, too. Consent is always conceived of as a front-end authorizing concept.

But agency says: okay, I’ve done that now. Where’s my “delete everything” button? Or my “I don’t want to do this anymore” button? So I think if developers can be thinking about how they deliver the best possible service in a way that restores and maintains the agency of the consumer, that will go a long way to addressing some of the problems that we’re seeing.

Trevor Hughes

Fantastic. I had a professor, years ago now, who described burden-shifting wrenches in the law. And I think consent is a burden-shifting wrench that moved much of the burden to the data subject, to the individual. Agency, it sounds to me, is an idea to move that burden back to those who might be accountable, and to have them hold fiduciary or stewardship responsibilities for that person. Denise?

Denise Wong

I would pick privacy-enhancing technology. I think it’s an interesting technological way to deal with at least one part of the problem: how do we secure the data, how do we make sure that personal information is well protected? And it’s advancing so quickly. Two years ago, we were looking at federated learning for training AI models, and no one could figure it out. I think it’s actually being done in production now. So, and I’m a lawyer, so I can say this: sometimes the law cannot solve the problem, but maybe another technology can.

Trevor Hughes

Fantastic. Alex?

Alexandra Reeve Givens

Well-staffed, empowered, independent regulatory bodies that can help represent the public interest. Wow. And because in some countries those are under attack right now, where that is not available: well-resourced, technically informed, independent civil society that can play that role in the interim.

Trevor Hughes

Fantastic. Yeah, the importance of having watchdogs, entities that are watching, observing, commenting and enforcing, really powerful. So there are four great innovations: provenance, agency, privacy-enhancing technologies, and well-funded regulators or civil society. Well done. I think that is a great start. Let’s do another audience poll. How many of you here in this audience are responsible for AI, AI governance, AI ethics or AI safety inside your organization? Hands up. It’s almost the whole room. Keep your hand up if you’re also responsible for something else in addition to AI. I think that’s a pretty complete overlap, or almost: a significant percentage of you are responsible for more than one thing, and one of those things is AI.

I think that’s an example of the complexity we see inside organizations today. John described the coordination necessary between Ofcom and the ICO in the Grok investigation, which is ongoing, because there was not a single place where regulatory authority existed to address that concern. This is a really complex environment. The harms and issues span from children’s safety to intellectual property, from bias and algorithmic discrimination all the way through deepfakes and beyond. Alex, how do we put all of that into a pot and make it something meaningful?

Alexandra Reeve Givens

Well, what if you can’t put it all into one pot? AI is the common denominator in all of those things, but it is a tool that touches everything. So I really do think you need a nuanced approach that looks at a particular risk and the mitigations for that risk, and goes from there. The privacy considerations when you are sharing your most intimate concerns and questions about the world with a chatbot are very different from the questions about deepfakes, fraud and impersonation; you need a different legal regime. I think one of the common elements that runs through all of this is that transparency and rigorous approaches to risk mitigation really matter, and that can come either through regulation or through principles and best practices with meaning, standardization, and watchdogs reading those disclosures.

And the second is this burden on the user. When Trevor introduced me, we described my organization: we represent users’ rights around the world. I am all for user empowerment. And also, we cannot put the burden solely on users to navigate this moment. That is the major lesson of the cookie example you gave before. We didn’t misdiagnose the harm; we misdiagnosed the remedy, which placed the burden on individual users when we don’t actually have market choice, nor the time or mental energy to read a whole bunch of disclosures and act alone. So we need solutions that acknowledge the harm, are tailored, but also take that burden off individual users: empowering users without burdening them or leaving them to essentially defend themselves unprotected.

We have to think about that.

Trevor Hughes

Okay. Sadly, we are at the end of our time, but we have one more pop question for all of you, and we’re going to let this be our close. We have gone through the AI Impact Summit, the AI Action Summit, the AI Safety Summit. Five years from now. What is the AI summit going to be called? What’s the word that’s going to be in the middle there? So this is a one -word answer again. What’s it going to be? I know it’s a tough question. So, Denise, I’ll start with you because you’re able to handle the toughest questions. Ah, the AI Trust Summit. Okay, John?

John Edwards

Nostalgia.

Trevor Hughes

Nostalgia.

Amanda Craig

Thriving.

Trevor Hughes

Thriving, AI Thriving Summit. Okay.

Alexandra Reeve Givens

I’m going to cheat: for the people, by the people. I know, it’s more words.

Trevor Hughes

Some of the people. That’s hilarious; it’ll be a lot to get on a poster. Here’s what I know. I know that there is incredibly hard work that needs to be done to bring trust and safety to this ridiculously powerful technology, one that, as Sundar Pichai says, will be more profound than electricity. That hard work happens every single day inside the organizations that are implementing these tools, inside the civil society groups that are watching and guiding that behavior, and inside the regulatory offices that are navigating to ensure that marketplaces around the world, and the digital economy, get this right. I feel better because people like this are doing that work every day, and I hope you’ll join me in thanking them.

Thank you very much. Thank you. Well done. You were fantastic, as expected.

Related Resources: Knowledge base sources related to the discussion topics (18)
Factual Notes: Claims verified against the Diplo knowledge base (4)
Confirmed (high)

“Alexandra Reeve Givens is the CEO of the Center for Democracy and Technology.”

The knowledge base lists Alexandra Reeve Givens as the CEO of the Center for Democracy and Technology in multiple entries [S1] and [S3].

Additional Context (medium)

“Regulation can act as a guarantee of trust and an engine for economic growth in AI.”

Broader commentary in the knowledge base describes regulation as a force for economic growth and a guarantee of trust in AI contexts [S89] and [S90], providing nuance to the panelists’ framing of principle-based regulation as a trust-fuel.

Additional Context (medium)

“AI‑generated deep‑fakes used in elections are a clear present harm and Singapore’s PDPC has enacted specific regulations to address them.”

The knowledge base highlights deep-fakes as a significant challenge that requires government oversight and regulation [S97], and notes Singapore’s awareness of vulnerable groups and the need to bridge digital divides, which aligns with a targeted regulatory response [S94].

Additional Context (low)

“Singapore’s regulator relies on existing sectoral laws and rapidly updatable codes of practice as an outcome‑driven complement to the more prescriptive PDPA.”

While the knowledge base does not detail the exact regulatory mechanism, it mentions Singapore’s focus on protecting vulnerable groups and ensuring inclusive digital policies, suggesting an outcome-driven, flexible approach alongside existing legislation [S94].

External Sources (97)
S1
How Trust and Safety Drive Innovation and Sustainable Growth — -Alexandra Reeve Givens- CEO of the Center for Democracy and Technology, one of the leading advocacy organizations worki…
S2
Open Forum: A Primer on AI — Artificial Intelligence (AI) has been widely adopted across various sectors, including facial recognition, online shoppi…
S3
https://dig.watch/event/india-ai-impact-summit-2026/how-trust-and-safety-drive-innovation-and-sustainable-growth — I just have the image of the U.K. Information Commissioner doom -scrolling TikTok in my head now. Let’s do a quick round…
S4
How Trust and Safety Drive Innovation and Sustainable Growth — – Alexandra Reeve Givens- Amanda Craig – Denise Wong- Amanda Craig
S5
How Trust and Safety Drive Innovation and Sustainable Growth — – Alexandra Reeve Givens- Trevor Hughes- Amanda Craig
S6
https://dig.watch/event/india-ai-impact-summit-2026/transforming-health-systems-with-ai-from-lab-to-last-mile — And welcome. And… And her background is also in this both biomedical field, science innovation field, but also has ext…
S7
How Trust and Safety Drive Innovation and Sustainable Growth — and then we’re going to dive right into my immediate left. I have Alexandra Reeve Givens, who is the CEO of the Center for D…
S8
https://dig.watch/event/india-ai-impact-summit-2026/how-trust-and-safety-drive-innovation-and-sustainable-growth — Fantastic. I had a lot of… Professor years ago now who described burden -shifting wrenches in the law. And I think con…
S9
Sandboxes for Data Governance: Global Responsible Innovation | IGF 2023 WS #279 — Denis Wong serves as both the Data Protection Deputy Commissioner and the Assistant Chief Executive of IMDA, highlightin…
S10
https://dig.watch/event/india-ai-impact-summit-2026/how-trust-and-safety-drive-innovation-and-sustainable-growth — and then we’re going to dive right into my immediate left. I have Alexandra Reeve Givens, who is the CEO of the Center for D…
S11
How Trust and Safety Drive Innovation and Sustainable Growth — John Edwards, Information Commissioner of the United Kingdom
S12
WS #362 Incorporating Human Rights in AI Risk Management — Criticism of lack of enforceability but potential value in encouraging company participation, challenges in articulating…
S13
Advancing digital inclusion and human-rights:ROAM-X approach | IGF 2023 — However, the assessment process has been impeded by insufficient data and other challenges. This has hindered the accura…
S14
NYC’s anti-bias law holds algorithms accountable in hiring decisions — New York City has enforced a new law called Local Law 144,which mandates that employers utilising algorithms for hiring,…
S15
EU’s AI Act faces tech giants’ resistance — As the EU finalises its groundbreaking AI Act, major technology firms arelobbyingfor lenient regulations to minimise the…
S16
Navigating AI regulation: US state lawmakers strive for innovation and accountability — Lawmakers in various US states are directing their focus towards AI,grappling with the intricacies of this rapidly evolv…
S17
Laying the foundations for AI governance — This perspective suggests that well-designed regulation could support innovation by providing clear guidelines and consi…
S18
How can sandboxes spur responsible data-sharing across borders? (Datasphere Initiative) — Harmonizing cross-border regulations and practices within the African continent presents challenges due to differing reg…
S19
Evolving AI, evolving governance: from principles to action | IGF 2023 WS #196 — Thomas Schneider:Yes, thank you. And it is actually good that we live in a hybrid world, so I was able to follow the dis…
S20
https://dig.watch/event/india-ai-impact-summit-2026/agents-of-change-ai-for-government-services-climate-resilience — This is kind of, the result is a little off. I’m going to give it some more feedback. I’m going to reassess the results….
S21
Can (generative) AI be compatible with Data Protection? | IGF 2023 #24 — Applying the right to information laws to these entities ensures that transparency is maintained and that they are held …
S22
For the record: AI, creativity, and the future of music — Copyright Protection and Legal Framework (Legal and regulatory | Human rights). Victoria Oakley argues that existing copy…
S23
International multistakeholder cooperation for AI standards | IGF 2023 WS #465 — Matilda Road:Mathilda, over to you. Thank you, Florian. Good morning everyone. It’s great to see so many of you here and…
S24
Searching for Standards: The Global Competition to Govern AI | IGF 2023 — In conclusion, the UNESCO recommendation on AI ethics provides crucial guidance for global AI governance. By grounding A…
S25
Opening address of the co-chairs of the AI Governance Dialogue — Tomas Lamanauskas: Thank you, thank you very much Charlotte indeed, and thank you everyone coming here this morning to j…
S26
Keynote Addresses at India AI Impact Summit 2026 — “We find ourselves grappling with a global supply chain that is massively over-concentrated.”[87]. “We are forging a su…
S27
From summer disillusionment to autumn clarity: Ten lessons for AI — Additionally, the EU’s long-negotiated AI Act imposes strict rules on AI systems (e.g. high-risk systems must meet safet…
S28
Safe Digital Futures for Children: Aligning Global Agendas | IGF 2023 WS #403 — The analysis examines topics such as online crime, the dark web, internet fragmentation, internet companies, innovation,…
S29
Do we really need specialised AI regulation? — The pyramid reveals a clear pattern: most layers of AI are already regulated. Hardware is controlled, data is protected (…
S30
WS #162 Overregulation: Balance Policy and Innovation in Technology — Galvez suggests that countries should consider their local needs and existing regulations when developing AI governance …
S31
What is it about AI that we need to regulate? — What is it about AI that we need to regulate? The discussions across the Internet Governance Forum 2025 sessions revealed…
S32
AI Meets Cybersecurity Trust Governance &amp; Global Security — Maria stresses that identifying AI‑related harms requires joint effort from governments, civil society, and industry, no…
S33
Secure Finance Risk-Based AI Policy for the Banking Sector — The panel explored how AI governance frameworks must account for India’s linguistic diversity, demographic heterogeneity…
S34
WS #98 Towards a global, risk-adaptive AI governance framework — Sector-specific and use case-specific governance may be needed rather than one-size-fits-all approaches
S35
WS #453 Leveraging Tech Science Diplomacy for Digital Cooperation — Muñoz emphasized that “science diplomacy doesn’t remain confined to policy papers. It creates concrete tools, infrastruc…
S36
Driving India's AI Future Growth Innovation and Impact — These key comments fundamentally shaped the discussion by expanding it beyond technical infrastructure to encompass trus…
S37
Council of Europe’s Framework Convention on Artificial Intelligence and Human Rights — Need for regulatory coherence and coordination Relationship Between Different Regulatory Frameworks The Council of Eur…
S38
Lightning Talk #107 Irish Regulator Builds a Safe and Trusted Online Environment — Both recognize the critical need for effective coordination between regulators across jurisdictions, though they acknowl…
S39
Competition law and regulations for digital markets: What are the best policy options for developing countries? (UNCTAD) — Coordinated enforcement across jurisdictions is deemed crucial for effective regulation. The EU’s Digital Markets Act se…
S40
Policy Network on Artificial Intelligence | IGF 2023 — A context-specific and rule of law approach, advocated by Sarayu Natarajan, is key to effectively addressing this issue….
S41
Discussion Summary: US AI Governance Strategy Under the Trump Administration — Ball supports addressing AI-related problems through traditional legal mechanisms like courts and liability systems rath…
S42
Technology Regulation and AI Governance Panel Discussion — Regulate against those harms and figure out who’s gonna be responsible for it and legislate that way. I think is the rig…
S43
A Global Human Rights Approach to Responsible AI Governance | IGF 2023 WS #288 — Finally, the analysis suggests that laws in countries like Canada can have a significant influence on global regulations…
S44
Can (generative) AI be compatible with Data Protection? | IGF 2023 #24 — The “Brussels effect” is mentioned as a factor that may have negative impacts in non-European contexts. Concerns are rai…
S45
Tech diplomacy could help solve global challenges — Protecting citizens with comprehensive and global regulation is a priority to promote the ethical, responsible, human-ce…
S46
WEF Business Engagement Session: Safety in Innovation – Building Digital Trust and Resilience — Regulation can foster innovation rather than constrain it when properly implemented There’s unexpected consensus that r…
S47
Youth-Driven Tech: Empowering Next-Gen Innovators | IGF 2023 WS #417 — Regulation is seen as a means to foster innovation, rather than block it. The stance is that regulation can actually enc…
S48
WS #162 Overregulation: Balance Policy and Innovation in Technology — Flexible, principle-based approaches can foster innovation while protecting rights Regulation is necessary but should n…
S49
AI Governance: Ensuring equity and accountability in the digital economy (UNCTAD) — Building trust in digital systems and expanding participation in AI decision-making are essential for successful impleme…
S50
Closing remarks – Charting the path forward — Al Mesmar emphasizes the importance of unified policy approaches that can adapt to technological changes while maintaini…
S51
Closing the Governance Gaps: New Paradigms for a Safer DNS — Although regulation in the DNS industry is inevitable, it should aim to avoid fragmented jurisdictional approaches. If t…
S52
Informal Stakeholder Consultation Session — Digital transformation affects every sector, so coordinated policymaking helps ensure coherence and better outcomes for …
S53
WS #55 Future of Governance in Africa — Effective digital governance requires collaboration between government and industry stakeholders. This approach ensures …
S54
Setting the Rules_ Global AI Standards for Growth and Governance — So consensus around the need to do it, consensus around the fact that it’s hard, but it’s important for consumers and bu…
S55
How Trust and Safety Drive Innovation and Sustainable Growth — “So this is where trust really is the fuel of innovation because it is what’s going to be the economic driver of these t…
S56
AI as critical infrastructure for continuity in public services — “Trust also can influence economic confidence and cross -border collaboration.”[54]. “Standards are a very important pil…
S57
How AI Drives Innovation and Economic Growth — And I’m seeing two patterns. One is about trust in technology, and the second part is about the reality of the policy wo…
S58
AI for Social Empowerment_ Driving Change and Inclusion — She argues that immediate policy action is required across competition, tax, labour and social protection to mitigate AI…
S59
Open Forum #17 AI Regulation Insights From Parliaments — Balancing innovation incentives with regulatory protection There’s a critical balance needed between regulation and inn…
S60
Israel’s Policy on Artificial Intelligence Regulation and Ethics — Empowering sector-specific regulators:The need for any regulation of the development and use of artificial intelligence …
S61
Sandboxes for Data Governance: Global Responsible Innovation | IGF 2023 WS #279 — Collaboration among different countries and stakeholders is seen as a key driver for advancing regulatory sandboxes and …
S62
Workshop 1: AI &amp; non-discrimination in digital spaces: from prevention to redress — Equality bodies cannot address algorithmic discrimination alone and need to work with data protection authorities, consu…
S63
Competition law and regulations for digital markets: What are the best policy options for developing countries? (UNCTAD) — Coordinated enforcement across jurisdictions is deemed crucial for effective regulation. The EU’s Digital Markets Act se…
S64
Digital Embassies for Sovereign AI — This addresses the need for adaptive governance frameworks that can keep pace with rapid technological change
S65
Panel Discussion: Europe’s AI Governance Strategy in the Face of Global Competition — Alexander E. Brunner, Enzo Maria Le Fevre Cervini, Armando Geller. While disagreeing th…
S66
From principles to practice: Governing advanced AI in action — Balancing rapid technological advancement with necessary governance frameworks across different regional approaches T…
S67
WS #97 Interoperability of AI Governance: Scope and Mechanism — Rapid technological advancement poses challenges for governance frameworks to keep pace
S68
How Trust and Safety Drive Innovation and Sustainable Growth — And I come from the IAPP. If you don’t know the IAPP, we are a global professional association, a not-for-profit but a…
S69
Keynote-Roy Jakobs — This comment introduces a systems-thinking perspective that acknowledges the complexity of AI implementation beyond just…
S70
Do we really need specialised AI regulation? — The pyramid reveals a clear pattern: most layers of AI are already regulated. Hardware is controlled, data is protected (…
S71
Artificial Intelligence &amp; Emerging Tech — The need for new mechanisms to safeguard data, in addition to consent, is becoming increasingly important. There is a gr…
S72
Main Session 2: The governance of artificial intelligence — Claybaugh contends that there are already legal frameworks in place that pre-date ChatGPT covering issues like copyright…
S73
WS #162 Overregulation: Balance Policy and Innovation in Technology — Galvez suggests that countries should consider their local needs and existing regulations when developing AI governance …
S74
Rethinking AI regulation: Are new laws really necessary? — Specialised AI regulation may not be necessary, as existing laws already cover many aspects of AI-related concerns. Jova…
S75
Practical Toolkits for AI Risk Mitigation for Businesses — In healthcare, risks involve threats to life, privacy, equality, and individual autonomy. Similarly, the retail sector a…
S76
AI Meets Cybersecurity Trust Governance &amp; Global Security — Maria stresses that identifying AI‑related harms requires joint effort from governments, civil society, and industry, no…
S77
What is it about AI that we need to regulate? — What is it about AI that we need to regulate? The discussions across the Internet Governance Forum 2025 sessions revealed…
S78
WS #98 Towards a global, risk-adaptive AI governance framework — Sector-specific and use case-specific governance may be needed rather than one-size-fits-all approaches
S79
WS #453 Leveraging Tech Science Diplomacy for Digital Cooperation — Muñoz emphasized that “science diplomacy doesn’t remain confined to policy papers. It creates concrete tools, infrastruc…
S80
Secure Finance Risk-Based AI Policy for the Banking Sector — Trust is built when systems are predictable, explainable, and accountable. Trust deepens when innovation aligns with pub…
S81
Interdisciplinary approaches — Online trust today faces several main challenges. The technical entities that run the global infrastructure need to pres…
S82
Laying the foundations for AI governance — How to balance the need for regulation with avoiding fragmentation across different jurisdictions
S83
Council of Europe’s Framework Convention on Artificial Intelligence and Human Rights — Need for regulatory coherence and coordination Relationship Between Different Regulatory Frameworks The Council of Eur…
S84
Closing the Governance Gaps: New Paradigms for a Safer DNS — By showcasing their collective commitment to harm mitigation, the DNS sector can send a message to regulators about thei…
S85
Lightning Talk #107 Irish Regulator Builds a Safe and Trusted Online Environment — Both recognize the critical need for effective coordination between regulators across jurisdictions, though they acknowl…
S86
Data free flow with trust: a collaborative path to progress (ICC) — However, concerns about national security, privacy, and economic safety have arisen, leading to the implementation of re…
S87
Day 0 Event #220 Restoring Internet Credibility and Preserving Democracy — Nagai briefly mentioned what he called a “dead loop” in democratic governance, observing that disinformation campaigns c…
S88
Democratizing AI Building Trustworthy Systems for Everyone — I think open source is going to be in my mind a critical aspect of it. You’ll have to see how far open source movement t…
S89
Technology Rewiring Global Finance: A Panel Discussion Summary — Koffey emphasized that regulation must be a force for economic growth and innovation, breeding adoption and trust throug…
S90
Tokenisation and the Future of Global Finance: A World Economic Forum 2026 Panel Discussion — Regulation is not the enemy of innovation but a guarantee of trust, requiring the right balance between innovation and r…
S91
Main Session on Artificial Intelligence | IGF 2023 — Clara Neppel:Well, I think that these are certain things which can be addressed both on a voluntary level as well as at …
S92
Exploring Emerging PE³Ts for Data Governance with Trust | IGF 2023 Open Forum #161 — Additionally, a platform is used for companies to provide feedback and declare their compliance. Interestingly, the syst…
S93
Cross-Border Data Flows: Harmonizing trust through interoperability mechanisms (DCO) — Another noteworthy concern raised during the discussion was the dominance of larger tech companies and its impact on sma…
S94
AI is here. Are countries ready, or not? | IGF 2023 Open Forum #131 — Additionally, there is apprehension about the potential negative impacts of technology, especially in terms of widening …
S95
Michigan to introduce legislation combatting deceptive uses of AI in political advertising — Michigan is set to introduce state-level policies aimed at combating deceptive uses of artificial intelligence (AI) and …
S96
ByteDance unveils AI that creates uncannily realistic deepfakes — ByteDance, the company behind TikTok, hasintroducedOmniHuman-1, an advanced AI system capable of generating highly reali…
S97
360° on AI Regulations — Deepfakes pose significant challenges as they can manipulate information and distort reality. Effective government overs…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Alexandra Reeve Givens
6 arguments · 190 words per minute · 1283 words · 403 seconds
Argument 1
Trust as a prerequisite for adoption and economic growth
EXPLANATION
Alexandra argues that long‑term business and societal sustainability of AI depends on users trusting the technology. Trust drives adoption, which in turn fuels economic growth and innovation.
EVIDENCE
She notes that adoption is the overwhelming theme of the summit and that people need trust across multiple facets (fit for purpose, language, culture, privacy, data security, and model quality) before they will use AI, and that this trust will become the economic driver of tool adoption [53-68].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Trust is identified as essential for technology adoption and a driver of innovation and sustainable growth in the Trust and Safety discussion [S1].
MAJOR DISCUSSION POINT
Trust prerequisite for adoption
AGREED WITH
Trevor Hughes, John Edwards, Denise Wong, Amanda Craig
Argument 2
Lack of transparency hampers enforcement of current anti‑discrimination laws
EXPLANATION
Alexandra points out that existing US equal‑employment laws are difficult to enforce when AI systems make hiring decisions, because the opacity of the algorithms prevents plaintiffs from proving discrimination.
EVIDENCE
She describes a scenario where AI-driven hiring software may ignore older candidates, making it hard for a job applicant to know a violation occurred or to prove it in court without a disclosure regime that requires transparency and impact assessments [154-162].
MAJOR DISCUSSION POINT
Transparency needed for anti‑discrimination enforcement
AGREED WITH
John Edwards, Amanda Craig, Denise Wong
Argument 3
Discrimination in AI‑driven hiring illustrates need for disclosure and impact assessments
EXPLANATION
Building on the previous point, Alexandra emphasizes that without mandatory disclosure of AI systems used in high‑risk hiring, affected individuals cannot obtain the remedy provided by existing law.
EVIDENCE
She explains that AI-powered hiring tools can hide discriminatory outcomes, and without a transparency regime and impact assessments, victims lack evidence to bring a case under US equal-employment statutes [154-162].
MAJOR DISCUSSION POINT
Disclosure required for AI hiring decisions
Argument 4
Emerging AI‑specific statutes (EU AI Act, US state bills) show targeted regulation can aid innovation
EXPLANATION
Alexandra notes that despite the lack of a global AI law, several jurisdictions are introducing focused legislation—such as the EU AI Act’s high‑risk provisions and state‑level transparency laws in the US—that can provide clear guardrails while still encouraging innovation.
EVIDENCE
She cites the EU AI Act’s high-risk framework, Colorado’s AI law, California and New York transparency statutes, and Utah’s regulatory sandbox provisions as examples of emerging, targeted regulation [262-270].
MAJOR DISCUSSION POINT
Targeted AI legislation supports innovation
AGREED WITH
Amanda Craig, John Edwards, Denise Wong
DISAGREED WITH
Amanda Craig, John Edwards, Denise Wong
Argument 5
Well‑staffed, independent regulatory bodies and empowered civil society
EXPLANATION
Alexandra stresses that robust, well‑resourced, technically informed regulators and an active civil‑society watchdog are essential to represent the public interest and maintain trust in AI systems.
EVIDENCE
She explicitly calls for “well-staffed, empowered, independent regulatory bodies” and “well-resourced, technically informed, independent civil society” to play a role in safeguarding AI trust [355-356].
MAJOR DISCUSSION POINT
Importance of strong regulators and civil society
AGREED WITH
Trevor Hughes, John Edwards, Denise Wong, Amanda Craig
Argument 6
Emerging harmonisation through high‑risk provisions, sandboxes, and shared best practices
EXPLANATION
Alexandra observes that jurisdictions are beginning to align on high‑risk AI categories, regulatory sandboxes, and shared standards, creating early signs of global harmonisation despite the absence of a strong Brussels effect.
EVIDENCE
She references the EU AI Act’s high-risk approach, US state transparency laws, and Utah’s sandbox provision as examples of cross-jurisdictional learning and coordination [262-270].
MAJOR DISCUSSION POINT
Early global alignment on AI risk management
AGREED WITH
Amanda Craig, Denise Wong
DISAGREED WITH
Trevor Hughes, Denise Wong, John Edwards
Amanda Craig
6 arguments · 173 words per minute · 1045 words · 361 seconds
Argument 1
Trust requires evolving governance processes inside firms
EXPLANATION
Amanda argues that trust cannot be static; companies must maintain dynamic governance programs that evolve alongside rapidly changing AI technologies.
EVIDENCE
She describes Microsoft’s responsible AI governance program, the need for ongoing conversation about trust, and the challenge of a technology that changes quickly, likening it to a light switch that may be in a different place next week [112-120].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Amanda’s view that trust depends on dynamic internal governance is reflected in the Trust and Safety dialogue that pairs her with Alexandra and in discussions on evolving AI governance [S1][S19].
MAJOR DISCUSSION POINT
Dynamic internal governance for trust
AGREED WITH
Denise Wong, Alexandra Reeve Givens
DISAGREED WITH
John Edwards, Denise Wong, Alexandra Reeve Givens
Argument 2
Alignment of internal AI principles with existing data‑protection statutes
EXPLANATION
Amanda notes that Microsoft’s internal responsible‑AI frameworks are designed to complement and map onto existing data‑protection regulations, ensuring compliance while fostering trust.
EVIDENCE
She states that Microsoft sees opportunities to implement existing regulation and develop additional regulation, indicating that internal principles are aligned with current data-protection statutes [112-115].
MAJOR DISCUSSION POINT
Internal AI principles map to data‑protection law
Argument 3
Combination of internal responsible‑AI frameworks and external regulation is essential
EXPLANATION
Amanda emphasizes that both corporate governance programs and governmental regulation are needed to build and sustain trust in AI, with each reinforcing the other.
EVIDENCE
She explains that Microsoft focuses on responsible AI governance while also seeing the need for government-led governance models, highlighting the complementary role of internal frameworks and external regulation [112-119].
MAJOR DISCUSSION POINT
Synergy of internal and external governance
Argument 4
Definition of high‑risk “sensitive use” categories and supply‑chain risk management
EXPLANATION
Amanda outlines Microsoft’s categorisation of “sensitive uses”—such as employment, education, and critical infrastructure—and stresses the importance of managing risk across the entire AI supply chain.
EVIDENCE
She lists three categories of sensitive use (life-opportunity impacts, psychological/physical harm, human-rights impacts) and discusses the challenge of addressing risk across the whole supply chain rather than focusing on a single component [199-203] and [209-220].
MAJOR DISCUSSION POINT
Sensitive‑use taxonomy and supply‑chain risk
AGREED WITH
Alexandra Reeve Givens, John Edwards, Denise Wong
DISAGREED WITH
Denise Wong, John Edwards, Alexandra Reeve Givens
Argument 5
Provenance tools for tracking dynamic AI components
EXPLANATION
Amanda proposes that provenance tools, such as software bills of materials that record the lineage of AI components, can increase transparency and accountability for complex, dynamic AI systems.
EVIDENCE
She describes “software bills of materials” that allow tracking of dynamic components across models, platforms, tools, and services, thereby providing provenance for agents [316-322].
MAJOR DISCUSSION POINT
Provenance for AI transparency
AGREED WITH
Alexandra Reeve Givens, John Edwards, Denise Wong
Argument 6
Learning from other jurisdictions’ approaches to AI risk
EXPLANATION
Amanda suggests that AI risk management can benefit from lessons learned in cybersecurity and from the regulatory practices of other countries, encouraging a broader, cross‑jurisdictional perspective.
EVIDENCE
She draws parallels between AI risk and decades of cybersecurity work, noting the need to look across the whole supply chain and to learn from other governments’ approaches [208-210].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Cross-jurisdictional learning and harmonised sandbox approaches are advocated in multistakeholder AI standards initiatives [S23] and sandbox coordination studies [S9][S18].
MAJOR DISCUSSION POINT
Cross‑jurisdictional learning for AI risk
John Edwards
6 arguments · 143 words per minute · 1144 words · 477 seconds
Argument 1
Regulatory tools (e.g., GDPR‑by‑design) provide measurable trust signals
EXPLANATION
John explains that data‑protection requirements such as privacy‑by‑design, impact assessments, and ICO oversight give businesses concrete ways to demonstrate trustworthiness to consumers.
EVIDENCE
He lists GDPR-by-design obligations, data-protection impact assessments, privacy-by-design, and risk assessments as regulatory tools that act as trust signals, with the ICO providing oversight for both businesses and consumers [84-95].
MAJOR DISCUSSION POINT
GDPR tools as trust metrics
AGREED WITH
Alexandra Reeve Givens, Amanda Craig, Denise Wong
DISAGREED WITH
Amanda Craig, Denise Wong, Alexandra Reeve Givens
Argument 2
UK GDPR supplies a practical regulatory regime for AI
EXPLANATION
John states that, even without a specific AI law, the UK’s implementation of the GDPR creates a de‑facto regulatory framework that applies to AI systems handling personal data.
EVIDENCE
He notes that data-protection laws apply wherever technology touches personal data, describing the UK GDPR as a “de facto regulatory regime” for AI and highlighting obligations such as data-protection-by-design and impact assessments [86-87] and [90-95].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The UK’s implementation of GDPR is described as a de-facto AI regulatory framework that applies to AI systems handling personal data [S21].
MAJOR DISCUSSION POINT
UK GDPR as AI regulator
Argument 3
Absence of a dedicated AI law is not a regulatory deficit; guidance fills the gap
EXPLANATION
John argues that the lack of a specific AI statute does not leave a void because the ICO issues guidance that maps existing GDPR principles onto AI use, providing certainty for industry.
EVIDENCE
He explains that the office issues technology-neutral principles, links EU AI Act obligations back to GDPR, and provides guidance to fill any perceived lacuna, ensuring confidence and certainty for AI developers [97-107].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Guidance that translates existing GDPR principles to AI use is cited as filling the statutory gap in the AI-compatible data-protection session [S21].
MAJOR DISCUSSION POINT
Guidance compensates for missing AI law
Argument 4
Cross‑agency coordination (ICO, Ofcom, GPA) is vital for addressing multi‑jurisdictional risks
EXPLANATION
John highlights the importance of collaboration among UK regulators and international bodies to manage AI risks that span different regulatory domains and jurisdictions.
EVIDENCE
He recounts reaching out to Ofcom (Online Safety Act), the Global Privacy Assembly (GPA), and coordinating with international colleagues to share information and align expectations on AI safety investigations [286-300].
MAJOR DISCUSSION POINT
Inter‑agency cooperation on AI risk
AGREED WITH
Trevor Hughes, Denise Wong
DISAGREED WITH
Trevor Hughes, Alexandra Reeve Givens, Denise Wong
Argument 5
“Agency” concept to restore user control and post‑consent rights
EXPLANATION
John proposes that focusing on user agency—providing mechanisms to understand data provenance and to withdraw consent—can re‑balance power between providers and individuals.
EVIDENCE
He defines agency as maintaining individual control, linking it to provenance, and describing features such as a “delete everything” button that go beyond traditional consent models [328-339].
MAJOR DISCUSSION POINT
Agency as post‑consent empowerment
Argument 6
Global Privacy Assembly collaboration to avoid regulatory overlap
EXPLANATION
John notes that the GPA serves as a platform for privacy regulators worldwide to coordinate, preventing duplicated efforts and ensuring consistent approaches to AI‑related privacy challenges.
EVIDENCE
He describes early outreach to GPA colleagues during the Grok investigation to ensure shared information and avoid stepping on each other’s toes [286-300].
MAJOR DISCUSSION POINT
GPA as coordination mechanism
Denise Wong
6 arguments · 164 words per minute · 969 words · 353 seconds
Argument 1
Trust as the desired outcome of governance and policy frameworks
EXPLANATION
Denise frames trust and safety as the ultimate goal of AI governance, emphasizing that policies should create conditions where both the public and enterprises can use AI with confidence.
EVIDENCE
She states that “trust and safety is the outcome that we want” and that governance frameworks aim to create necessary conditions for society and enterprises to thrive with AI [128-132].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Trust as the ultimate governance outcome is a central theme in the Trust and Safety analysis that includes Denise’s contributions [S1].
MAJOR DISCUSSION POINT
Trust as governance outcome
AGREED WITH
Trevor Hughes, Alexandra Reeve Givens, John Edwards, Amanda Craig
DISAGREED WITH
Amanda Craig, John Edwards, Alexandra Reeve Givens
Argument 2
Singapore’s PDPA is outcome‑driven and supported by advisory guidance
EXPLANATION
Denise explains that Singapore’s Personal Data Protection Act (PDPA) is deliberately broad and outcome‑focused, with the PDPC providing non‑prescriptive advisory guidelines to help organisations comply.
EVIDENCE
She notes that the PDPA is “outcome driven,” not prescriptive, and that most compliance guidance is delivered through advisory guidelines rather than hard law [254-258].
MAJOR DISCUSSION POINT
Outcome‑driven PDPA framework
Argument 3
Regulate only clear, high‑impact harms; rely on sectoral rules and standards otherwise
EXPLANATION
Denise argues that regulation should be reserved for situations with evident, serious harms (e.g., election deepfakes), while other issues can be managed through existing sector‑specific regulations and voluntary standards.
EVIDENCE
She cites concrete regulations on election deepfakes, online harms, and scams as examples of clear-harm regulation, and mentions leaving broader AI issues to sectoral rules and emerging horizontal principles [136-143] and [144-147].
MAJOR DISCUSSION POINT
Targeted regulation for clear harms
AGREED WITH
Alexandra Reeve Givens, Amanda Craig, John Edwards
Argument 4
International harm taxonomy (e.g., AI Safety Report) guides regulatory focus
EXPLANATION
Denise points to emerging global taxonomies, such as the International AI Safety Report, which categorize AI harms and help regulators prioritize interventions.
EVIDENCE
She references the International AI Safety Report as a source of emerging harm buckets and archetypes that are being adopted in places like Iceland [244-245].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The International AI Safety Report’s harm taxonomy is referenced as a tool for prioritising regulatory interventions in discussions of AI safety taxonomy [S24].
MAJOR DISCUSSION POINT
Harm taxonomy informs regulation
AGREED WITH
John Edwards, Trevor Hughes
DISAGREED WITH
Trevor Hughes, Alexandra Reeve Givens, John Edwards
Argument 5
Privacy‑enhancing technologies such as federated learning
EXPLANATION
Denise highlights that technical solutions like federated learning can address data‑privacy challenges that law alone may not solve, enabling AI model training without exposing raw personal data.
EVIDENCE
She notes that two years ago federated learning was theoretical, but it is now being deployed in production to protect personal information while training AI models [349-351].
MAJOR DISCUSSION POINT
Technical privacy solutions for AI
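The federated-learning idea Denise describes can be sketched in a few lines: each client fits a shared model on data that never leaves it, and a central server only averages the resulting weights. The toy below is an illustrative sketch in plain Python (the function names, the linear model, and the hyperparameters are assumptions for this example, not anything from the panel):

```python
# Minimal federated-averaging sketch (illustrative only). Each client runs
# gradient descent for y = w0*x + w1 on its own private data and shares
# only the learned weights with the server -- never the raw records.

def local_update(w, data, lr=0.05, epochs=5):
    """One client's local training: a few gradient-descent steps on MSE."""
    w0, w1 = w
    for _ in range(epochs):
        g0 = sum(2 * (w0 * x + w1 - y) * x for x, y in data) / len(data)
        g1 = sum(2 * (w0 * x + w1 - y) for x, y in data) / len(data)
        w0, w1 = w0 - lr * g0, w1 - lr * g1
    return (w0, w1)

def federated_round(w, clients):
    """Server step: average client updates, weighted by each data size."""
    updates = [(local_update(w, d), len(d)) for d in clients]
    total = sum(n for _, n in updates)
    w0 = sum(u[0] * n for u, n in updates) / total
    w1 = sum(u[1] * n for u, n in updates) / total
    return (w0, w1)

# Three clients holding private samples of the same underlying line y = 2x - 1.
clients = [
    [(x, 2 * x - 1) for x in (0.0, 1.0, 2.0)],
    [(x, 2 * x - 1) for x in (0.5, 1.5, 2.5, 3.0)],
    [(x, 2 * x - 1) for x in (-1.0, -0.5)],
]

w = (0.0, 0.0)
for _ in range(200):  # communication rounds between server and clients
    w = federated_round(w, clients)
```

The key property is in `federated_round`: the server sees only weight tuples, never the `(x, y)` records, which is why the technique is attractive when law alone cannot move the data.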
Argument 6
Use of codes of practice and agile governance to adapt to evolving harms
EXPLANATION
Denise describes Singapore’s approach of pairing umbrella legislation with easily updatable codes of practice, allowing the regulatory framework to stay current with rapidly changing AI risks.
EVIDENCE
She explains that Singapore employs codes of practice under a broad legislative frame, which can be updated more readily than primary legislation, and that guidance from the PDPC complements this agile approach [251-258].
MAJOR DISCUSSION POINT
Agile governance via codes of practice
AGREED WITH
Amanda Craig, Alexandra Reeve Givens
Trevor Hughes
4 arguments · 143 words per minute · 2428 words · 1015 seconds
Argument 1
Observation of a paradox between deregulation talk and pervasive trust messaging
EXPLANATION
Trevor notes the contradictory situation where the industry talks about stepping back from regulation while simultaneously emphasizing trust, safety, and risk‑management in AI.
EVIDENCE
He observes that “there clearly is a deregulatory mood in the air” yet every banner mentions trust, safety, or privacy, highlighting the dichotomy [28-38].
MAJOR DISCUSSION POINT
Deregulation vs trust paradox
AGREED WITH
John Edwards, Denise Wong
DISAGREED WITH
Alexandra Reeve Givens, Denise Wong, John Edwards
Argument 2
Cookie‑consent experience illustrates how privacy law shapes trust
EXPLANATION
Trevor uses the long‑standing struggle with cookie consent banners as an example of how privacy regulation can both create and undermine user trust, showing the complexity of prescriptive rules.
EVIDENCE
He recounts the history of cookie regulation, the ongoing pain of implementing consent banners, and suggests that these mechanisms may actually drive the desired outcomes despite their burdensome nature [181-188].
MAJOR DISCUSSION POINT
Cookie consent as trust mechanism
Argument 3
Questioning whether the current deregulatory mood truly reduces guardrails
EXPLANATION
Trevor asks whether the apparent deregulatory atmosphere actually means fewer safeguards, or merely a quieter discussion about needed guardrails in AI.
EVIDENCE
He poses the question, “Are we actually in a deregulatory moment, or have we just gotten quiet about the need for guardrails?” and later wonders if the deregulatory mood truly reduces safeguards [36-40] and [169-170].
MAJOR DISCUSSION POINT
Deregulation vs actual guardrails
Argument 4
Lack of a clear “Brussels effect” for AI highlights need for global alignment
EXPLANATION
Trevor points out that, unlike data‑protection law, the EU AI Act has not generated a worldwide “Brussels effect,” underscoring the necessity for coordinated international AI governance.
EVIDENCE
He states that the EU AI Act has not taken off globally and that we do not see a Brussels effect happening on AI, suggesting a gap in global alignment [232-235].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Analyses note that the EU AI Act has not generated a worldwide Brussels effect, underscoring the need for coordinated international AI governance [S15] and for global standards cooperation [S23].
MAJOR DISCUSSION POINT
Absence of AI Brussels effect
Agreements
Agreement Points
Trust is essential for AI adoption and economic growth
Speakers: Trevor Hughes, Alexandra Reeve Givens, John Edwards, Denise Wong, Amanda Craig
Trust as a prerequisite for adoption and economic growth · Well‑staffed, independent regulatory bodies and empowered civil society · Regulatory tools (e.g., GDPR‑by‑design) provide measurable trust signals · Trust as the desired outcome of governance and policy frameworks · Trust requires evolving governance processes inside firms
All speakers emphasized that trust is a prerequisite for AI adoption and drives economic growth; regulators see trust as an outcome of governance, industry sees it as requiring dynamic internal programs, and civil society frames it as the ultimate goal of policy. [16-18][56-63][87-95][128-132][112-119]
POLICY CONTEXT (KNOWLEDGE BASE)
Trust is identified as a key driver of AI adoption and economic confidence, highlighted in multiple IGF and WEF discussions emphasizing that trust fuels innovation and cross-border collaboration [S55][S56][S57].
Transparency and provenance are needed to enforce existing laws and build trust
Speakers: Alexandra Reeve Givens, John Edwards, Amanda Craig, Denise Wong
Lack of transparency hampers enforcement of current anti‑discrimination laws · Agency concept to restore user control and post‑consent rights · Provenance tools for tracking dynamic AI components · Use of codes of practice and agile governance to adapt to evolving harms
Speakers agreed that transparency, provenance, and disclosure are critical for enforcing anti-discrimination and other existing laws and for establishing trust in AI systems. Alexandra highlighted opacity in hiring AI, John linked provenance to user agency, Amanda described provenance tools, and Denise advocated agile codes of practice to maintain transparency. [154-162][322-324][316-322][251-258]
POLICY CONTEXT (KNOWLEDGE BASE)
Transparency and provenance are linked to standards that enable enforcement of laws and build trust, as noted in discussions on the role of standards and the need for clear definitions in AI regulation [S55][S56][S44].
Targeted, risk‑based regulation can support innovation rather than stifle it
Speakers: Alexandra Reeve Givens, Amanda Craig, John Edwards, Denise Wong
Emerging AI‑specific statutes (EU AI Act, US state bills) show targeted regulation can aid innovation · Definition of high‑risk “sensitive use” categories and supply‑chain risk management · Regulatory tools (e.g., GDPR‑by‑design) provide measurable trust signals · Regulate only clear, high‑impact harms; rely on sectoral rules and standards otherwise
All participants noted that focused, risk-based regulatory approaches, such as high-risk categories, sector-specific rules, or targeted statutes, provide necessary guardrails while still encouraging innovation. Alexandra cited emerging AI laws, Amanda described a sensitive-use taxonomy, John pointed to GDPR tools, and Denise argued for regulation only where clear harms exist. [262-270][199-203][84-95][136-143]
POLICY CONTEXT (KNOWLEDGE BASE)
Risk-based, principle-based regulation is argued to foster innovation, with examples such as regulatory sandboxes and EU AI Act risk assessments encouraging growth while mitigating risks [S46][S47][S48][S49].
Cross‑jurisdictional coordination among regulators and stakeholders is vital
Speakers: John Edwards, Trevor Hughes, Denise Wong
Cross‑agency coordination (ICO, Ofcom, GPA) is vital for addressing multi‑jurisdictional risks · Observation of a paradox between deregulation talk and pervasive trust messaging · International harm taxonomy (e.g., AI Safety Report) guides regulatory focus
Speakers concurred that international and inter-agency cooperation is essential to manage AI risks that cross borders. John described coordination with Ofcom and the GPA, Trevor highlighted the need for coordination amid deregulation trends, and Denise referenced global harm taxonomies that aid alignment. [286-300][272-279][244-247]
POLICY CONTEXT (KNOWLEDGE BASE)
Coordinated policymaking across international, regional, and subnational levels and multi-stakeholder engagement are repeatedly emphasized as essential for coherent AI governance [S52][S53][S61].
Dynamic, evolving governance frameworks are needed to keep pace with AI change
Speakers: Amanda Craig, Denise Wong, Alexandra Reeve Givens
Trust requires evolving governance processes inside firms · Use of codes of practice and agile governance to adapt to evolving harms · Emerging harmonisation through high‑risk provisions, sandboxes, and shared best practices
All agreed that AI governance must be adaptable. Amanda stressed the need for evolving internal programs, Denise promoted agile codes of practice, and Alexandra noted emerging harmonisation efforts that require flexible approaches. [112-119][251-258][262-270]
POLICY CONTEXT (KNOWLEDGE BASE)
Recent panels and workshops call for adaptive, agile governance that evolves with rapid AI advances, stressing flexible frameworks and continuous updates [S50][S64][S65][S66][S67].
Similar Viewpoints
Both regulators and industry see provenance and agency as key mechanisms to give users control and build trust in AI systems. [322-324][328-339]
Speakers: John Edwards, Amanda Craig
Agency concept to restore user control and post‑consent rights · Provenance tools for tracking dynamic AI components
Both civil‑society and regulator perspectives recognise emerging global harm taxonomies and targeted statutes as a basis for coordinated, innovation‑friendly regulation. [262-270][244-247]
Speakers: Denise Wong, Alexandra Reeve Givens
Emerging AI‑specific statutes (EU AI Act, US state bills) show targeted regulation can aid innovation · International harm taxonomy (e.g., AI Safety Report) guides regulatory focus
Unexpected Consensus
Regulators endorsing provenance tools as a trust‑building innovation
Speakers: John Edwards, Amanda Craig
Agency concept to restore user control and post‑consent rights · Provenance tools for tracking dynamic AI components
It is notable that a data-protection regulator (John) and a corporate AI leader (Amanda) both highlighted provenance and agency as promising innovations for trust, despite their different institutional roles. [322-324][328-339]
Regulator, industry, and civil society all advocating agile, code‑of‑practice governance
Speakers: Denise Wong, Amanda Craig, Alexandra Reeve Givens
Use of codes of practice and agile governance to adapt to evolving harms · Definition of high‑risk “sensitive use” categories and supply‑chain risk management · Emerging harmonisation through high‑risk provisions, sandboxes, and shared best practices
While regulators often favour formal legislation, the panel showed unexpected alignment among a regulator (Denise), industry (Amanda), and civil society (Alexandra) on the need for flexible, code-based approaches to keep pace with AI evolution. [251-258][112-119][262-270]
POLICY CONTEXT (KNOWLEDGE BASE)
Multi-stakeholder approaches and agile, code-of-practice models are advocated by civil society and industry to ensure practical, inclusive AI governance [S52][S50][S40].
Overall Assessment

The panel displayed strong consensus that trust is foundational for AI adoption, that transparency and provenance are essential for enforcing existing laws, and that targeted, risk‑based regulation—combined with agile governance—can support innovation. Participants also agreed on the necessity of cross‑jurisdictional coordination and dynamic governance models.

High consensus across regulators, industry, and civil‑society on the core principles of trust, transparency, and coordinated, risk‑based regulation, suggesting a shared roadmap for future AI governance that balances innovation with safeguards.

Differences
Different Viewpoints
Extent of regulatory intervention needed for AI
Speakers: John Edwards, Amanda Craig, Denise Wong, Alexandra Reeve Givens, Trevor Hughes
Regulatory tools (e.g., GDPR‑by‑design) provide measurable trust signals · Trust requires evolving governance processes inside firms · Regulate only clear, high‑impact harms; rely on sectoral rules and agile codes for the rest · Emerging AI‑specific statutes (EU AI Act, US state bills) show targeted regulation can aid innovation · Observation of a paradox between deregulation talk and pervasive trust messaging
John argues that the UK GDPR already supplies a sufficient de-facto regime for AI, so no new AI-specific law is needed [84-87][90-95][97-107]. Amanda stresses that both internal responsible-AI programs and external regulation are needed to sustain trust [112-119]. Denise contends that regulation should be limited to clear high-impact harms and that sector-specific rules and codes of practice are preferable for other issues [136-143][144-147][251-258]. Alexandra points to emerging targeted AI legislation (EU AI Act, US state laws) as beneficial for innovation and guardrails [262-270]. Trevor highlights the contradictory deregulatory mood versus the ubiquity of trust messaging [28-38][232-235].
POLICY CONTEXT (KNOWLEDGE BASE)
Views diverge between proponents of traditional legal mechanisms and limited pre-emptive regulation versus advocates for targeted regulatory measures to address AI harms [S41][S42].
Preferred mechanism to achieve trust and safety
Speakers: Amanda Craig, John Edwards, Denise Wong, Alexandra Reeve Givens
Trust requires evolving governance processes inside firms · Regulatory tools (e.g., GDPR‑by‑design) provide measurable trust signals · Trust as the desired outcome of governance and policy frameworks · Emerging AI‑specific statutes (EU AI Act, US state bills) show targeted regulation can aid innovation
Amanda argues that dynamic internal governance programs are essential for trust, complementing external regulation [112-119]. John emphasizes that regulatory requirements such as privacy-by-design and ICO oversight give concrete trust signals to consumers [84-95]. Denise frames trust as the ultimate outcome of governance, achieved through outcome-driven laws and advisory guidance rather than prescriptive rules [128-132][251-258]. Alexandra adds that thoughtful, targeted regulation (high-risk provisions, sandboxes) can also fuel innovation and trust [262-270]. All agree trust is vital but differ on whether internal corporate measures, regulatory mandates, or a mix are the primary driver.
POLICY CONTEXT (KNOWLEDGE BASE)
Mechanisms such as regulatory sandboxes, risk assessments, and principle-based codes are discussed as ways to build trust and ensure safety in AI systems [S46][S47][S61][S48].
Approach to identifying and addressing AI harms
Speakers: Denise Wong, Amanda Craig, John Edwards, Alexandra Reeve Givens
International harm taxonomy (e.g., AI Safety Report) guides regulatory focus · Definition of high‑risk “sensitive use” categories and supply‑chain risk management · Cross‑agency coordination (ICO, Ofcom, GPA) is vital for addressing multi‑jurisdictional risks · Emerging harmonisation through high‑risk provisions, sandboxes, and shared best practices
Denise cites the International AI Safety Report as a developing taxonomy to prioritize harms [244-245]. Amanda outlines Microsoft’s taxonomy of sensitive uses and stresses managing risk across the entire AI supply chain [199-203][209-220]. John highlights the need for coordination among regulators (ICO, Ofcom, GPA) to handle cross-jurisdictional risks [286-300]. Alexandra notes early global alignment on high-risk categories, sandboxes, and shared standards, though she acknowledges the process is nascent [262-270]. The speakers differ on whether a top-down taxonomy, corporate risk categories, inter-agency coordination, or emerging harmonisation should lead the effort.
POLICY CONTEXT (KNOWLEDGE BASE)
Approaches range from targeting specific harms and assigning responsibility to broader policy actions across sectors to mitigate AI-driven disruptions [S42][S58][S59].
Extent of global harmonisation and the ‘Brussels effect’ for AI
Speakers: Trevor Hughes, Alexandra Reeve Givens, Denise Wong, John Edwards
Observation of a paradox between deregulation talk and pervasive trust messaging · Emerging harmonisation through high‑risk provisions, sandboxes, and shared best practices · International harm taxonomy (e.g., AI Safety Report) guides regulatory focus · Cross‑agency coordination (ICO, Ofcom, GPA) is vital for addressing multi‑jurisdictional risks
Trevor observes that unlike data-protection law, the EU AI Act has not produced a Brussels effect, suggesting a lack of global alignment [232-235]. Alexandra counters that there are early signs of alignment via high-risk frameworks, sandboxes, and cross-jurisdictional learning [262-270]. Denise points to an emerging international harm taxonomy that is beginning to be adopted globally [244-245]. John stresses the practical need for regulator coordination across borders to manage AI risks [274-277][286-300]. The disagreement centers on how far global harmonisation has progressed.
POLICY CONTEXT (KNOWLEDGE BASE)
The ‘Brussels effect’, where EU regulations influence global standards, is highlighted as a factor shaping AI governance worldwide, raising questions about the degree of harmonisation needed [S43][S44][S45].
Unexpected Differences
Sufficiency of existing data‑protection law versus need for new agile regulatory tools
Speakers: John Edwards, Denise Wong
Regulatory tools (e.g., GDPR‑by‑design) provide measurable trust signals · Use of codes of practice and agile governance to adapt to evolving harms
John asserts that the UK GDPR and ICO guidance fully cover AI risks, leaving no regulatory deficit [97-107]. Denise, however, contends that because AI harms are still coalescing, the law must be complemented by flexible codes of practice and advisory guidance to stay current [251-258]. This contrast between confidence in existing law and the call for agile supplementary tools was not anticipated given their shared regulatory focus.
POLICY CONTEXT (KNOWLEDGE BASE)
Debates focus on whether current data-protection frameworks are adequate or whether new, agile tools are required, with concerns about premature legislation and sector-specific regulatory needs [S44][S41][S60].
Overall Assessment

The panel shows strong consensus that trust and safety are critical for AI adoption, but there is notable disagreement on the scope and form of regulation needed. While one regulator (John) emphasizes existing data‑protection frameworks as sufficient, industry (Amanda), a fellow regulator (Denise), and civil society (Alexandra) call for additional targeted legislation, agile codes, and internal governance mechanisms. Disagreements also appear around the degree of global harmonisation and the best approach to identifying AI harms.

Moderate to high disagreement on regulatory strategy and global coordination, which could impede unified policy development but also reflects a healthy multi‑stakeholder debate that may lead to more nuanced, hybrid governance models.

Partial Agreements
All speakers concur that trust and safety are essential for AI adoption and societal benefit, but they diverge on the primary means to achieve it: Amanda stresses internal corporate governance, John highlights regulatory compliance mechanisms, Denise focuses on outcome‑driven policy and advisory guidance, while Alexandra advocates for targeted legislative interventions [112-119][84-95][128-132][262-270].
Speakers: Amanda Craig, John Edwards, Denise Wong, Alexandra Reeve Givens
Trust requires evolving governance processes inside firms · Regulatory tools (e.g., GDPR‑by‑design) provide measurable trust signals · Trust as the desired outcome of governance and policy frameworks · Emerging AI‑specific statutes (EU AI Act, US state bills) show targeted regulation can aid innovation
Both agree that regulation should be proportionate and focused on clear harms, but John believes existing GDPR tools already cover AI sufficiently, whereas Denise argues that additional agile codes of practice are needed to address emerging AI risks beyond what current law captures [84-95][251-258].
Speakers: John Edwards, Denise Wong
Regulatory tools (e.g., GDPR‑by‑design) provide measurable trust signals · Regulate only clear, high‑impact harms; rely on sectoral rules and agile codes for the rest
Takeaways
Key takeaways
Trust and safety are seen as essential drivers of AI adoption and economic growth; without trust, users will not ‘flip the switch’ on AI technologies.
Existing data‑protection regimes (e.g., UK GDPR, Singapore PDPA) already provide a de‑facto regulatory layer for AI, offering tools such as privacy‑by‑design, DPIAs, and outcome‑driven guidance.
There is a perceived paradox between a deregulatory climate and the pervasive emphasis on trust and safety in industry and policy messaging.
Regulators, industry, and civil society agree that thoughtful, principle‑based regulation can fuel innovation rather than stifle it.
High‑risk or clearly harmful AI applications (e.g., deepfakes in elections, discriminatory hiring tools) merit targeted regulatory action, while broader AI use can be governed through sectoral rules and internal standards.
Identifying AI harms requires a mix of principles, existing law, sector‑specific risk taxonomies, and supply‑chain‑wide risk management.
Innovative mechanisms such as provenance tools, the “agency” concept, privacy‑enhancing technologies (e.g., federated learning), and well‑resourced independent watchdogs are viewed as promising ways to strengthen trust.
International coordination (e.g., ICO‑Ofcom collaboration, Global Privacy Assembly, codes of practice) is critical to avoid fragmented oversight and to share emerging best practices.
Resolutions and action items
Regulators (e.g., ICO, PDPC) will continue issuing guidance that maps existing data‑protection principles to AI use cases.
Industry (Microsoft) will advance responsible‑AI governance programs, focusing on provenance tools and dynamic component tracking.
Stakeholders will pursue agile, outcome‑driven regulatory mechanisms such as codes of practice and sandboxes to address evolving harms.
Cross‑agency coordination mechanisms will be maintained and expanded (e.g., ICO‑Ofcom, GPA collaboration).
Unresolved issues
The extent to which new, AI‑specific legislation is needed beyond existing data‑protection frameworks.
How to prospectively define and prioritize emerging AI harms in a way that is globally consistent.
Achieving a true “Brussels effect” for AI governance and harmonising standards across jurisdictions.
Specific implementation details for high‑risk AI sandboxes and how they will balance innovation with oversight.
How to allocate responsibility for AI risk across the entire supply chain without over‑burdening individual users.
Suggested compromises
Regulate only clear, high‑impact harms while relying on sectoral rules and internal responsible‑AI standards for the broader AI landscape.
Use existing data‑protection laws as the baseline regulatory layer and supplement them with agile codes of practice or guidance for AI‑specific issues.
Combine internal governance tools (e.g., provenance, agency mechanisms) with external oversight to provide measurable trust signals without heavy prescriptive legislation.
Thought Provoking Comments
In 1891, when electricity was first being brought into the White House, President Benjamin Harrison and his wife were terrified of flipping the light switch. They hired an electrician just to turn it on. The lesson: we won’t use technology if we don’t trust it.
Uses a vivid historical analogy to illustrate that trust is a prerequisite for adoption of any new technology, framing the entire panel around the central theme of trust as an engine for growth.
Set the tone for the discussion, prompting each panelist to address trust from their perspective and leading directly to the first question about why trust and safety matter for innovation.
Speaker: Trevor Hughes
Regulation isn’t a brake on innovation; thoughtful, well‑designed regulation can actually be fuel for innovation because it outsources the trust‑building work from individual users to a common standard.
Challenges the common narrative that regulation stifles progress and reframes it as a catalyst, introducing a nuanced view that bridges civil‑society concerns with business interests.
Shifted the conversation from a binary ‘regulation vs. innovation’ debate to a more collaborative framing, prompting John and others to discuss how existing rules already serve that purpose.
Speaker: Alexandra Reeve Givens
The UK doesn’t need a separate AI law because the UK GDPR already provides a de‑facto regulatory regime for AI. We map GDPR principles—privacy by design, DPIAs, fairness—to AI use cases, giving businesses certainty.
Highlights a pragmatic approach: leveraging existing data‑protection law to cover AI, thereby questioning the necessity of new, AI‑specific legislation.
Reinforced the idea that existing frameworks can fill gaps, leading Amanda and Denise to discuss whether additional rails are needed or if sector‑specific guidance suffices.
Speaker: John Edwards
We regulate only where harms are clear (e.g., election deep‑fakes, online scams). For the rest we rely on sectoral regulations and horizontal principles—proto‑standards and assurance ecosystems—that sit adjacent to law.
Introduces a layered governance model that distinguishes between clear‑cut harms requiring law and broader, evolving issues handled by standards and market‑driven assurance, adding complexity to the regulatory discussion.
Prompted Alexandra to note the transparency problem in existing laws, and led the group to explore the need for a “horizontal” layer of accountability beyond sector‑specific rules.
Speaker: Denise Wong
In the U.S., existing equal‑employment laws prohibit discrimination, but AI‑driven hiring tools make it practically impossible for a candidate to prove bias without a disclosure regime. Transparency and impact assessments are needed to give those laws meaning.
Provides a concrete, jurisdiction‑specific example where existing law is insufficient without AI‑specific transparency, illustrating the gap between legal theory and practical enforcement.
Deepened the analysis of why new governance mechanisms (e.g., disclosure requirements) are essential, influencing Denise’s point about agile codes of practice and prompting further discussion on enforcement challenges.
Speaker: Alexandra Reeve Givens
From cybersecurity we’ve learned how to manage risk across the entire supply chain. AI risk isn’t just at the point of use; we need a holistic, supply‑chain‑wide governance approach.
Brings cross‑domain expertise to the AI debate, suggesting that lessons from a mature field (cybersecurity) can inform AI risk management, thereby expanding the conversation beyond AI‑specific silos.
Shifted the dialogue toward systemic risk management, encouraging other panelists to think about broader, coordinated regulatory and industry responses rather than isolated measures.
Speaker: Amanda Craig
Because harms are still coalescing, prescriptive legislation is premature. Instead we should use agile tools like outcome‑driven umbrella legislation combined with quickly updatable codes of practice.
Advocates for a flexible, iterative regulatory approach, directly addressing the difficulty of forecasting AI harms and offering a practical alternative to rigid statutes.
Reinforced the earlier theme of layered governance, and led to a consensus that while some high‑risk scenarios merit direct regulation, most AI governance will evolve through standards and best‑practice frameworks.
Speaker: Denise Wong
The EU AI Act’s high‑risk provisions, regulatory sandboxes, and transparency laws are already being echoed in U.S. states like Colorado, New York, and Utah—showing a nascent ‘Brussels effect’ for AI.
Counters the claim that there is no global harmonisation, pointing out concrete examples of cross‑jurisdictional learning and diffusion of regulatory ideas.
Broadened the perspective from a purely national view to a global one, encouraging the panel to acknowledge emerging international convergence and influencing John’s remarks on coordination among regulators.
Speaker: Alexandra Reeve Givens
Agency—not just consent—is the innovation we need: give users the ability to understand provenance, withdraw consent, and control their data after the fact.
Proposes a shift from the traditional consent model to a more dynamic, user‑centric notion of agency, adding a fresh conceptual tool for building trust.
Inspired a brief exchange on burden‑shifting, with Trevor linking agency to fiduciary responsibilities, and set the stage for the rapid “innovation round” where each panelist highlighted a promising idea.
Speaker: John Edwards
Provenance tools, such as software bills of materials that track dynamic AI components, can bring transparency to agentic AI systems.
Introduces a concrete technical innovation that could operationalise the abstract concepts of trust and accountability discussed throughout the panel.
Provided a tangible example for the speed‑round, linking back to earlier calls for transparency and influencing John’s and Denise’s selections of agency and privacy‑enhancing technologies respectively.
Speaker: Amanda Craig
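Amanda's provenance idea can be made concrete with a toy record. The sketch below assumes a provenance entry is simply a component name, version, and content hash that lets anyone check a deployed component against its recorded fingerprint; the `AIBOM` class and its fields are hypothetical, not any real tool's schema:

```python
# Illustrative "AI bill of materials" sketch (toy example, not a standard).
# Each component of an AI system (weights, datasets, prompts) is recorded
# with a content hash so later tampering or drift can be detected.
import hashlib
from dataclasses import dataclass, field

def fingerprint(content: bytes) -> str:
    """SHA-256 digest used as the component's provenance fingerprint."""
    return hashlib.sha256(content).hexdigest()

@dataclass(frozen=True)
class Component:
    name: str
    version: str
    digest: str

@dataclass
class AIBOM:
    system: str
    components: list = field(default_factory=list)

    def add(self, name: str, version: str, content: bytes) -> None:
        """Record a component at build time."""
        self.components.append(Component(name, version, fingerprint(content)))

    def verify(self, name: str, content: bytes) -> bool:
        """Check a deployed component still matches its recorded digest."""
        return any(c.name == name and c.digest == fingerprint(content)
                   for c in self.components)

bom = AIBOM("assistant-v1")
bom.add("base-model", "1.0", b"model-weights-blob")
bom.add("system-prompt", "2024-06", b"You are a helpful assistant.")

assert bom.verify("system-prompt", b"You are a helpful assistant.")
assert not bom.verify("system-prompt", b"tampered prompt")
```

The point of the sketch is the `verify` step: once digests are recorded, transparency claims about a dynamic AI system become mechanically checkable rather than a matter of trust in the vendor.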
Overall Assessment

The discussion was driven forward by a series of pivotal remarks that reframed the trust‑and‑safety debate from a binary regulation‑vs‑innovation stance to a nuanced, layered governance model. Trevor's opening story anchored the theme of trust, while John's and Alexandra's insights about leveraging existing law and the EU AI Act's influence opened space for pragmatic solutions. Denise's distinction between clear‑harm regulation and horizontal standards, coupled with Amanda's cross‑domain supply‑chain perspective, added depth and highlighted the need for agile, coordinated approaches. Concrete examples, such as the U.S. employment‑discrimination case and provenance tools, grounded the abstract concepts, leading the panel to converge on four promising innovations (provenance, agency, privacy‑enhancing tech, and well‑resourced regulators). Collectively, these comments shifted the tone from skepticism about regulation to a collaborative view that sees thoughtful governance as essential infrastructure for AI innovation.

Follow-up Questions
How can we develop effective transparency and disclosure regimes for AI systems in high‑risk contexts (e.g., hiring) to enable enforcement of existing anti‑discrimination laws?
Without transparency, existing laws such as equal‑employment regulations cannot be applied to AI‑driven decisions, leaving victims without remedy.
Speaker: Alexandra Reeve Givens
What mechanisms can regulators use to prospectively identify and classify AI‑related harms in a culturally specific way, given the difficulty of a one‑size‑fits‑all approach?
Prospective identification of harms is essential for crafting agile, context‑sensitive regulation that avoids over‑ or under‑regulation across diverse societies.
Speaker: Denise Wong
How can international regulator coordination (e.g., ICO, Ofcom, GPA) be structured to address cross‑jurisdictional AI issues such as the Grok incident?
Fragmented oversight hampers effective enforcement; a clear coordination framework would enable consistent responses to AI‑driven harms that cross borders.
Speaker: John Edwards
What is the effectiveness of regulatory sandboxes and codes of practice as less‑prescriptive tools for AI governance, and how can they be evaluated?
Sandboxes and codes aim to provide flexibility while protecting users, but their impact is unclear; systematic evaluation would inform whether they achieve desired outcomes.
Speaker: Denise Wong
How can provenance tools and software bills of materials (SBOMs) be standardized to provide traceability for agentic AI systems?
Provenance enhances transparency and accountability for complex, dynamic AI components, helping regulators and users understand system origins.
Speaker: Amanda Craig
How can the concept of “agency” be operationalized in AI products to shift responsibility back to providers rather than burdening users?
Embedding agency (e.g., clear opt‑out, data‑deletion mechanisms) restores user control and reduces reliance on consent as the sole protection mechanism.
Speaker: John Edwards
What is the current state of adoption and practical impact of privacy‑enhancing technologies such as federated learning in production AI systems?
Understanding real‑world deployment of PETs informs whether they can fill gaps that law cannot, guiding both policy and industry investment.
Speaker: Denise Wong
Why has the EU AI Act not generated a “Brussels effect” similar to GDPR, and what factors influence global diffusion of AI regulatory models?
Identifying barriers to international regulatory convergence helps policymakers design frameworks that are more likely to be adopted worldwide.
Speaker: Alexandra Reeve Givens
How can independent, well‑staffed regulatory bodies be protected and resourced to effectively represent public interest in AI governance?
Robust, independent regulators are critical for trustworthy oversight; without adequate resources they cannot fulfill their mandate.
Speaker: Alexandra Reeve Givens
Are current consent mechanisms (e.g., cookie banners) adequate for AI‑driven data processing, or do we need new user‑centric remedies?
AI introduces opaque processing that may render traditional consent ineffective, necessitating new models of user protection.
Speaker: Trevor Hughes (implied) and Alexandra Reeve Givens
How can a “horizontal” layer of AI transparency principles be designed to complement sector‑specific regulations without creating regulatory duplication?
A unified transparency framework can provide consistent expectations across sectors while allowing tailored vertical rules where needed.
Speaker: Alexandra Reeve Givens, Denise Wong
What lessons can be learned from the Grok investigation about the need for multi‑agency collaboration and the gaps in existing regulatory frameworks?
The Grok case highlights practical coordination challenges and regulatory blind spots that must be addressed for future AI incidents.
Speaker: John Edwards
How can risk across the entire AI supply chain be managed cohesively, drawing on cybersecurity supply‑chain risk management practices?
AI supply‑chain risks are distributed; a holistic approach similar to cybersecurity is needed to prevent fragmented mitigation.
Speaker: Amanda Craig
What criteria should define “high‑risk” AI uses for targeted regulation versus sector‑specific self‑regulation?
Clear, evidence‑based thresholds ensure that regulatory effort focuses on the most harmful applications while allowing innovation elsewhere.
Speaker: Denise Wong, Amanda Craig

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Indo-German AI Collaboration Driving Economic Development and Social Good


Session at a glance: Summary, keypoints, and speakers overview

Summary

The summit opened with Anandi Iyer highlighting that the India-Germany MOU on innovation and AI has already been activated and that a diverse group of German research institutes, industry CEOs and Indian officials are gathered to explore joint value propositions [1][2][3][4][5][9-15]. She noted Fraunhofer’s long-standing work on secure cloud data spaces and introduced the panel of four Bangalore CEOs and a representative from the Bertelsmann Stiftung as key industry partners [5][9-16][17].


Georg Enzweiler then emphasized AI’s projected contribution of $5-15 trillion to global GDP by 2030, while stressing the need for inclusive, climate-friendly growth and citing the newly launched India-Germany AI Pact that covers industry, talent, research and social-good applications [36-45][46-53][54-56]. He pointed to Germany’s investment in over 60 AI-for-sustainability projects since 2020 and India’s 15% share of the global AI talent pool as foundations for deeper cooperation [46-53].


Dr Thomas Kuhn described Fraunhofer’s focus on trustworthy industrial AI, including methods to preserve retiring employees’ knowledge through virtual “colleagues” and the use of rule-based data spaces that can handle up to 10 000 transactions per second [70-78][80-84][102-108][110-118][120]. He also highlighted federated training that keeps proprietary data on-site and the institute’s alliance network of more than 30 partners that develop sector-specific AI solutions for health, logistics, energy and security [85-92][113-119].


Rajkumar Upadhyay outlined concrete collaboration areas: co-creating smart-manufacturing standards and cross-border data flows, applying AI to boost Indian agricultural yields, and jointly tackling cybersecurity, quantum communications and large-scale fraud detection using AI platforms such as SancharSaathi.gov.in [149-157][158-165][170-180][202-209]. He underscored that AI should reinforce inclusion, productivity and resilience, and called for the continuation of the Fraunhofer-CDOT partnership amid shifting geopolitical dynamics [202-209].


Prof Kristina Sinemus reinforced the theme of trustworthy AI, noting Germany’s €60 billion AI funding programme that supports 170 startups, including a load-bearing robotic wheelchair and an AI-driven cardiology risk-certification tool, and announced an AI innovation lab and quality-testing hub to translate research into public value [226-240][242-250][254-262][264-270]. She argued that AI must serve people, not the reverse, and that German expertise in regulation, data protection and quality assurance can complement India’s scale-driven digital infrastructure such as UPI and IndiaStack [272-279].


The CEOs then discussed practical challenges: Bosch highlighted the tension between deploying AI-enabled products and reshaping engineering processes, while SAP stressed the need for explainability, fairness and auditability when embedding AI across core enterprise workflows [300-322].


Anandi Iyer concluded by stressing the mutual opportunity for a sustainable, inclusive AI ecosystem that leverages German precision engineering and Indian scale, and invited further dialogue among the participants [280-286][397-405][416-423].


The discussion therefore reaffirmed a shared commitment to deepen Indo-German AI collaboration through joint standards, trusted data spaces, startup funding and sector-specific pilots aimed at economic growth and social good [54-56][202-209].


Keypoints

Major discussion points


Strategic Indo-German AI partnership and policy framework – The summit highlights the newly launched India-Germany AI Pact and the existing MOU, positioning both countries as natural partners for “inclusive, sustainable AI” that can boost GDP while safeguarding people and the planet [28-55]. Anandi Iyer stresses that the collaboration aims to move from “lab to market” quickly and to create long-term dialogue [19-22][25-26].


Technical pillars: trustworthy AI, data spaces, and knowledge preservation – Fraunhofer’s Thomas Kuhn outlines key research areas such as AI reliability, uncertainty quantification, “industrial AI” for SMEs, and the creation of rule-based cross-company data spaces that can handle up to 10 000 transactions per second [70-84][102-108][112-118]. He also stresses the need to retain expert knowledge through “virtual colleagues” when senior staff retire [97-100].


Sector-specific collaboration opportunities – Speakers from the public and private sectors point to concrete domains where joint work is envisaged: smart manufacturing standards and energy-efficient factories [150-154]; AI-enhanced agriculture for yield and pesticide reduction [156-162]; AI-driven cybersecurity, fraud detection, and telecom-scale data processing [158-165][180-184]; and emerging fields such as quantum communication and 5G/6G development [170-176].


German research ecosystem and funding for responsible AI – Prof. Kristina Sinemus describes Germany’s AI strategy, a €60 billion funding programme, the AI Innovation Lab, an AI quality-and-testing hub, and dozens of startup grants that focus on trustworthy, human-centric AI for health, agriculture and industry [241-268][226-236]. She repeatedly links “trust” and “democratic values” to the deployment of AI [233-240][276-278].


Industry perspectives on AI adoption and risk management – CEOs from Bosch, SAP, Mercedes-Benz and others discuss the practical challenges of embedding AI into legacy processes, the anxiety around paradigm shifts, and the need for explainability, compliance and ethical safeguards [297-304][308-324][332-340][364-378]. They acknowledge both the competitive advantage and the responsibility that comes with large-scale AI rollout.


Overall purpose / goal


The discussion is a high-level convening of government, research institutes, and industry leaders to (1) celebrate the existing Indo-German AI agreements, (2) map out concrete technical and sectoral collaboration pathways, and (3) agree on a shared commitment to responsible, trustworthy AI that delivers economic growth and social good for both nations.


Overall tone and its evolution


– The opening remarks are formal and celebratory, emphasizing partnership achievements and enthusiasm for joint innovation [1-5][28-34].


– As the session progresses, the tone becomes technical and optimistic, with detailed explanations of AI research, data-space architecture, and funding mechanisms [70-88][241-268].


– When industry leaders speak, the tone shifts to pragmatic and candid, acknowledging “anxieties,” implementation hurdles, and the need for responsible governance [297-304][308-324][332-340].


– Throughout, the conversation remains collaborative and forward-looking, repeatedly stressing mutual benefit, trust, and democratic values [36-42][226-236][276-278].


Overall, the dialogue moves from high-level endorsement to concrete technical detail, then to real-world business concerns, maintaining a consistently constructive and cooperative atmosphere.


Speakers

Anandi Iyer – Head of Fraunhofer in India (18 years); moderator of the session; expertise in applied research ecosystems, AI-driven innovation collaboration.


Georg Enzweiler – Speaker delivering the special address; role not further specified in the transcript.


Dr. Thomas Kuhn – Head of the Division of Embedded Systems, Fraunhofer IESE; expertise in AI, augmented intelligence, trustworthy AI, data spaces, and industry-academia bridging.


Dr. Rajkumar Upadhyay – CEO, Center for Development of Telematics (CDOT); expertise in telecommunications, quantum communication, cybersecurity, AI for fraud detection and large-scale data processing.


Sindhu Gangadharan – CEO, SAP (enterprise application software); expertise in enterprise software, AI integration, responsible and explainable AI.


Prof. Dr. Kristina Sinemus – Minister for Digitalization and Innovation, Germany; background in biotechnology; expertise in AI policy, funding programmes, trustworthy AI and public-value AI projects.


Dattatri Salagame – Representative, Robert Bosch Software Solutions; expertise in AI for autonomous driving, AI cockpit, AI in healthcare and industrial AI applications.


Anshuman Awasthi – CTO, Mercedes-Benz Research and Development Center (Bangalore); expertise in AI for automotive operations, product development and operational efficiency.


Prashant Doreswamy – Representative, Averior (formerly Continental); expertise in AI for manufacturing, R&D efficiency, AI-driven quality improvement and product innovation.


Additional speakers:


Murali Nair – Representative, Bertelsmann Stiftung (think-tank); expertise in Indo-German knowledge exchange, policy papers and strategic partnership analysis.


Full session report: Comprehensive analysis and detailed insights

Opening remarks – Anandi Iyer


Anandi Iyer opened the summit by celebrating the rapid activation of the India-Germany Memorandum of Understanding on innovation and AI, noting that many of the agreed-upon activities had already begun [1]. She welcomed Dr Upadhyay and highlighted the presence of Dr Thomas Kuhn from the Fraunhofer Institute for Experimental Software Engineering, who would share Fraunhofer’s AI expertise in workplace transformation, manufacturing, agriculture and health [2-4]. Iyer introduced Fraunhofer’s long-standing work on secure cloud-based data spaces [5-6] and announced a “power-packed” panel of four senior Bangalore leaders – Sindhu Gangadharan (SAP), Anshuman Awasthi (Mercedes-Benz CTO), Dattatri Salagame (Robert Bosch Software Solutions) and Prashant Doreswamy (Averior, formerly Continental) – together with Murali Nair of the Bertelsmann Stiftung, a think-tank that has long advocated India as a strategic partner for Germany [9-18]. Iyer then spoke on behalf of the Fraunhofer-India collaboration, stressing that Fraunhofer has been a “first mover” in India for 18 years, operates 76 institutes worldwide, files two patents for every working day [19-20] and invented MP3 and white LEDs [21-22]; she also noted that Fraunhofer has earned more than 70 million euros in the last 10 years [23-24]. She concluded by inviting Georg Enzweiler to deliver a special address [25-27].


Special address – Georg Enzweiler


Enzweiler thanked the hosts and lauded the panel [28-34]. He framed AI as a transformative driver that could add US$5-15 trillion to global GDP by 2030, while immediately raising ethical questions of inclusivity, environmental impact and labour-market effects [35-42]. He highlighted India’s ambition to build massive, green-powered computing infrastructure and Germany’s investment in more than 60 “AI-lighthouse” projects for climate and environmental protection since 2020 [43-46]. Referring to the newly launched India-Germany AI Pact, he described it as an implementation-driven partnership covering industry, talent, joint research, innovation, infrastructure and AI for social good [54-56]. Enzweiler underscored that India supplies 15% of the global AI talent pool and ranks third worldwide in AI R&D after the United States and China, making the two nations natural partners for sustainable AI solutions [49-53].


Technical presentation – Thomas Kuhn (Fraunhofer)


Kuhn framed Fraunhofer’s work as “augmented intelligence”, keeping human expertise at the core of AI [62-63]. He outlined three technical pillars.


1. Trustworthy AI – he asked how reliable AI outputs are and described Fraunhofer’s research on uncertainty quantification, which attaches a confidence score to each response [72-78][106-108].


2. Knowledge preservation – he warned of knowledge loss in SMEs when senior staff retire and proposed a “virtual colleague” that learns from experts and preserves organisational know-how [97-100].


3. Industrial AI & data spaces – he presented specialised models built from proprietary company data that remain on-site, and explained how rule-based data spaces enable secure, cross-company data sharing at up to 10 000 transactions per second [80-84][84-86][110-118][120-121].
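The uncertainty-quantification idea in the first pillar can be illustrated with a small sketch. This is purely hypothetical and not Fraunhofer’s actual method: here an ensemble of toy classifiers votes, and the agreement rate serves as the confidence score attached to each prediction.

```python
from collections import Counter

def predict_with_confidence(models, x):
    """Attach a confidence score to a prediction by measuring
    agreement across an ensemble of models (a simple proxy for
    uncertainty quantification; illustrative only)."""
    votes = [m(x) for m in models]
    label, count = Counter(votes).most_common(1)[0]
    confidence = count / len(votes)
    return label, confidence

# Toy ensemble: three hypothetical defect classifiers with
# different decision thresholds.
models = [lambda x: "defect" if x > 0.5 else "ok",
          lambda x: "defect" if x > 0.4 else "ok",
          lambda x: "defect" if x > 0.8 else "ok"]

label, conf = predict_with_confidence(models, 0.6)
# Two of three models agree, so the confidence is 2/3.
```

In a production setting a wrapper like this would sit around the model and let downstream users decide whether a low-confidence answer should be escalated to a human.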


Kuhn also described Fraunhofer’s Alliance of over 30 institutes, each contributing domain-specific AI expertise in life sciences, logistics, energy, security and more [85-92].
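The rule-based sharing Kuhn described can be pictured as a policy gate in front of every data exchange. The policy vocabulary below is invented for illustration and says nothing about the actual data-space specification:

```python
# Minimal policy gate for cross-company data sharing: each dataset
# carries usage rules, and a transaction is allowed only if the
# requesting partner and stated purpose satisfy them.
# (Illustrative only; dataset, partner and purpose names are made up.)
policies = {
    "machine-telemetry": {
        "allowed_partners": {"supplier-a", "oem-b"},
        "allowed_purposes": {"predictive-maintenance"},
    },
}

def authorize(dataset, partner, purpose):
    rules = policies.get(dataset)
    if rules is None:
        return False  # no policy registered -> deny by default
    return (partner in rules["allowed_partners"]
            and purpose in rules["allowed_purposes"])

authorize("machine-telemetry", "supplier-a", "predictive-maintenance")  # True
authorize("machine-telemetry", "vendor-x", "marketing")                 # False
```

Deny-by-default for unregistered datasets mirrors the session’s point that data is shared “based on rules”, never implicitly.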


India’s AI landscape – Rajkumar Upadhyay


Upadhyay reiterated India’s AI momentum: a $2 billion AI budget, distribution of 38 000 GPUs to startups and an estimated US$1.7 trillion contribution to the Indian economy by 2035 [139-141]. He noted that Germany’s AI market is projected to reach €30 billion by 2030, driven by strong industrial integration [143-145]. He identified three concrete collaboration avenues.


* Smart manufacturing – he called for joint standards, cross-border industrial data flows and energy-efficiency safeguards, citing India’s Production-Linked Incentive (PLI) and Development-Linked Incentive (DLI) schemes [150-154].


* Agriculture – he referred to ongoing AI pilots that improve yield, reduce pesticide use and raise farmer incomes [156-162].


* Cybersecurity & telecom – he described India’s massive data rate of 10 TB s⁻¹ and the SancharSaathi.gov.in platform that integrates telecom, banking and police data to flag spoof calls within 5 ms and block fraudulent financial transactions in real time [158-165][180-184]. He also highlighted India’s home-grown 4G/5G rollout and its ambition to co-develop 6G and quantum-communication technologies, noting his role as chair of the National Quantum Communication Hub [166-176]. Upadhyay concluded by urging that AI reinforce inclusion, productivity and resilience, and that the existing Fraunhofer-CDOT MoU be deepened in light of shifting geopolitical dynamics [202-209].
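The spoof-call flagging described above can be caricatured as a consistency check between the displayed caller ID and the route the call actually arrived on. The field names and rule below are invented for illustration and do not describe the platform’s real implementation:

```python
# Toy spoof-call check: a call displaying a domestic (Indian) caller ID
# but arriving over an international gateway is inconsistent and gets
# flagged. (Illustrative only; field names are hypothetical.)
def is_spoofed(call):
    claims_domestic = call["caller_id"].startswith("+91")
    return claims_domestic and call["ingress_route"] == "international-gateway"

is_spoofed({"caller_id": "+91 9800000001",
            "ingress_route": "international-gateway"})  # True: flagged
is_spoofed({"caller_id": "+91 9800000001",
            "ingress_route": "domestic-switch"})        # False: consistent
```

A real system would of course combine many such signals across telecom, banking and police data, as the speaker described, rather than rely on a single rule.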


Bridge to German perspective – Anandi Iyer & Kristina Sinemus


After thanking Upadhyay and stressing that the partnership is co-creation rather than technology transfer [210-216], Iyer invited Prof Dr Kristina Sinemus to speak. Sinemus began by lamenting missed opportunities in translating research into economic growth, but argued that AI (and quantum) can bridge that gap [217-220]. She presented Germany’s €60 billion AI funding programme, which has already supported 170 startups and financed projects such as a load-bearing robotic wheelchair for mobility-impaired users ($1.8 million) and RISCA, an AI-driven cardiology risk-certification tool [241-250][254-262]. Sinemus announced the creation of an AI Innovation Lab at Hessian AI, offering high-performance computing and advisory services [264-270], and an AI Quality & Testing Hub that develops methods to certify AI systems, turning “trustworthy AI” from a slogan into a practical standard [267-270]. She linked these initiatives to India’s digital public infrastructure (e.g., UPI, IndiaStack), suggesting that German expertise in regulation, data protection and quality assurance can complement India’s scale [271-276].


CEO round-table


Prompted by Iyer, the four senior leaders discussed practical adoption challenges.


* Dattatri Salagame (Bosch) described what keeps him awake – the tension between deploying AI-enabled products (autonomous driving, AI cockpit, AI in healthcare) and the need to re-engineer software development processes [297-304][300-307].


* Sindhu Gangadharan (SAP) stressed that AI must be embedded with explainability, transparency, fairness and auditability, especially as autonomous workflows become commonplace [308-324].


* Prashant Doreswamy (Averior) highlighted AI-driven gains in R&D efficiency (over 20% improvement), quality assurance and fraud detection, describing tools such as ReckNet and an e-travel companion that enhance camera vision and driver interaction [332-340][345-354].


* Anshuman Awasthi (Mercedes-Benz CTO) countered that AI is not a challenge per se; the company has been integrating AI into cars since 2019 and now focuses on operational efficiency, noting that MBRDI in Bangalore contributed to the 2019 AI application [364-372][383-385].


Key take-aways


1. The MOU and the newly signed AI Pact provide a concrete, implementation-driven framework for Indo-German AI cooperation across government, industry, research and skill development [1][54-55].


2. Trustworthiness and testability of AI were highlighted by multiple speakers (Kuhn, Sinemus, Salagame) and are supported by secure data-space architectures, uncertainty wrappers, virtual-colleague concepts and dedicated quality-testing hubs [72-78][84-86][106-108][97-100][267-270].


3. Both nations disclosed substantial financial commitments – Germany’s AI-lighthouse projects, its €60 billion funding programme and 60 sustainability projects; India’s $2 billion AI budget, distribution of 38 000 GPUs and projected US$1.7 trillion economic impact [44-46][242-244][139-141].


4. Sector-specific collaboration was identified as a priority: smart manufacturing, agriculture, health care, cybersecurity, quantum communication and 5G/6G [150-162][226-240][264-270].


5. Fraunhofer’s network of 30+ institutes and its high-throughput data-space platform are positioned as technical enablers for the partnership [85-92].


6. Industry leaders acknowledged the need to manage paradigm shifts, ensure explainability and address client anxieties while leveraging German precision engineering and Indian scale [300-307][321-324].


7. Several speakers (Enzweiler, Sinemus) emphasized that AI development must be guided by democratic and inclusive values [35-42][277-279].


Proposed actions


a. Sustain dialogue and co-creation between Indian partners (CDOT, MBRDI) and German research bodies (Fraunhofer, AI Innovation Lab, Quality & Testing Hub).


b. Organise reciprocal visits to deepen mutual understanding of labs and platforms such as SancharSaathi.


c. Develop joint standards for smart-manufacturing data flows, AI trustworthiness metrics and cross-border data-space governance.


d. Launch pilot projects in early-disease plant detection, AI-driven cardiology decision support and large-scale fraud detection, leveraging the AI Pact’s implementation focus.


e. Pursue joint research on quantum-safe communication and 6G, building on India’s National Quantum Communication Hub and German expertise.


f. Facilitate Indian startups’ access to German AI funding and German SMEs’ entry into India’s AI talent pool [205][274-276].


Thought-provoking remarks that shaped the dialogue included Enzweiler’s macro-level question about inclusive AI growth and its labour-market impact [35-42]; Kuhn’s articulation of “augmented intelligence” and the virtual-colleague concept for knowledge preservation [62-63][97-100]; Upadhyay’s vivid description of the SancharSaathi platform detecting spoof calls in 5 ms and processing 10 TB s⁻¹ data streams [158-165][180-184]; Sinemus’s mantra that “technology must serve people, not the other way around” backed by concrete funding examples [237-240][241-250]; and the contrasting CEO perspectives on AI difficulty – Awasthi’s confidence versus Salagame’s expressed anxiety [364-372][300-307].


Follow-up questions for future sessions


– How can AI-driven growth remain inclusive and environmentally benign?


– What are the labour-market implications of widespread AI adoption?


– Which methods best achieve trustworthy AI in safety-critical domains?


– How can expert knowledge be preserved via virtual colleagues?


– What rules should govern secure, rule-based data-space sharing?


– How do telecom-fraud patterns compare between Germany and India?


– What standards are needed for smart-manufacturing data flows and cross-border exchanges?


– How can AI enhance agriculture to boost yields and farmer incomes?


– What joint solutions are feasible for high-rate cybersecurity and real-time fraud detection?


– How should India and Germany collaborate on quantum-communication research?


– How can democratic values be embedded in AI regulation while supporting economic development?


Overall, the summit demonstrated strong consensus on the strategic importance of Indo-German AI collaboration, while highlighting moderate disagreements on implementation pathways, trust-building mechanisms and the optimism-caution spectrum. The dialogue set a clear agenda for concrete joint standards, pilot projects and sustained co-creation to harness AI for economic growth, sustainability and social welfare.

Session transcript: Complete transcript of the session
Anandi Iyer

And we are delighted to note that many of the activities that were outlined in the MOU have already kick-started. So I think to that extent, it’s fantastic that Dr. Upadhyay is here today and will share with us his ideas on where an Indo-German collaboration in innovation and AI can have a value proposition for both countries. I’m also delighted to have my own colleague, Dr. Thomas Kuhn, who has come all the way from Germany from the Fraunhofer Institute for Experimental Software Engineering. He will be sharing some of our experiences and competencies in AI, particularly with focus on the changing workplace, which is, of course, one of the biggest topics we have today, but also manufacturing, agriculture, and health.

And I think one of the key topics that Fraunhofer can present to the Indian audience is data space. We have created a secure data space in the cloud for many years where challenges remain. Industries are brought in and discussed. So you will hear a lot from him. I’m particularly delighted that four Bangalorean CEOs are here today. We have with us Sindhu Gangadharan, who needs no introduction, I think. Everyone knows her, and whoever doesn’t has been living under a stone. We also have Anshuman Awasthi, CTO of Mercedes-Benz Research and Development Center in Bangalore. And we have Dattatri Salagame from Robert Bosch Software Solutions. And, of course, Prashant Doreswamy, he’s sitting at the back, from Averior, which was earlier Continental.

So as you can see, a power-packed industry captains’ session, which will follow after the panel. And last but not the least, we have Murali Nair from Bertelsmann Stiftung, which is a think tank which has been producing a lot of knowledge papers around India and Germany, and actually positioning India as a positive partner for Germany long before Germany even started looking at us so seriously. So thank you for that, Murali. I’m standing here in front of you today as the head of Fraunhofer in India for the last 18 years, extremely proud that Fraunhofer is one of the largest applied research ecosystems in the world. We have 76 institutes in Germany, we are present in more than 80 countries, and we produce two patents for every working day. We are also the inventors of MP3, white LEDs and many such inventions. But what really singles us out in this innovation corridor is the fact that we take research from lab to the market in the shortest period of time. So this is an area where India really needs support, and we are delighted that we were one of the first movers coming into India 18 years ago; even today there are very few international R&D organizations that are active in India. We have been earning more than 70 million euros in the last 10 years from research contracts with Indian industry, government and research organizations.

And we have an absolutely amazing network of partners in India, including organizations like CDOT, but also with Indian industry, which is really gearing up for innovation. So I think we came in at the right time and took advantage of this, you know, innovation trajectory that has picked up in India. And it’s exciting to see what’s unfolding before us. I have to say that today what we are bringing together is really a thought-provoking session. We only have 15 minutes, so I would like to get on with business immediately. And we hope that some of the thoughts that we are leaving behind with you today will resonate and will actually trigger a long-term dialogue and engagement so that we can take the Indo-German innovation collaboration to greater heights.

I would now like to invite Mr. Georg Enzweiler to kindly deliver his special address.

Georg Enzweiler

Good morning, ladies and gentlemen. I don’t know whether it’s maybe that we are just as crazy as you are sometimes that we are big fans of Karnova here in India. It’s always delightful to be invited by you here in India. You do amazing work here in the country. I wish we had several Anandi Iyers here in the country to multiply your efforts. So thank you very much for the invitation. And I’m impressed by the distinguished panel of speakers and panelists here today. Thank you. All of the panelists and speakers here today are drivers of technology, and so it makes sense that you are also at the forefront of innovative AI technology. From increasing crop revenue, improving diagnostics, or minimizing errors in manufacturing, AI undoubtedly has huge potential for social and economic good.

AI is predicted to contribute between $5 and $15 trillion to the global GDP by 2030. But there are also questions, of course. How can we ensure that this growth is inclusive? How do we minimize negative effects for people and the planet? What kind of effect would this have on labor markets, for example? After all, the motto of this summit is actually welfare for all. India, for one, has ambitious goals to create massive computing infrastructure in big parts powered by green energy. Germany is investing in so-called AI lighthouses, which foster AI innovations for climate and environmental protection. Since 2020, Germany has funded over 60 projects leveraging AI for sustainability. The topics range from wildfire prevention and renewable energy to biodiversity monitoring.

And circular economy. This requires that government, academia, and industry work hand in hand. By promoting research and development, creating a clear regulatory environment, and investing in the training of skilled workers, we can unlock a broader potential for AI. India shows huge potential here. It accounts for 15% of the global AI talent pool and has the highest AI skill penetration rate. In terms of research and development in AI, India ranks third in the world after the U.S. and China. India and Germany, with their longstanding scientific partnership, are natural partners in creating solutions that are sustainable and inclusive for all. Hence, only two days ago, Ministers Vaishnaw and Wildberger launched the India-Germany AI Pact, a new partnership focused on implementation-driven collaboration across government, industry, research, skill development, and innovation.

It will include aspects such as AI for industry and manufacturing; talent, skills, and mobility; joint research, innovation, and infrastructure; and overall AI for social good. So I very much look forward to the discussion of today’s session, and hope that it can give food for thought on this most relevant topic. Thank you again for the invitation, and I wish all of us an interesting discussion on many of these pertinent topics. Thank you.

Anandi Iyer

Thank you for your kind words, Mr. Enzweiler. We now have the pleasure of hearing Dr. Thomas Kuhn, who heads the Division of Embedded Systems at Fraunhofer IESE. As you all know, we talk about artificial intelligence. In Fraunhofer, we call it augmented intelligence, which means that human intelligence is still at the core of what we’re talking about in terms of AI. It’s a pleasure to bring Dr. Thomas Kuhn to India. The floor is all yours.

Dr. Thomas Kuhn

Yes, thank you for inviting me. So, what is Fraunhofer doing in the field of AI? You all know AI is, on one hand, large language models, mostly driven by the US. We have huge open source models coming from China. So what is Fraunhofer contributing? Our goal is to bridge industry and academia, to support the industry in creating new products and bringing ideas into practice. And I’ve chosen here a few highlights of what we can do as Fraunhofer. So for example, when you speak about AI, the question is, how reliable is the result of AI? Do you get concrete, do you get reliable information? That’s one topic, trustworthiness. How can we achieve trustworthy responses? That’s something that Fraunhofer is researching on.

For sustainable growth, it is also very important to preserve the knowledge that is available in a company, especially when people retire. Small and medium-sized enterprises have exactly this problem: knowledge is leaving, and you cannot get it back because the new employees simply don't have it anymore. So how can you preserve knowledge? We call this industrial AI: bringing AI models into practice, creating specific, specialized AI models based on company data (sensitive data, not personal data of persons, but data that is sensitive for the business), so that AI can support these companies with their needs. For example, it creates a colleague that is an expert for a specific device and that can help

humans to work more efficiently with this device. And last but not least, as my colleague already introduced, data spaces are a key technology here, because for AI we need to be able to train models, and therefore we need reliable access to data and a way to share data based on rules. That's what data spaces are meant for. At Fraunhofer, we have the ability to instantiate data spaces that scale up to 10,000 transactions per second, and that's quite a lot. Fraunhofer is structured into alliances, and the most important one here is the Big Data and Artificial Intelligence Alliance: more than 30 institutes that team up on AI, creating AI strategies, each institute with its own field of competence, but all of us supporting best practices, studies, and expert opinions, and supporting companies in creating new solutions.

And as you can see, our institutes are working in many fields: life sciences and healthcare, when it comes to diagnostics and personalized medication; logistics and mobility, for optimizing supply chains and making them more resilient; production and industry (we also need to keep production in Europe to remain resilient, so it must be economical to produce in Europe); energy and environment, with renewable energies and smart, self-managing grids, very important topics for AI and big data; business and finance; and security, to make sure that our data is well protected. I cannot go into detail on all of these topics, so we have collected a few samples of what we are doing: embedded AI, image analysis, collaborative transport, cancer diagnostics, swarm intelligence, and of course also defense topics.

I cannot go too much into detail here, but AI has a huge impact, and while general-purpose LLMs are not our path, the applications that can be created with AI and the transformation of industry are very much our expertise: bringing AI to use for industry, and therefore to use for all. And then we are back at the topic of this event, welfare for all and happiness for all, because every industrial revolution in the end yielded much more wealth for everybody. I have brought some highlights, and I promise I won't go into details here, no worries. So the virtual colleague is one thing that is important when people leave companies.

We have expert knowledge, we have experience, and this is very hard to preserve. So just imagine a virtual colleague that follows people in the field, that learns from them, and that keeps the knowledge in the company once a person leaves, so that the company knowledge is retained. That's one project that Fraunhofer is doing in the field of AI to support industry, including small and medium-sized enterprises. Trustworthiness is also very important. We all know that AI is not perfect, and probably some of you have already tweaked an AI a little bit on purpose to see: okay, how far can I go? Basically, AI is stupid. It has a lot of knowledge, but it has no understanding of it.

Trustworthy AI means we observe, for example, the field of use for a specific AI model and track whether we are leaving this field. We have an uncertainty wrapper that gives you, with each response of the AI, an uncertainty value that says: how trustworthy is this particular response? That's very important when we want to apply AI in safety-relevant environments: for medical diagnostics, for image recognition, and also in traffic. It is very important then to know how safe the assumption of the AI is. Federated AI training is also very important when we want to work together. For example, a robot that can grab something, which every human can do: reach into a box and take something out.

For a robot, that’s a very big challenge. We can train robots to do that, but it’s much more effective when we can do a federated training where everybody can contribute data, and in the end we receive one model, but one model where you cannot derive any sensitive data back because nobody wants to share their sequence. So that’s where data stays. That’s where data stays. That’s what data spaces are meant for. We share data based on rules. We can provide data just for training of AI. Everybody will see the raw data for that. We just use it for training of AI. And here, Fraunhofer is researching both government policies and efficient implementations of cross -company data spaces.

So that's my pitch, my motivation. I hope I was able to give you some ideas, some insights into what we can do with AI besides OpenAI. I think Sam Altman is not here, so I can say that. They are doing great things, but there are also great things that can be done in addition to that. Thank you.

Anandi Iyer

Thank you very much, Thomas. This was just an appetizer. As you know, we have very limited time, but anybody who is interested can visit our stand, where we have also demonstrated some of the use cases of AI in agriculture, AI in medicine, and AI in manufacturing. You are welcome to visit us at the German Pavilion. I would now like to invite the CEO of the Centre for Development of Telematics to share a few thoughts on where India and Germany can work together. Thank you so much.

Dr. Rajkumar Upadhyay

Thank you so much, Anandi. AI is quite pervasive; it is being applied to almost all sectors, and where it is not yet applied, it will be sooner or later, because it underpins competitiveness, productivity, and societal resilience. For India and Germany together, with Germany being the precision engineering expert for years and India bringing the scale, I think there is a very good way for India and Germany to work together and align how AI is taken forward in a responsible, ethical way, so that it not only helps economic progress but also meets social good challenges. As we know, India's AI ecosystem is quite vibrant, ranking third globally for its competitiveness and developer activity.

Our AI Mission is already in progress. We are investing more than $2 billion. We have given startups 38,000 GPUs, and this will be further increased. AI could bring $1.7 trillion in value to India's economy by 2035, and the tech sector in particular is projected to contribute $280 billion in revenue this year itself. Germany's AI market is also rapidly expanding, driven by strong industrial integration, particularly in manufacturing, and substantial public-private investment; it is expected to reach nearly 30 billion euros by 2030. I read that there are more than 600 AI startups in Germany, and we also have a lot of startups. So I think about how the two startup ecosystems could work together, not only for the benefit of both countries but for bringing global good to this sector.

So there will be a fair win-win situation for India and Germany: a partnership between two great countries, given our diplomatic relations, the precision engineering for which Germany is famous, and the scale that we bring in India. I think there are many areas where we can contribute, but I thought I would focus on two or three. One, as was mentioned, is smart manufacturing: Germany's leadership in Industry 4.0 and India's expanding manufacturing. This government has invested heavily in manufacturing under various schemes like the PLI and DLI schemes, and that is going to go forward. So how do we use AI in smart manufacturing? We would like to work with Germany on developing smart manufacturing standards, cross-border industrial data flows and safeguards, and enabling energy efficiency.

And since we arrived late in the manufacturing space, a lot of our manufacturing plants have actually started out using AI. In fact, there was recently a news piece on how the Tatas are going to bring down costs using AI in manufacturing. The second point, which I think Anandi brought up, is agriculture. Fraunhofer is already working in India on agriculture, and agriculture being one of the key pillars of the economy, alongside the service sector, it makes us good partners for improving productivity, yield, income for farmers, and efficiency gains. That will be another area where we can work together. I would also like to say that, especially given the economic progress we are making, we see a lot of challenges coming from the cybersecurity area.

We get millions of attacks in our country, millions of attacks, and I am not sure what is happening in your part of the world. Therefore, AI in cybersecurity is very, very important. We use it to some extent, we would like to learn from Germany, and we would like to share what we are doing: how we are using AI to detect patterns, because of the kind of data flow involved. For example, in India we receive data at a rate of 10 terabytes per second. 10 terabytes per second is the data rate. How do you process this data in real time and be able to tell how and where the attacks are happening? As far as C-DOT is concerned, we work in various areas.

Some of you may be aware, and I would like to place it on record, that India made its own 4G and 5G, launched by the Honorable Prime Minister in September 25. And this was a journey we took alone. For 6G, we would like this journey to be together with the world, because we never developed 1G, 2G, or 3G. Suddenly, our Honorable Prime Minister said: 4G and 5G. And we did it in two to three years, and it is launched today. Close to 170,000 base stations are radiating, generating a data rate of 5 petabytes, and we will go further. So 5G and 6G is one area. The second area is quantum, because as we all know, many of the leading economies have put a sunset date on the current level of encryption, which will be broken by quantum computers.

And therefore, there is a need to work on quantum together. We at C-DOT work particularly in quantum communication; in fact, I am the chair of the National Quantum Communication Hub along with IIT Madras, and we fund a lot of quantum research in startups. So this could be another area where we could work together. The third thing: I am not aware what kind of frauds happen, or don't happen, in Germany. In India, there was a series of cyber fraud calls, and we developed a system at India scale that is successful today in blocking them. For example, we used to get 1 million regular international calls, and we were getting 15 million spoofed calls, 15 million spoofed calls per day.

And at my gateway, within 5 milliseconds, I have to decide whether a call is a real call or a spoofed call. Today we have deployed a system in India which, within 5 milliseconds, says: this is a spoofed call, drop it; this is a real call, send it. At India scale. We have also developed a platform called SancharSaathi.gov.in. I don't know how many of you are aware of it. This is, again, a digital intelligence platform: one side is exposed to the customers, the citizens, and at the back it is connected to all kinds of stakeholders. What used to happen is that, as telecom, I would declare a number (let me first check the time; so, yeah, I think we're fine).
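One check such a gateway can apply within its millisecond budget is purely rule-based: a call arriving on an international trunk while presenting a domestic caller ID cannot be genuine. The sketch below is a hypothetical illustration of that single rule; the function name, fields, and logic are assumptions, not C-DOT's actual system:

```python
# Hypothetical sketch of a gateway-side spoof check: a call that arrives
# from abroad but presents an Indian (+91) caller ID is flagged as spoofed.
# Names and the rule itself are illustrative assumptions only.

def classify_call(caller_id: str, via_international_gateway: bool) -> str:
    """Return 'spoofed' or 'genuine' for one incoming call."""
    looks_domestic = caller_id.startswith("+91")
    if via_international_gateway and looks_domestic:
        return "spoofed"  # a genuine +91 caller would not route in from abroad
    return "genuine"

print(classify_call("+911234500000", True))   # spoofed
print(classify_call("+491234500000", True))   # genuine
```

A production system handling 15 million such calls a day would combine rules like this with learned models and in-memory lookups to stay inside the 5 ms budget.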

So one issue was that, in telecom, I have said this number is fraudulent, but the bank doesn't know and the police don't know. So what we did was integrate all of the stakeholders in one single platform, which we call the digital intelligence platform. Once I declare a number to be fraudulent, for any reason, it is flagged for the bank, the police, everywhere, so that everybody takes care. Today, when an actual financial transaction happens between A and B, there is a module which we have developed called FRI, the financial risk indicator. Before the money is transferred, the bank digs through the database and asks: is this transfer safe? And my system tells it: no, it is not safe.

This is not a safe party, so the transaction is stopped. So we have done fraud management at scale. We would like to work with Germany to further enhance it and to use AI in this. And going forward (I would only say, as my time is over): AI should not widen inequalities. It should strengthen inclusion, productivity, and resilience. Let us ensure AI becomes a pillar of sustainable economic growth and social good. As Anandi said, we already have an MOU with Fraunhofer, under which we are working together. I would be very happy if this partnership goes on, and that, given the geopolitical changes happening, we go forward together.

We will go and develop the technologies together for the wider social good. Thank you so much.
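The platform flow described above, a shared set of flagged numbers visible to every stakeholder and a risk check run before money moves, can be reduced to a few lines. The names `flag_number` and `financial_risk_indicator` are hypothetical, not C-DOT's actual API:

```python
# Illustrative sketch of the digital intelligence platform flow: once any
# stakeholder flags a number as fraudulent, banks consult the shared flags
# (the hypothetical FRI check) before letting a transfer complete.

flagged_numbers = set()  # shared view across telecom, banks, and police

def flag_number(number):
    """Any stakeholder declares a number fraudulent; all others see it."""
    flagged_numbers.add(number)

def financial_risk_indicator(payee_number):
    """Bank-side check run before the money is transferred."""
    return "BLOCK" if payee_number in flagged_numbers else "ALLOW"

flag_number("+911234500000")                      # telecom flags the number
print(financial_risk_indicator("+911234500000"))  # BLOCK
print(financial_risk_indicator("+919876500000"))  # ALLOW
```

The design point is the shared state: a flag raised by one stakeholder changes the answer every other stakeholder gets, which is what breaks the silo problem the speaker describes.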

Anandi Iyer

Thank you, Dr. Upadhyay. I can only underscore that it has been an absolute privilege working with you. And it is not a case of German technology being handed to C-DOT; we are actually co-creating. There are India-specific challenges being worked on by teams from C-DOT, who are extremely qualified and have amazing infrastructure. I would really appreciate it, Dr. Upadhyay, if at some point some of our German delegates, maybe not this time, but next time, could come and see the kind of work you are doing, because I think that brings a different level of understanding, insight, and trust. Without wasting any time: I am sure all of you are waiting to listen to Professor Dr. Sinemus. We had her yesterday, and I must tell you that it is really amazing to see, first of all, a woman as the head of innovation and digitalization in Germany.

Prof. Dr. Kristina Sinemus

to bridge the gap between research and really bringing it to society and to economic growth. At the end of the day, I come out of the biotechnology area, and I think we have really lost a lot of chances in going from research to economic growth. And I think AI has the opportunity, and quantum as well, as you mentioned, to do this. Your Excellencies, ladies and gentlemen, it is a great honor for me to deliver a special address. I have never given a special address before, and I will try to do this at the German National Technology Forum. I come from a region where we work closely together with Fraunhofer.

Even in my hometown we have two Fraunhofer institutes, and one of them works on cybersecurity, where we also have the digital hub for cybersecurity. So we have built a startup ecosystem in the area at the interface of cybersecurity, digital, and AI. Many of these could be anchor points afterwards. As Minister for Digitalization and Innovation, I very much welcome the opportunity to discuss how we can turn AI into concrete public value: in manufacturing, in agriculture, in healthcare, and through trustworthy AI, as Thomas Kuhn has pointed out. This AI summit is all about AI, and we see how dynamic the process is. AI is already transforming how we produce, how we heal, how we grow food, and how we govern.

And the key question therefore is not if AI will shape our society; the question is how, and whether economic development and social growth move forward together rather than in separate directions. Looking at the dynamic with which AI is moving forward, we really have to be careful to balance economic development and the social part, and I think this has to do with trust as well. In my region we developed a clear strategy. Our digital strategy and AI strategy are based on a simple but demanding principle: technology must serve people, not the other way around. Our AI agenda combines innovation with responsibility. We want AI that strengthens competitiveness, but also social coherence, fairness, and trust.

And this is why we invest not only in research and infrastructure, but also in practical projects that deliver tangible benefits for citizens and business. So we have a funding program of over $60 billion, and we invest in people who have a vision of how AI can benefit humans. We have invested in 170 startups so far, and I want to give you some examples so that you get an idea. I start with healthcare, because healthcare is one of the most promising fields for social good. We support innovative projects that would be very difficult to realize without our public backing. For instance, we fund one project called DEMO, a small group of three people working on their PhDs at the Technical University of Darmstadt, my alma mater, and I was happy to give them 1.8 million euros, because they are developing a load-bearing robotic wheelchair.

It is akin to a walking robot that can safely and autonomously overcome barriers such as stairs or uneven ground using advanced robotics and AI. This project will help people with mobility impairments. That is no science fiction; it is the difference between dependence and independent participation in everyday life. And I think that is really a good example of how we invest our funds. Another example is RISCA, risk certification in cardiology using AI. The goal is to build a clinical decision support system that analyzes patterns in ECG recordings with AI and detects cardiovascular diseases at an early stage, so you can move to prevention. Those are two of the many AI and healthcare startups we are funding.

A second field I would like to highlight is agriculture. For India too, agriculture is a very important area and a crucial pillar of economic development and food security. Again, we support a project, and this is only one of many, on the early detection of plant diseases. Using automated analysis of satellite data, so we are even connected to space, AI models can recognize signs of plant stress and disease at a very early stage of the growth cycle. So you can really reduce pesticide use by analyzing early. AI can, at the end of the day, make agriculture more sustainable and more productive at the same time, and I think this is the direction we have to go.

Another example in our strategic agenda for the future of AI is that we set up an AI innovation lab at Hessian AI, co-funded by my ministry, and we provide a high-performance computing environment specifically for AI applications, along with advisory services for science, business, and public administration. A particular focus here is on startups and SMEs: we want to enable smaller players to develop innovative AI solutions. And we want to be, at the end of the day, in a position where trustworthy AI is not a slogan but a practical challenge that can be met. This is why we established an AI quality and testing hub in Hesse, a public-private company that develops methods and tools to test

AI systems. At the end of the day, the aim is to translate the idea of trustworthy AI into testable criteria and practical procedures that businesses can use in real deployments. So you could say we create trust in AI by giving it trustworthy criteria. Looking at the discussions we have had in the last days, I think the Indo-German dimension is particularly important. India has shown with initiatives like UPI and India Stack how digital public infrastructure can scale and enable innovation at massive levels. Germany can bring expertise in regulation (not too much), data protection, and quality assurance. Together we can build bridges between scale and safeguards, between innovation and rights, between economic development and social good.

Because our thinking is grounded in the same values, democratic values. And let me close with a conviction that guides our work: AI will not automatically lead to better outcomes. It depends on the choices we make, what we fund, how we regulate, which ecosystems we build, and whom we include. So let us proceed on our democratic values and go forward with an Indo-German collaboration in AI. Thank you.

Anandi Iyer

Thank you so much, Dr. Sinemus. That was a really thought-provoking and very concrete talk on what we can do with AI and the scope for Indo-German collaboration. Without much ado, let me take forward the topics that you have mentioned by inviting the four CEOs to please come on stage. Can I request Dattatri, Sindhu, Anshuman, and Prashant to kindly come up? We have a few minutes to really get into the topics. And I would like to just start by very quickly saying that in the last few days I have attended quite a few of the sessions here, and there are so many nomenclatures for AI. It has been defined as a transformational technology that comes once in several years.

People give examples of the steam engine, the Internet revolution, and Industry 4.0, which dramatically changed the way we live and work. We are now riding an AI transformation, complete with anxiety, speculation, and admiration, but also excitement. So let me take the audience through what makes a CEO stay up at night, and let me start with Dattatri. Bosch has been a pioneer in India: you came here a century ago, and you have stayed on, resilient in the Indian market, which has not always been easy. You cover a lot of bases, spanning manufacturing, software, medical, and many more sectors. How does the vision board of Bosch lead in terms of AI development and deployment?

Dattatri Salagame

Maybe the first question is what keeps me awake at night: the fact that you are 100 years old, and if you are 100 years old, the effort you must make to change is also higher. So that keeps me awake at night. As Bosch, I think we are working on two sides of it. On one side, we are deploying AI into the market: autonomous driving, the AI cockpit, AI in healthcare. These are our products which go into the market, so fundamentally we are navigating a new business model, new consumer behavior, and developing new products for that. On the other side, we are using AI to disrupt long-held beliefs about how we build software and how we do engineering. So while we have huge excitement about positioning our AI-centric products in the market, I would be dishonest if I said we don't have anxiety about how to get through this paradigm shift of engineering and software development using AI. These are the two edges at which we are operating now. Thank you so much.

Anandi Iyer

Sindhu, let me come to you. You are the world's leader in enterprise application software; I remember you once telling me that 97% of companies worldwide use SAP. It is already embedded in their workflows even before people are talking and wondering how to go about it. But you have a responsibility, because you have a captive client base. How do you see these innovative technologies being rolled out to companies while they still have anxieties about their business? And how do you go about assuaging those fears and building a model that works on layers that are already set and, like he said, very difficult to move right now?

Sindhu Gangadharan

Well, first of all, Anandi, it's great to be here. It's an absolute honor to be amongst this very distinguished panel, and you, and all the previous speakers who have spoken. I think, like Dattatri said, we live in times which are changing, particularly in technology, right? I mean, literally every day you have to open up and see what is the new LLM that's out there. So that's the way it's been. In that sense, when we talk about the world's largest enterprise application company, the trust that our customers place in us, many of them sitting right here at these tables, is very high.

And so when we say that 87% of the world's business transactions touch an SAP system, it's a matter of responsibility and trust, which we continue to take forward across the many industries and the portfolios that we serve, right? At the same time, given the pace at which technology is changing, our responsibility from an AI perspective is embedding AI across the core of our business processes. If you take a process like lead-to-cash, total workforce management, design-to-operate, or strategic sourcing and procurement, this is where we show our customers: we are helping you run at your best, in the most compliant, the most ethical, and the most responsible manner, right?

And we want to make sure we protect customer choice. Customers have the choice to run in a secure manner, in a compliant manner, in an ethical manner, and in a responsible manner, which also means, as several of our speakers said, being able to explain the decision that you are making or asking the user to make: giving that explainability, the transparency, the fairness of the decision-making, the auditability, right? And that becomes even more important because today, when we are talking about agentic AI, we are talking about autonomous workflows, we are talking about decision-making. Yes, the human is in the loop, but part of the workflow is completely autonomous, right?

And so that's a lot of the work that's keeping us busy at night and in the mornings, I would say: really making sure that when we say customers can run at their best, in a compliant, secure, and responsible manner, they really can.

Anandi Iyer

So the whole point is for customers to run in a safe manner. Whether you're up at night or not, you're certainly on your toes. The one person who has been up at night for several months now, and I can personally vouch for this, is Prashant. Continental is now Aumovio. It has not only changed in name, but also in terms of its core processes and some of its business sectors. So innovation is not only in the name, it's also in what you do. What is Aumovio now up to, and what is happening?

Prashant Doreswamy

First of all, thank you, Anandi. I'm sure all the panelists agree: in India, we have moved from support hubs to centers of excellence. At Aumovio, we're focusing on two things. One is certainty of allocation for our customers and innovation for the stakeholders; the second is customer-centric innovation. Just to let you know, in one year we have been able to gain efficiency improvements in R&D in excess of 20%. How did we do this? There are a couple of things. One is strength in core technology, be it in the plants in terms of how we enhance quality using AI, be it in products, where there are always a lot of false calls, which require a lot of effort to redo.

So that's where the implementation of AI comes in for enhancing quality. The second is software development, using GitHub Copilot, as I was saying, both for coding and for developing different test scenarios and test situations. These are the things we are really focusing on: one, as I said, how do we really enhance velocity in engineering and R&D using this automated technology. We also do something called ReckNet. In automotive, the biggest challenge today, with so much complexity of technology, is getting the requirements right, because you always leave out one or two that are very, very important. So how do you really enhance this with ReckNet?

And second is the adoption of agents, used in the enterprise in simple functions like finance, controlling, or market analytics. This helps by giving a very good summary for the leadership to take decisions based on data. And third is certainly product innovation. We have a display here in the German Pavilion of innovations developed here in India. One, in autonomous mobility, is called enhanced light fusion: with the use of AI, we have been able to enhance the camera vision for easier detection. The second is something called the e-travel companion, where you seamlessly interact with the vehicle; and the third is where the driver can seamlessly interact with the car.

These are the three developed innovations which are displayed at the German stand as well. So in summary, I think we continue to drive what we call the seamless integration of AI into our operations.

Anandi Iyer

Thank you so much, Prashant. Let me go to Mercedes. Mercedes-Benz is a North Star in innovation; you have been around several decades. It has also been a very competitive environment in manufacturing, where a lot of people say that Germany has lost its edge. And it's an old-school field. So how do you bring AI into this kind of operation, with its standardized processes and, as I said, old-school procedures? Is it challenging to get AI into it now?

Anshuman Awasthi

Good morning, everybody, and thank you for having me here. Yes, Mercedes has been one of the most innovative companies, and we plan to remain so. We have been doing this for many, many years. AI as a technology is not a challenge; we don't see any technology as a challenge, and we have been using it. If I'm not wrong, Mercedes-Benz was the first automotive company to bring AI into its cars, in 2019 itself. So if you have been driving our cars from 2019 on, you must have experienced AI. Today our focus is mainly twofold. For example, we want to offer technology to our customers: you drive the car and you experience a lot in the car, how the car behaves, how you interact with the car, and so on.

And whether the parts, like the seat, are behaving as they should. The second part we are planning is to tighten our operations using AI; we want to bring operational efficiency using AI. Many things have already been mentioned; one such thing we are trying to implement now can bring benefit to all of us. So we have the technology, and it's very exciting, because sometimes you feel it makes so many things evident. So we are looking forward to going ahead with using it.

Anandi Iyer

And, of course, MBRDI in Bangalore is playing a very big role in this process?

Anshuman Awasthi

Yes. The 2019 application was largely developed at MBRDI by our AI experts back then.

Anandi Iyer

That was so good. We have another five minutes, and I'd like to use this for a rapid round talking about India. We've always heard about the big combination: manufacturing excellence and engineering in Germany, scale and speed in India. Let me ask Dattatri: is this heightened German interest in India offensive or defensive?

Dattatri Salagame

I would say it was long overdue. Given the capabilities of the two countries and the common interests that we share, for reasons unknown we have played it very low, while many others play it significantly high. So I think now we can collaborate on many dimensions, not just the precision engineering of Germany; it’s more than that. The pragmatic approach Germany brings to solving problems from first principles is what is required now, because we are in a phase where we have to cut through the noise of AI and marry it with the substance of AI. We all know that in this room. So it is a moment, generally, for how we can move ahead together.

So it’s an opportunity that I see. Thank you.

Anandi Iyer

That’s very diplomatically put. Sindhu, I come to you. You have just inaugurated a huge campus, with ambitions of taking in more than 14,000 or 15,000 people. The controversial question I’d like to put to you is this: they say that the cost arbitrage is four to one; that is, for the cost of one German engineer, you can get four Indian engineers. But the value arbitrage is just the opposite, right? So we are looking at a demographic dividend in India, but there is a huge aspect of inclusion, productivity, skilling, and reskilling. How does that play into your vision for India?

Sindhu Gangadharan

You started off by talking about the campus, right? We just had Chancellor Merz visiting the campus as well, and India’s importance is clear; I don’t see any doubt about it in any of these conversations. Today, when we look at tools like GitHub Copilot, or Claude Code, the latest that we have, the speed at which you can do things is of course changing. But what it means is that experts can focus on high-value work, right, on high-value domain use cases, and this is the beauty of the pace at which innovation is happening. This is also why companies like SAP, or anybody else here, are invested in India: because they want to build ecosystems here, where the research talent is.

So we really want to bring this forward, and the pace at which our people here in India are able to take things forward, understand the context, drive that leadership, and create IP: that’s the change we’re talking about.

Anandi Iyer

Thank you, Sindhu. I actually have two more provocative questions for my other two panelists, but due to the paucity of time, we’ll have to take a break, because we still have Morley talking about his study. But I think the fact is clear: the opportunity we have in India is an opportunity like never before. It’s an inflection point. It is an opportunity for cooperation and working together with, as Mr. said, shared values and a very clear agenda. I thank you all for being here and for sharing your thoughts. And can you really quickly come and talk about your study, please? Because we just have two minutes and we have to close, so please come quickly. Thank you so much.

Related Resources: Knowledge base sources related to the discussion topics (38)
Factual Notes: Claims verified against the Diplo knowledge base (2)
Confirmed (high confidence)

“Fraunhofer has been a “first mover” in India for 18 years”

The knowledge base identifies Anandi Iyer as Head of Fraunhofer in India with 18 years of experience, confirming Fraunhofer’s 18-year presence in India [S2].

Additional Context (medium confidence)

“India aims to build massive, green‑powered computing infrastructure for AI”

India’s AI infrastructure roadmap includes gigawatt-scale data centre capacity and involvement in designing large-scale, gigawatt-level data centres, adding detail to the claim of massive green-powered computing infrastructure [S117] and [S118].

External Sources (118)
S1
The myth of the lone genius: How scientific revolutions really happen — An epistemological footnote: American historian and philosopher of science Thomas Kuhn has developed a theory of scientif…
S2
IndoGerman AI Collaboration Driving Economic Development and Soc — Thank you for your kind words, Mr. Ensweiler. We now have the pleasure of hearing Dr. Thomas Kuhn, who heads the Divisio…
S4
https://dig.watch/event/india-ai-impact-summit-2026/need-and-impact-of-full-stack-sovereign-ai-by-corover-bharatgpt — Absolutely. And let me tell you the truth. We would be better without investing that much money. Absolutely. In India, …
S5
Announcement of New Delhi Frontier AI Commitments — -Abhishek: Role/Title: Not specified (invited as distinguished leader of organization), Area of expertise: Not specified…
S6
IndoGerman AI Collaboration Driving Economic Development and Soc — -Prashant Doreswamy- Representative from Averior (formerly Continental), expert in automotive technology and R&D
S7
IndoGerman AI Collaboration Driving Economic Development and Soc — – Georg Enzweiler- Dr. Rajkumar Upadhyay- Dattatri Salagame- Sindhu Gangadharan – Georg Enzweiler- Prof. Dr. Kristina S…
S8
WSIS Prizes 2025 Winner’s Ceremony — – **Rajkumar Upadhyay** – Dr., Representative from Centre for Development of Telematics, India India’s AI and Facial Re…
S9
IndoGerman AI Collaboration Driving Economic Development and Soc — -Dr. Rajkumar Upadhyay- CEO of Center for Development of Telematics (CDOT), expert in telecommunications, quantum commun…
S11
IndoGerman AI Collaboration Driving Economic Development and Soc — -Sindhu Gangadharan- CEO (company not explicitly mentioned but appears to be SAP based on context), expert in enterprise…
S12
https://dig.watch/event/india-ai-impact-summit-2026/designing-indias-digital-future-ai-at-the-core-6g-at-the-edge — Morning. My name is Sidhu. I’m from AT &T. One quick question, now that Rajiv is also here. See, across the world, telec…
S13
IndoGerman AI Collaboration Driving Economic Development and Soc — -Prof. Dr. Kristina Sinemus- Minister for Digitalization and Innovation (Germany), background in biotechnology
S14
IndoGerman AI Collaboration Driving Economic Development and Soc — – Dr. Rajkumar Upadhyay- Dattatri Salagame
S15
https://dig.watch/event/india-ai-impact-summit-2026/indogerman-ai-collaboration-driving-economic-development-and-soc — And I think one of the key topics that Fraunhofer can present to the Indian audience is data space. We have created a se…
S16
IndoGerman AI Collaboration Driving Economic Development and Soc — -Anandi Iyer- Head of Fraunhofer in India (18 years), moderator of the session This comprehensive discussion on Indo-Ge…
S17
GermanAsian AI Partnerships Driving Talent Innovation the Future — Glad to do so. Now, of course, we need to bring people together. We yesterday had a tour around our German pavilion. And…
S18
AI Algorithms and the Future of Global Diplomacy — For example, AI in healthcare is a fantastic opportunity for. Indo -German cooperation, there is fantastic data availabl…
S19
How AI Is Transforming Indias Workforce for Global Competitivene — Are we having the same conversations? Are we facing the same kind of issues? I think what I’ve just heard from my fellow…
S20
CEOs unprepared for impact of generative AI, reveals Deloitte survey — A globalsurvey conducted by Deloitte’s AI institutereveals that top executives still need to prepare to handle the impac…
S21
Empowering education through connectivity ( Giga – UNICEF and ITU joint initiative) — A key element in ensuring connectivity for schools is the collaboration between the public and private sectors. The appr…
S22
AI for equality: Bridging the innovation gap — This discussion provided a comprehensive examination of the intersection between AI, women’s empowerment, and digital in…
S23
Germany ramps up AI funding to close global tech gap — Germany is planning to increase its AI research funding by almost one billion eurosin the next two years, aiming to narr…
S24
Germany invests €1.6 billion in AI but profits remain uncertain — In 2025 alone, €1.6 billionis being committedto AI in Germany as part of its AI action plan. The budget, managed by the …
S25
Strengthening bilateral technological cooperation: Indian Prime Minister discusses joint projects in US visit — Indian Prime Minister Narendra Modi is currently undertaking a significant state visit to the United States, where he ha…
S26
From Innovation to Impact_ Bringing AI to the Public — Sharma’s central thesis positions AI not as a threat to employment but as a productivity multiplier that will enable Ind…
S27
India allocates $1.24 billion for AI infrastructure boost — India’s government has greenlit a ₹10,300 Crore ($1.24 billion) fundingprojectto enhance the country’s AI infrastructure…
S28
Democratizing AI Building Trustworthy Systems for Everyone — “of course see there would be a number of challenges but i think as i mentioned that one doesn’t need to really control …
S29
Human-centred AI development: Italian PM’s key message during Washington visit — Italian Prime Minister Giorgia Meloni visited Washington anddiscussed the futureof AI governance with US President Joe B…
S30
WS #205 Contextualising Fairness: AI Governance in Asia — Nidhi Singh: you Hello everyone. Hi and welcome to our session on contextualizing fairness AI governance in India. I …
S31
Are AI safety institutes shaping the future of trustworthy AI? — As AI advances at an extraordinary pace, governments worldwide are implementing measures to manage associated opportunit…
S32
Germany’s path to global AI leadership: a €5 billion action plan — Bettina Stark-Watzinger’s AI Action Planstrives to position Germany and Europeas global leaders in the field of AI. This…
S33
WSIS Plus 20 Review: UN General Assembly High-Level Meeting – Comprehensive Summary — UNCTAD acknowledges the significant environmental impact of the digital economy and the formidable challenges brought by…
S34
Partnering on American AI Exports Powering the Future India AI Impact Summit 2026 — Very high level of consensus with no significant disagreements identified. This strong alignment suggests effective coor…
S35
WS #31 Cybersecurity in AI: balancing innovation and risks — Sergio Mayo Macias: Yes, thank you. Thank you, Gladys. Well, actually, the AI environment in Europe is known and has …
S36
U.S. AI Standards Shaping the Future of Trustworthy Artificial Intelligence — <strong>Sihao Huang:</strong> of these agents work with each other smoothly. And protocols are so important because that…
S37
Conversation: 02 — This reframes trust from a soft concept to a foundational technical requirement, positioning it as critical infrastructu…
S38
Global Digital Compact: AI solutions for a digital economy inclusive and beneficial for all — A strategic ecosystem approach requires early use cases in areas where private sector can lead, areas where public secto…
S39
Embracing the future of e-commerce and AI now (WEF) — Public-private collaboration is recognised as crucial in preparing for a future dominated by technology in trade. The sp…
S40
Planetary Limits of AI: Governance for Just Digitalisation? | IGF 2023 Open Forum #37 — Additionally, they highlight the importance of considering sustainable development goals and respecting human rights in …
S41
Secure Finance Risk-Based AI Policy for the Banking Sector — The discussion revealed several practical challenges in implementing embedded AI governance, including the need for inte…
S42
WS #98 Towards a global, risk-adaptive AI governance framework — Melinda Claybaugh: Great. Thank you so much. Just a little bit of context to explain Meta’s, to explain my company’s …
S43
From principles to practice: Governing advanced AI in action — ## Industry Implementation Challenges The conversation highlighted the urgent need for governance frameworks that can k…
S44
Evolving AI, evolving governance: from principles to action | IGF 2023 WS #196 — During the discussion, the speakers highlighted the ethical challenges associated with technology development. They emph…
S45
IndoGerman AI Collaboration Driving Economic Development and Soc — The newly launched India-Germany AI Pact provides an institutional framework for sustained collaboration, whilst existin…
S46
Comprehensive Report: Preventing Jobless Growth in the Age of AI — The discussion explored AI’s potential across different economic sectors and global development contexts. Kumar identifi…
S47
Global Digital Compact: AI solutions for a digital economy inclusive and beneficial for all — A strategic ecosystem approach requires early use cases in areas where private sector can lead, areas where public secto…
S48
Ad Hoc Consultation: Friday 2nd February, Afternoon session — In today’s session, Germany has officially declared its agreement with the European Union’s statements, reinforcing the …
S49
Introduction: the entanglements of domestic and international politics — More recently, the most sophisticated work on the domestic determinants of foreign policy has focused on “structu…
S50
Relations between Cyprus and Germany (1960 – 1968) — The German policy, based in many occasions on a legalistic approach, as Germany was often accused of, tried not to miss …
S51
Keynote Adresses at India AI Impact Summit 2026 — The discussion revealed significant financial commitments underpinning the partnership. Google announced substantial inv…
S52
GermanAsian AI Partnerships Driving Talent Innovation the Future — Yeah, I’ll just add one point that AI is primarily based on the pattern. So when both the countries are collaborating, n…
S53
Military AI: Operational dangers and the regulatory void — In October 2022, the US Department of Commerce revealed a new export control on semiconductors and computing chips – mat…
S54
Human-centred AI development: Italian PM’s key message during Washington visit — Italian Prime Minister Giorgia Meloni visited Washington anddiscussed the futureof AI governance with US President Joe B…
S55
AI as critical infrastructure for continuity in public services — Building confidence and security in the use of ICTs | Artificial intelligence | Data governance Resilience, data contro…
S56
Democratizing AI Building Trustworthy Systems for Everyone — I think thanks to the contributions from all of those experts. I truly think it is a testament to the industry that we a…
S57
Safe and Responsible AI at Scale Practical Pathways — Ashish Srivastava brought a practitioner’s perspective, highlighting three critical challenges: data interoperability ac…
S58
AI adoption reshapes UK scale-up hiring policy framework — AI adoption is prompting UK scale-ups torecalibrateworkforce policies. Survey data indicates that 33% of founders antici…
S59
Enterprise AI adoption stalls despite heavy investment — AI has moved from experimentation to expectation, yet many enterprise AI rolloutscontinue to stall. Boards demand return…
S60
How the Global South Is Accelerating AI Adoption_ Finance Sector Insights — “I think compute for my companies is a bigger problem than regulation…”[92]”I think that’s one challenge just on the i…
S61
Adoption of agentic AI slowed by data readiness and governance gaps — Agentic AI is emerging as a new stage ofenterprise automation, enabling systems to reason, plan, and act across workflow…
S62
US NTIA recommends policy reforms to foster accountability and trustworthiness in AI systems — The NTIA’sAI Accountability Policy Reportadvocates for increased openness in AI systems, independent inspections, and pe…
S63
Safe, secure, and trustworthy AI: What is it and how do we get there? — While global agreements on core principles are welcome, they need to turn into concrete action. So what does it mean to …
S64
AUDA-NEPAD White Paper: Regulation and Responsible Adoption of AI in Africa Towards Achievement of AU Agenda 2063 — Although the National AI Strategy in the steps is at number 6, it is not a requirement to be in thatposition, it could e…
S65
AI Algorithms and the Future of Global Diplomacy — For example, AI in healthcare is a fantastic opportunity for. Indo -German cooperation, there is fantastic data availabl…
S66
The Foundation of AI Democratizing Compute Data Infrastructure — High level of consensus across diverse stakeholders (academic, government, civil society, private sector, international …
S67
Lightning Talk #247 Nordic AI Centre the Nordic Baltic Path in Responsible AI — Marianne argues that Nordic countries have a strong foundation for collaboration based on shared cultural ties and democ…
S68
Partnering on American AI Exports Powering the Future India AI Impact Summit 2026 — The tone is consistently optimistic, collaborative, and forward-looking throughout the discussion. Speakers emphasize “l…
S69
IndoGerman AI Collaboration Driving Economic Development and Soc — AI is predicted to contribute between $5 and $15 trillion to the global GDP by 2030. But there are also questions, of co…
S70
AI Impact Summit 2026: Global Ministerial Discussions on Inclusive AI Development — Namaste. Honorable Minister Vaishnav, Your Excellency’s colleagues, let me begin by thanking our host, Prime Minister Mo…
S71
https://dig.watch/event/india-ai-impact-summit-2026/indogerman-ai-collaboration-driving-economic-development-and-soc — And circular economy. that government, academia, and industry work hand -in -hand. By promoting research and development…
S72
GermanAsian AI Partnerships Driving Talent Innovation the Future — Ms. Kofler, please come up. There’s no signs. You can choose in the middle. Next panelist, I would really warmly welcome…
S73
Scaling Trusted AI_ How France and India Are Building Industrial &amp; Innovation Bridges — The technical requirements for trustworthy AI emerged through multiple perspectives. Valerian Ghez from photonic quantum…
S74
WS #31 Cybersecurity in AI: balancing innovation and risks — Sergio Mayo Macias: Yes, thank you. Thank you, Gladys. Well, actually, the AI environment in Europe is known and has …
S75
Multistakeholder Partnerships for Thriving AI Ecosystems — We’re also joined by Nakul Jain, who’s the CEO and managing director of Wadwani AI Global. Nakul is a mission -driven te…
S76
Safe and Responsible AI at Scale Practical Pathways — Ashish Srivastava brought a practitioner’s perspective, highlighting three critical challenges: data interoperability ac…
S77
Global Digital Compact: AI solutions for a digital economy inclusive and beneficial for all — A strategic ecosystem approach requires early use cases in areas where private sector can lead, areas where public secto…
S78
Embracing the future of e-commerce and AI now (WEF) — Public-private collaboration is recognised as crucial in preparing for a future dominated by technology in trade. The sp…
S79
Public-Private Partnerships in Online Content Moderation | IGF 2023 Open Forum #95 — In conclusion, forming partnerships between the public and private sectors can be challenging due to language barriers a…
S80
Germany ramps up AI funding to close global tech gap — Germany is planning to increase its AI research funding by almost one billion eurosin the next two years, aiming to narr…
S81
Germany’s path to global AI leadership: a €5 billion action plan — Bettina Stark-Watzinger’s AI Action Planstrives to position Germany and Europeas global leaders in the field of AI. This…
S82
Germany invests €1.6 billion in AI but profits remain uncertain — In 2025 alone, €1.6 billionis being committedto AI in Germany as part of its AI action plan. The budget, managed by the …
S83
WS #98 Towards a global, risk-adaptive AI governance framework — Melinda Claybaugh: Great. Thank you so much. Just a little bit of context to explain Meta’s, to explain my company’s …
S84
Secure Finance Risk-Based AI Policy for the Banking Sector — The discussion revealed several practical challenges in implementing embedded AI governance, including the need for inte…
S85
Evolving AI, evolving governance: from principles to action | IGF 2023 WS #196 — During the discussion, the speakers highlighted the ethical challenges associated with technology development. They emph…
S86
From principles to practice: Governing advanced AI in action — ## Industry Implementation Challenges The conversation highlighted the urgent need for governance frameworks that can k…
S87
Opening Remarks (50th IFDT) — The overall tone was formal yet warm and celebratory. Speakers expressed pride in the IFDT’s accomplishments and gratitu…
S88
Panel Discussion AI &amp; Cybersecurity _ India AI Impact Summit — The discussion maintained a consistently optimistic and collaborative tone throughout. Speakers expressed enthusiasm and…
S89
Building the Future STPI Global Partnerships &amp; Startup Felicitation 2026 — The tone was consistently optimistic, collaborative, and forward-looking throughout the session. It maintained a formal …
S90
World Economic Forum Annual Meeting Closing Remarks: Summary — The tone is consistently positive, celebratory, and grateful throughout the discussion. It begins with formal appreciati…
S91
Partner2Connect High-Level Dialogue — The tone was consistently optimistic and collaborative throughout the discussion. It began with celebratory announcement…
S92
AI Development Beyond Scaling: Panel Discussion Report — The tone began as optimistic and technically focused, with researchers enthusiastically presenting their innovative appr…
S93
Laying the foundations for AI governance — The tone was collaborative and constructive throughout, with panelists building on each other’s points rather than disag…
S94
Comprehensive Discussion Report: AI’s Transformative Potential for Global Economic Growth — The conversation maintains a consistently optimistic and enthusiastic tone throughout. Both speakers demonstrate genuine…
S95
Building the AI-Ready Future From Infrastructure to Skills — The tone was consistently optimistic and collaborative throughout, with speakers expressing excitement about AI’s potent…
S96
Comprehensive Summary: The Future of Robotics and Physical AI — The tone was optimistic yet realistic throughout. The panelists demonstrated enthusiasm about recent breakthroughs and n…
S97
Strengthening Corporate Accountability on Inclusive, Trustworthy, and Rights-based Approach to Ethical Digital Transformation — The discussion maintained a professional, collaborative tone throughout, with speakers demonstrating expertise while ack…
S98
Shaping the Future AI Strategies for Jobs and Economic Development — The discussion maintained an optimistic yet pragmatic tone throughout. While acknowledging significant challenges around…
S99
Day 0 Event #257 Enhancing Data Governance in the Public Sector — The discussion maintained a pragmatic and collaborative tone throughout, with speakers acknowledging both opportunities …
S100
Ensuring Safe AI_ Monitoring Agents to Bridge the Global Assurance Gap — The tone was collaborative and solution-oriented throughout, with participants acknowledging both the urgency and comple…
S101
Leaders TalkX: Partnership pivot: rethinking cooperation in the digital era — The discussion maintained a professional, collaborative, and forward-looking tone throughout. Despite the moderator’s ac…
S102
WS #193 Cybersecurity Odyssey Securing Digital Sovereignty Trust — The discussion maintained a consistently collaborative and constructive tone throughout. Speakers demonstrated mutual re…
S103
Democratizing AI: Open foundations and shared resources for global impact — The tone was consistently collaborative, optimistic, and forward-looking throughout the discussion. Speakers maintained …
S104
High Level Session 2: Digital Public Goods and Global Digital Cooperation — The discussion maintained a consistently positive, collaborative, and forward-looking tone throughout. Speakers demonstr…
S105
Strengthen Digital Governance and International Cooperation to Build an Inclusive Digital Future — The discussion maintained a consistently collaborative and optimistic tone throughout, with speakers emphasizing partner…
S106
Welfare for All Ensuring Equitable AI in the Worlds Democracies — The conversation maintained an optimistic and collaborative tone throughout, with participants sharing practical solutio…
S107
https://dig.watch/event/india-ai-impact-summit-2026/secure-finance-risk-based-ai-policy-for-the-banking-sector — Good afternoon to everyone. Distinguished policy makers, regulators, industry leaders, members of the FinTech community,…
S108
Rule of Law for Data Governance | IGF 2023 Open Forum #50 — Alibaba Cloud Intelligence Group has played a significant role in cloud-based data governance, offering a range of cloud…
S109
Driving Indias AI Future Growth Innovation and Impact — In the Indian context, as the audience is aware, we had a lot of catching up to do. And it’s fair to say that a lot of w…
S110
Open Internet Inclusive AI Unlocking Innovation for All — “What you need are highly performant, extremely low cost models that are a billion parameters to maybe 100 or 200 billio…
S111
Opening of the session — ## Specific Proposals and Amendments Chair: I thank the High Representative for Disarmament Affairs for her statement. …
S112
https://dig.watch/event/india-ai-impact-summit-2026/ai-algorithms-and-the-future-of-global-diplomacy — The first is the AI panel. I think it’s called Independent Scientific International Panel on AI, but I could be wrong wi…
S113
Unpacking Competencies, Equipping People for Success — Muneera Khalifa Hamad: Good afternoon. If you allow me, I prefer to stand. I have a strap around my wrist that will ju…
S114
Main Topic 2 –  GovTech Dynamics: Navigating Innovation and Challenges in Public Services — The host expressed satisfaction with the speeches, praising their exceptional quality and their influence in motivating …
S115
Can (generative) AI be compatible with Data Protection? | IGF 2023 #24 — Jonathan Mendoza Iserte:Thank you, Luca. Good afternoon. How are you? I want to thank the organizers for bringing this t…
S116
REGULATING THE DIGITAL ECONOMY: DILEMMAS, TRADE OFFS AND POTENTIAL OPTIONS — 13 On the impacts of market concentration in global markets, see UNCTAD (2017). The concept of ‘superstar firms’ was fir…
S117
From KW to GW Scaling the Infrastructure of the Global AI Economy — He points out his involvement in designing large‑scale, gigawatt‑level data centers, underscoring India’s growing capaci…
S118
Indias Roadmap to an AGI-Enabled Future — Shri Ghanshyam Prasad, Chairperson of Central Electricity Authority, outlined India’s energy readiness for AI infrastruc…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Anandi Iyer
2 arguments · 129 words per minute · 1839 words · 852 seconds
Argument 1
MOU & AI Pact enable Indo‑German AI partnership (Anandi Iyer)
EXPLANATION
Anandi highlights that the existing Memorandum of Understanding (MOU) has already led to several initiated activities, demonstrating early progress in collaboration. She also points to the newly launched India‑Germany AI Pact as a formal mechanism to deepen AI cooperation across multiple domains.
EVIDENCE
She notes that many activities outlined in the MOU have already kick-started, indicating tangible progress (sentence [1]). She further references the launch of the India-Germany AI Pact by Ministers Vaishnaw and Wildberger, describing it as a partnership focused on implementation-driven collaboration across government, industry, research, skill development, and innovation (sentences [54-55]).
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Cross-border AI collaboration and talent exchange are highlighted in the German-Asian AI Partnerships overview, underscoring the role of MOUs and partnership agreements [S17]; the Indo-German AI Collaboration session describes concrete activities already launched under the existing MOU and the new AI Pact [S2]; Germany’s multi-billion-euro AI funding programme further confirms the commitment to joint AI work [S23].
MAJOR DISCUSSION POINT
Indo‑German AI Collaboration Framework
AGREED WITH
Georg Enzweiler
Argument 2
CEOs worry about AI‑induced paradigm shift and market disruption (Anandi Iyer)
EXPLANATION
Anandi raises the concern that CEOs are staying up at night thinking about how AI will disrupt existing business models and market dynamics. She frames this as a central challenge for industry leaders as they navigate AI integration.
EVIDENCE
She explicitly asks the CEOs what keeps them awake at night, mentioning the need to balance AI innovation with existing market pressures and the potential for disruption (sentences [291-295]).
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
A Deloitte survey shows that senior executives feel unprepared for the disruptive impact of generative AI [S20]; a discussion on AI’s transformation of India’s workforce notes executives’ anxiety about market shifts [S19]; analysis of AI as a productivity multiplier points out that CEOs are concerned about rapid paradigm changes [S26].
MAJOR DISCUSSION POINT
Industry Adoption Challenges and Strategies
AGREED WITH
Dr. Rajkumar Upadhyay, Prof. Dr. Kristina Sinemus
Georg Enzweiler
2 arguments · 100 words per minute · 470 words · 280 seconds
Argument 1
AI Pact as implementation‑driven collaboration across sectors (Georg Enzweiler)
EXPLANATION
Georg describes the AI Pact as a concrete, implementation‑focused agreement that will drive AI collaboration in industry, manufacturing, talent development, joint research, and social good. He emphasizes its role in translating policy into actionable projects.
EVIDENCE
He states that the AI Pact will include aspects such as AI for industry and manufacturing, talent, skills, mobility, joint research, innovation, infrastructure, and AI for social good (sentences [54-55]).
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The German-Asian AI Partnerships brief stresses implementation-focused bilateral agreements, mirroring the AI Pact’s sector-wide approach [S17]; the Indo-German AI Collaboration session outlines how the AI Pact translates policy into concrete projects across industry, talent and research [S2]; Germany’s expanded AI funding reinforces the resources available for such implementation-driven work [S23].
MAJOR DISCUSSION POINT
Indo‑German AI Collaboration Framework
AGREED WITH
Anandi Iyer
Argument 2
AI lighthouses and 60 sustainability projects illustrate German investment (Georg Enzweiler)
EXPLANATION
Georg points out that Germany has invested in AI lighthouses and funded over 60 AI projects aimed at sustainability since 2020, showcasing a strong commitment to AI-driven climate and environmental solutions.
EVIDENCE
He mentions that Germany is investing in AI lighthouses and has funded more than 60 projects leveraging AI for sustainability, covering topics from wildfire prevention to renewable energy and biodiversity monitoring (sentences [44-46]).
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Germany’s AI Action Plan, which adds nearly €1 billion of research funding and a €1.6 billion budget for 2025, demonstrates the scale of investment that underpins lighthouse and sustainability projects [S23][S24].
MAJOR DISCUSSION POINT
Funding, Innovation Ecosystems, and Startup Support
AGREED WITH
Prof. Dr. Kristina Sinemus, Dr. Rajkumar Upadhyay
Dr. Rajkumar Upadhyay
3 arguments · 148 words per minute · 1358 words · 547 seconds
Argument 1
Joint work on smart manufacturing, agriculture, cybersecurity (Dr. Rajkumar Upadhyay)
EXPLANATION
Dr. Upadhyay outlines specific areas where India and Germany can cooperate: establishing smart manufacturing standards, enhancing agricultural productivity, and strengthening cybersecurity capabilities using AI. He stresses the complementary strengths of both nations in these sectors.
EVIDENCE
He discusses developing smart manufacturing standards, cross-border industrial data flows, and energy efficiency (sentences [153-155]); he highlights agriculture as a key partner for improving yield and farmer income (sentences [156-158]); and he describes the massive cyber-attack volume in India and the need for AI-driven security solutions (sentences [158-162]).
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
India’s AI-driven facial-recognition solution for telecom SIM verification showcases a concrete cybersecurity collaboration with German partners [S8]; a telecom AI automation case study describes large-scale AI-based security monitoring and fraud detection, relevant to joint cyber work [S10]; India’s recent $1.24 billion AI infrastructure boost provides context for the $2 bn investment claim and joint project funding [S27].
MAJOR DISCUSSION POINT
Indo‑German AI Collaboration Framework
AGREED WITH
Anandi Iyer, Prof. Dr. Kristina Sinemus
Argument 2
AI applications in smart manufacturing, agriculture, cybersecurity, quantum, telecom (Dr. Rajkumar Upadhyay)
EXPLANATION
Dr. Upadhyay expands on sector‑specific AI use cases, covering smart manufacturing, agricultural yield improvement, large‑scale cybersecurity monitoring, 5G/6G rollout, and quantum communication research. He presents these as priority domains for bilateral cooperation.
EVIDENCE
He mentions smart manufacturing initiatives (sentences [153-155]), agricultural AI projects (sentences [156-158]), cybersecurity data-rate challenges and AI-based detection (sentences [158-162]), India’s 5G/6G deployment and desire for joint quantum work (sentences [165-170] and [171-176]), and the massive data rates of 10 TB/s that need real-time AI processing (sentences [159-162]).
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The telecom facial-recognition and AI automation examples illustrate AI use in cybersecurity and telecom, while India’s AI infrastructure funding signals capacity for smart manufacturing, agriculture and quantum research initiatives [S8][S10][S27].
MAJOR DISCUSSION POINT
Sector‑Specific AI Applications and Opportunities
Argument 3
India’s $2 bn AI investment and GPU distribution empower ecosystem (Dr. Rajkumar Upadhyay)
EXPLANATION
Dr. Upadhyay notes that India has committed over $2 billion to AI and distributed 38,000 GPUs to startups, positioning the country as a major AI talent hub with significant economic potential. He links this investment to projected economic gains.
EVIDENCE
He states that India is investing more than $2 billion in AI and has provided 38,000 GPUs to startups, which could generate $1.7 trillion in economic value by 2035 (sentences [139-141]).
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
India’s $1.24 billion AI infrastructure programme, which includes large-scale GPU distribution to startups, aligns with the reported $2 bn investment and highlights the ecosystem-building effort [S27]; broader forecasts of AI’s contribution to global GDP provide additional economic context [S15].
MAJOR DISCUSSION POINT
Funding, Innovation Ecosystems, and Startup Support
AGREED WITH
Georg Enzweiler, Prof. Dr. Kristina Sinemus
Prof. Dr. Kristina Sinemus
4 arguments · 116 words per minute · 1131 words · 584 seconds
Argument 1
Shared democratic values guide joint AI agenda (Prof. Dr. Kristina Sinemus)
EXPLANATION
Prof. Sinemus emphasizes that Germany and India share democratic values, which should underpin their collaborative AI efforts. She argues that these shared principles will ensure AI development aligns with social good and rights‑based frameworks.
EVIDENCE
She explicitly states that Germany's and India's thinking is grounded in the same democratic values and that this common ground can guide joint AI work (sentences [274-275]).
MAJOR DISCUSSION POINT
Indo‑German AI Collaboration Framework
Argument 2
German investment in AI quality testing and trustworthy standards (Prof. Dr. Kristina Sinemus)
EXPLANATION
She describes Germany’s creation of an AI quality and testing hub that develops methods and tools to assess AI systems, turning trustworthy AI from a slogan into practical criteria for industry deployment.
EVIDENCE
She outlines the establishment of an AI quality and testing hub in Hesse, a public-private company that creates methods and tools to test AI systems, translating trust into testable criteria (sentences [267-270]).
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The AI safety institutes article discusses the creation of dedicated testing hubs and standards for trustworthy AI, mirroring Germany’s quality-testing hub initiative [S31]; the Democratizing AI piece outlines challenges and approaches to building trustworthy AI systems, supporting the emphasis on testable criteria [S28].
MAJOR DISCUSSION POINT
Trustworthy and Responsible AI
AGREED WITH
Dr. Thomas Kuhn, Dattatri Salagame
Argument 3
Healthcare and plant‑disease AI projects funded by Germany (Prof. Dr. Kristina Sinemus)
EXPLANATION
Prof. Sinemus provides concrete examples of German‑funded AI projects in healthcare (a load‑bearing robotic wheelchair) and agriculture (early plant‑disease detection using satellite data), illustrating the impact of AI on social good.
EVIDENCE
She cites funding of a €1.8 million project for a load-bearing robotic wheelchair (sentences [247-250]) and the RISCA cardiology decision-support system (sentences [253-255]), as well as a plant-disease detection project using satellite data to reduce pesticide use (sentences [258-262]).
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The Indo-German AI Collaboration session describes a walking-robot project for mobility-impaired users, exemplifying AI-driven healthcare solutions funded by Germany [S2]; a separate source notes Germany’s capacity to invest in AI for healthcare, highlighting data-rich cooperation opportunities with India [S18]; overall German AI funding programmes provide the financial backdrop for such projects [S23].
MAJOR DISCUSSION POINT
Sector‑Specific AI Applications and Opportunities
AGREED WITH
Dr. Rajkumar Upadhyay, Anandi Iyer
Argument 4
€60 bn German AI funding supports 170 startups and innovation labs (Prof. Dr. Kristina Sinemus)
EXPLANATION
She reports that Germany has allocated over €60 billion to AI, investing in 170 startups and establishing AI innovation labs, demonstrating a robust national commitment to AI ecosystem development.
EVIDENCE
She mentions a funding program exceeding €60 billion that has supported 170 startups and created AI innovation labs (sentences [242-244]).
MAJOR DISCUSSION POINT
Funding, Innovation Ecosystems, and Startup Support
AGREED WITH
Georg Enzweiler, Dr. Rajkumar Upadhyay
Dr. Thomas Kuhn
2 arguments · 122 words per minute · 1096 words · 537 seconds
Argument 1
Trustworthiness and data spaces are core to industrial AI (Dr. Thomas Kuhn)
EXPLANATION
Dr. Kuhn argues that for AI to be adopted in industry, results must be reliable and trustworthy, and that secure data spaces are essential for training AI models on sensitive corporate data. He links trustworthiness to the broader adoption of industrial AI.
EVIDENCE
He discusses the need for reliable AI outputs and trustworthiness as a research focus (sentences [72-76]), and explains that data spaces enable secure, rule-based data sharing for AI training (sentences [82-84]).
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The AI safety institutes overview stresses the importance of trustworthy AI and secure data spaces for industrial applications [S31]; the Democratizing AI article discusses the need for reliable, rule-based data sharing environments to enable trustworthy model training [S28].
MAJOR DISCUSSION POINT
Trustworthy and Responsible AI
AGREED WITH
Prof. Dr. Kristina Sinemus, Dattatri Salagame
Argument 2
Fraunhofer’s alliance of 30 institutes provides research infrastructure (Dr. Thomas Kuhn)
EXPLANATION
He describes Fraunhofer’s network of more than 30 institutes collaborating on AI strategies, each bringing specific expertise, thereby creating a substantial research infrastructure for AI development and deployment.
EVIDENCE
He notes that the alliance comprises over 30 institutes that team up on AI, supporting best practices, studies, and expert opinions across many fields (sentences [85-87]).
MAJOR DISCUSSION POINT
Funding, Innovation Ecosystems, and Startup Support
Anshuman Awasthi
2 arguments · 113 words per minute · 255 words · 135 seconds
Argument 1
AI‑driven vehicle features and operational efficiency at Mercedes (Anshuman Awasthi)
EXPLANATION
Anshuman states that Mercedes‑Benz has been integrating AI into its vehicles since 2019, offering advanced driver‑assistance features, and is now leveraging AI to improve operational efficiency across the company.
EVIDENCE
He mentions that Mercedes-Benz was the first automotive company to bring AI into cars in 2019 and that AI is now used to enhance vehicle experiences and operational efficiency (sentences [370-376]).
MAJOR DISCUSSION POINT
Sector‑Specific AI Applications and Opportunities
Argument 2
AI seen as non‑challenge; focus on integration with legacy systems (Anshuman Awasthi)
EXPLANATION
Anshuman asserts that AI itself is not a challenge; the real work lies in integrating AI capabilities with existing legacy systems and processes, emphasizing smooth adoption rather than technological barriers.
EVIDENCE
He says AI is not a challenge and that the focus is on integration with legacy systems, noting that the 2019 AI application was developed by MBRDI (sentences [367-371] and [384-385]).
MAJOR DISCUSSION POINT
Industry Adoption Challenges and Strategies
Prashant Doreswamy
1 argument · 148 words per minute · 449 words · 180 seconds
Argument 1
AI boosts R&D efficiency, product quality, and fraud detection in industry (Prashant Doreswamy)
EXPLANATION
Prashant highlights that AI has enabled a 20% increase in R&D efficiency, improved product quality through automated testing, and facilitated large‑scale fraud detection platforms that can block spoofed calls within milliseconds.
EVIDENCE
He cites a 20% improvement in R&D efficiency (sentences [339-342]), AI-enhanced quality improvements in manufacturing (sentences [343-350]), and a fraud-management platform that distinguishes real from spoofed calls in 5 ms, handling millions of calls per day (sentences [180-184]).
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The AI automation in telecom case study details a fraud-management platform that blocks spoofed calls within milliseconds, illustrating AI-enabled quality and security gains [S10]; Deloitte’s CEO survey highlights the business pressure to adopt AI for efficiency improvements [S20]; macro-level AI economic impact forecasts provide additional context for the value of such gains [S15].
MAJOR DISCUSSION POINT
Sector‑Specific AI Applications and Opportunities
Dattatri Salagame
2 arguments · 139 words per minute · 341 words · 146 seconds
Argument 1
Bosch’s need to ensure trustworthy AI amid paradigm shift (Dattatri Salagame)
EXPLANATION
Dattatri explains that Bosch is deploying AI across products such as autonomous driving and healthcare, but must manage the paradigm shift in engineering and ensure AI remains trustworthy as it disrupts traditional development processes.
EVIDENCE
He describes Bosch’s AI deployments in autonomous driving, AI cockpit, and healthcare, and notes the anxiety around the paradigm shift in engineering and software development (sentences [300-305]).
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The AI safety institutes discussion outlines industry-wide requirements for trustworthy AI, directly relevant to Bosch’s challenge of maintaining trust while deploying AI across products [S31]; the Democratizing AI piece highlights the broader difficulty of ensuring trustworthiness during rapid AI adoption [S28].
MAJOR DISCUSSION POINT
Trustworthy and Responsible AI
AGREED WITH
Dr. Thomas Kuhn, Prof. Dr. Kristina Sinemus
Argument 2
Balancing new AI products with internal engineering changes (Dattatri Salagame)
EXPLANATION
He points out that Bosch must simultaneously launch AI‑centric products while re‑engineering its internal software development processes, highlighting the tension between market innovation and internal capability transformation.
EVIDENCE
He mentions navigating a new business model and consumer behavior while also confronting the paradigm shift in engineering and software development caused by AI (sentences [304-307]).
MAJOR DISCUSSION POINT
Industry Adoption Challenges and Strategies
Sindhu Gangadharan
2 arguments · 160 words per minute · 672 words · 250 seconds
Argument 1
Explainability, fairness and compliance required for responsible AI (Sindhu Gangadharan)
EXPLANATION
Sindhu stresses that SAP must embed AI responsibly by ensuring explainability, transparency, fairness, auditability, and compliance throughout its enterprise processes, especially as workflows become more autonomous.
EVIDENCE
She outlines the need for explainability, transparency, fairness, and auditability in AI-driven decisions, and the importance of providing customers with compliant, secure, and ethical AI solutions (sentences [321-324]).
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
AI safety institutes emphasize explainability, fairness, auditability and compliance as core pillars of responsible AI systems [S31]; the Democratizing AI article further stresses transparency and compliance as essential for building trust in autonomous workflows [S28].
MAJOR DISCUSSION POINT
Trustworthy and Responsible AI
Argument 2
Managing client anxieties through transparent, autonomous workflows (Sindhu Gangadharan)
EXPLANATION
She argues that SAP must address client concerns by delivering AI solutions that are explainable and transparent, especially as parts of workflows become fully autonomous, ensuring trust and compliance.
EVIDENCE
She discusses the need for explainability, transparency, fairness, and auditability in autonomous workflows, emphasizing that the human remains in the loop but decisions must be clearly justified (sentences [321-324]).
MAJOR DISCUSSION POINT
Industry Adoption Challenges and Strategies
Agreements
Agreement Points
Indo‑German AI collaboration framework (MOU & IA Pact)
Speakers: Anandi Iyer, Georg Enzweiler
MOU & IA Pact enable Indo‑German AI partnership (Anandi Iyer)
IA Pact as implementation‑driven collaboration across sectors (Georg Enzweiler)
Both speakers highlight that the existing MOU has already triggered activities and that the newly launched India-Germany IA Pact provides a concrete, implementation-focused mechanism for AI cooperation across government, industry, research, skill development and social good [1][54-55].
POLICY CONTEXT (KNOWLEDGE BASE)
The MOU and IA Pact create an institutional framework for sustained Indo-German AI cooperation, echoing the model highlighted in the IndoGerman AI Collaboration report and building on earlier successful partnerships such as the Fraunhofer-CDOT MOU [S45].
Trustworthiness and data spaces are essential for industrial AI
Speakers: Dr. Thomas Kuhn, Prof. Dr. Kristina Sinemus, Dattatri Salagame
Trustworthiness and data spaces are core to industrial AI (Dr. Thomas Kuhn)
German investment in AI quality testing and trustworthy standards (Prof. Dr. Kristina Sinemus)
Bosch’s need to ensure trustworthy AI amid paradigm shift (Dattatri Salagame)
All three emphasize that reliable, trustworthy AI outputs are a prerequisite for industry adoption and that secure, rule-based data spaces and dedicated testing hubs are needed to achieve this, linking trustworthiness with data governance and security [72-76][82-84][267-270][300-307].
POLICY CONTEXT (KNOWLEDGE BASE)
Policy discussions emphasize that trustworthy AI for industry requires secure data spaces, data sovereignty and verification mechanisms, as outlined in the EU-centric trustworthiness guidelines and the NTIA’s accountability recommendations [S55][S62][S63].
Large‑scale financial commitments underpin AI ecosystems in both countries
Speakers: Georg Enzweiler, Prof. Dr. Kristina Sinemus, Dr. Rajkumar Upadhyay
AI lighthouses and 60 sustainability projects illustrate German investment (Georg Enzweiler)
€60 bn German AI funding supports 170 startups and innovation labs (Prof. Dr. Kristina Sinemus)
India’s $2 bn AI investment and GPU distribution empower ecosystem (Dr. Rajkumar Upadhyay)
German speakers point to AI lighthouses, €60 bn funding and 170 supported startups, while the Indian speaker cites a $2 bn AI budget and distribution of 38,000 GPUs, showing a shared commitment to building robust AI innovation ecosystems [44-46][242-244][139-141].
POLICY CONTEXT (KNOWLEDGE BASE)
Significant financial commitments, including Google’s multi-billion-dollar investment in an AI hub in Vizag, illustrate the scale of funding underpinning the bilateral ecosystem, as reported at the India AI Impact Summit 2026 [S51][S45].
Sector‑specific AI collaboration opportunities (manufacturing, agriculture, cybersecurity, health)
Speakers: Dr. Rajkumar Upadhyay, Anandi Iyer, Prof. Dr. Kristina Sinemus
Joint work on smart manufacturing, agriculture, cybersecurity (Dr. Rajkumar Upadhyay)
CEOs worry about AI‑induced paradigm shift and market disruption (Anandi Iyer)
Healthcare and plant‑disease AI projects funded by Germany (Prof. Dr. Kristina Sinemus)
Speakers converge on applying AI to smart manufacturing, agriculture, cybersecurity and health, noting both the opportunities and the anxieties of industry leaders about paradigm shifts and the need for concrete projects and funding [153-162][291-295][245-262].
POLICY CONTEXT (KNOWLEDGE BASE)
Sector-specific collaboration targets manufacturing, agriculture, cybersecurity and health, reflecting opportunities identified in the IndoGerman AI Collaboration report and broader analyses of AI’s role in agriculture, manufacturing and healthcare [S45][S46][S65].
Similar Viewpoints
Both stress that the IA Pact builds on the earlier MOU to create a concrete, implementation‑oriented framework for Indo‑German AI cooperation [1][54-55].
Speakers: Anandi Iyer, Georg Enzweiler
MOU & IA Pact enable Indo‑German AI partnership (Anandi Iyer)
IA Pact as implementation‑driven collaboration across sectors (Georg Enzweiler)
Both argue that trustworthy AI requires systematic testing, standards and secure data‑sharing environments to be viable in industry [72-76][267-270].
Speakers: Dr. Thomas Kuhn, Prof. Dr. Kristina Sinemus
Trustworthiness and data spaces are core to industrial AI (Dr. Thomas Kuhn)
German investment in AI quality testing and trustworthy standards (Prof. Dr. Kristina Sinemus)
Both highlight Germany’s substantial financial commitment to AI, both through targeted lighthouse projects and a broad €60 bn funding programme supporting startups and labs [44-46][242-244].
Speakers: Georg Enzweiler, Prof. Dr. Kristina Sinemus
AI lighthouses and 60 sustainability projects illustrate German investment (Georg Enzweiler)
€60 bn German AI funding supports 170 startups and innovation labs (Prof. Dr. Kristina Sinemus)
Both stress that enterprise AI deployments must be trustworthy, explainable, fair and compliant to address client anxieties and regulatory expectations [300-307][321-324].
Speakers: Dattatri Salagame, Sindhu Gangadharan
Bosch’s need to ensure trustworthy AI amid paradigm shift (Dattatri Salagame)
Explainability, fairness and compliance required for responsible AI (Sindhu Gangadharan)
Unexpected Consensus
Emphasis on shared democratic values as a guiding principle for AI collaboration
Speakers: Prof. Dr. Kristina Sinemus, Georg Enzweiler
Shared democratic values guide joint AI agenda (Prof. Dr. Kristina Sinemus)
Welfare for all as the summit’s motto, implying inclusive, values‑based AI (Georg Enzweiler)
While the German academic focuses on democratic values shaping AI policy, the German senior official frames the summit’s purpose around inclusive welfare, revealing an unexpected alignment on values-based AI governance across policy and academic domains [274-275][42-43].
POLICY CONTEXT (KNOWLEDGE BASE)
Both sides repeatedly cite shared democratic values (freedom of speech, transparency and human rights) as a cornerstone of the partnership, a theme echoed in the Global Digital Compact and summit remarks emphasizing collaborative optimism [S47][S68][S65].
Overall Assessment

The discussion shows strong convergence on three pillars: (1) a formal Indo‑German AI collaboration framework anchored by the MOU and IA Pact; (2) the necessity of trustworthy, testable AI supported by secure data spaces; (3) substantial bilateral financial commitments to build AI ecosystems, with sector‑specific opportunities in manufacturing, agriculture, health and cybersecurity.

High consensus – most speakers, from government, research institutes and industry, echo the same priorities, indicating a solid foundation for coordinated AI policy, funding and implementation that can accelerate joint innovation while addressing trust, security and societal impact.

Differences
Different Viewpoints
Whether AI itself is a challenge for industry adoption
Speakers: Anshuman Awasthi, Dattatri Salagame, Sindhu Gangadharan, Dr. Thomas Kuhn
AI is not a challenge; focus is on integration with legacy systems (Anshuman Awasthi)
AI introduces a paradigm shift in engineering and creates anxiety (Dattatri Salagame)
Responsible AI requires explainability, transparency, fairness and auditability, implying significant challenges (Sindhu Gangadharan)
Trustworthiness of AI results is a core research focus, indicating AI poses reliability challenges (Dr. Thomas Kuhn)
Anshuman asserts that AI itself poses no challenge and the work lies in integrating it with existing systems [367-371], while Dattatri and Sindhu highlight substantial challenges related to paradigm shifts, trustworthiness, and responsible deployment [300-307][321-324]. Dr. Kuhn also stresses the need for trustworthy AI, underscoring that reliability is a key hurdle [72-76][106-108].
POLICY CONTEXT (KNOWLEDGE BASE)
Industry leaders note that AI adoption faces practical hurdles such as fragmented data, compute limitations and governance gaps, which cause many enterprise rollouts to stall despite heavy investment [S59][S60][S61].
Preferred mechanisms to achieve trustworthy and responsible AI
Speakers: Dr. Thomas Kuhn, Sindhu Gangadharan, Prof. Dr. Kristina Sinemus, Dattatri Salagame
Use uncertainty wrappers and secure data spaces to provide reliability metrics (Dr. Thomas Kuhn)
Embed explainability, transparency, fairness and auditability into AI workflows (Sindhu Gangadharan)
Create an AI quality and testing hub that develops methods and tools to test AI systems (Prof. Dr. Kristina Sinemus)
Emphasise trustworthiness as a research priority and develop AI models with built‑in trust (Dattatri Salagame)
Kuhn proposes technical solutions such as uncertainty wrappers and rule-based data spaces for trustworthy AI [106-108][82-84], Sindhu stresses governance-level safeguards like explainability and auditability [321-324], Sinemus points to institutional testing infrastructure to certify AI [267-270], while Dattatri underscores the broader need for trustworthy AI without detailing a specific toolset [101-106].
POLICY CONTEXT (KNOWLEDGE BASE)
Practitioners propose mechanisms like data interoperability standards, independent inspections and regulatory reforms to achieve trustworthy and responsible AI, as detailed in safe-AI at scale discussions and NTIA policy recommendations [S57][S62][S63].
Optimism versus caution about AI’s economic and societal impact
Speakers: Dr. Rajkumar Upadhyay, Prof. Dr. Kristina Sinemus
AI investment will generate $1.7 trillion economic value for India by 2035 (Dr. Rajkumar Upadhyay)
AI will not automatically lead to better outcomes; results depend on choices, funding, and regulation (Prof. Dr. Kristina Sinemus)
Upadhyay highlights massive economic gains from AI, citing a $1.7 trillion value projection [139-141], whereas Sinemus cautions that AI’s benefits are not guaranteed and hinge on responsible choices and governance [276-277].
POLICY CONTEXT (KNOWLEDGE BASE)
Debate balances optimism about AI-driven growth with caution over workforce impacts and stalled deployments, a tension reflected in UK scale-up hiring surveys, enterprise adoption studies and the upbeat yet measured tone of the India AI Impact Summit [S58][S59][S68].
Unexpected Differences
Cost‑ratio question about SAP campus capacity
Speakers: Anandi Iyer, Sindhu Gangadharan
Anandi asks whether the cost of the campus works out to four Indian employees per German employee (sentences 400‑403)
Sindhu repeatedly answers “I don’t know” without providing a figure (sentences 410‑413)
Anandi expects a quantitative response to a cost-efficiency query, but Sindhu avoids the question, leading to an unexpected lack of substantive answer on a financial-mechanism issue [400-403][410-413].
Interpretation of German interest as offensive or defensive
Speakers: Anandi Iyer, Dattatri Salagame
Anandi asks if heightened German interest in India is offensive or defensive (sentences 386‑387)
Dattatri replies that the interest was “long overdue” and does not address the offensive/defensive framing (sentences 387-390)
Anandi frames the strategic question in geopolitical terms, while Dattatri sidesteps the dichotomy, offering a diplomatic rather than analytical response, which is an unexpected divergence in the discussion of strategic intent [386-387][387-390].
Overall Assessment

The participants largely share a common goal of deepening Indo‑German AI collaboration, but they diverge on how to manage AI’s challenges—particularly its trustworthiness, integration into legacy systems, and the realistic expectations of its economic impact. Disagreements focus on implementation mechanisms (data spaces vs testing hubs vs policy frameworks), the perceived difficulty of AI adoption, and the optimism versus caution about AI’s outcomes.

Moderate disagreement: while there is strong consensus on the strategic importance of cooperation, the varied viewpoints on technical, governance, and economic aspects could slow coordinated action unless reconciled through joint working groups and clear implementation roadmaps.

Partial Agreements
All speakers concur that a structured Indo‑German collaboration framework is essential, but they differ on the primary vehicle: Anandi and Georg stress the IA Pact and policy implementation, Upadhyay emphasizes existing MoU activities, Sinemus highlights democratic‑value‑based cooperation, and Kuhn points to the Fraunhofer research alliance as the technical backbone [1][54-55][205][274-275][85-87].
Speakers: Anandi Iyer, Georg Enzweiler, Dr. Rajkumar Upadhyay, Prof. Dr. Kristina Sinemus, Dr. Thomas Kuhn
MOU and IA Pact enable Indo‑German AI partnership (Anandi Iyer)
IA Pact as implementation‑driven collaboration across sectors (Georg Enzweiler)
Existing MOU with Fraunhofer and desire to deepen partnership (Dr. Rajkumar Upadhyay)
Shared democratic values should guide joint AI agenda (Prof. Dr. Kristina Sinemus)
Fraunhofer’s alliance of 30 institutes provides research infrastructure (Dr. Thomas Kuhn)
All CEOs acknowledge the importance of AI for business performance, yet they diverge on how to manage client concerns: Sindhu stresses governance‑level explainability, Dattatri focuses on trustworthiness during engineering shifts, Prashant points to efficiency gains, while Anshuman downplays AI’s difficulty and stresses integration [321-324][300-307][339-342][367-371].
Speakers: Sindhu Gangadharan, Dattatri Salagame, Prashant Doreswamy, Anshuman Awasthi
Explainability, fairness and compliance are required for responsible AI (Sindhu Gangadharan)
Need to ensure trustworthy AI amid paradigm shift (Dattatri Salagame)
AI boosts R&D efficiency, product quality and fraud detection (Prashant Doreswamy)
AI is not a challenge; focus on integration with legacy systems (Anshuman Awasthi)
Takeaways
Key takeaways
The newly signed India‑Germany IA Pact and existing MoU provide a formal framework for implementation‑driven AI collaboration across government, industry, research, and skill development.
Both countries view AI as a strategic driver for economic growth, sustainability, and social good, emphasizing trustworthy, explainable, and responsible AI.
Key sectors identified for joint work include smart manufacturing, agriculture, healthcare, cybersecurity, quantum communications, and telecom fraud detection.
Germany brings expertise in precision engineering, AI quality testing, data‑space architectures, and substantial funding (≈€60 bn) for AI startups and labs; India contributes scale, a large AI talent pool, and rapid deployment capacity.
Fraunhofer’s network of 30+ institutes and its data‑space platform are positioned as core technical enablers for industrial AI and cross‑company data sharing.
Industry leaders (Bosch, SAP, Mercedes‑Benz, Averior/Continental) stress the need to manage the paradigm shift in engineering, ensure trustworthiness, and address client anxieties through transparency and compliance.
A recurring theme is the balance between innovation speed and safeguarding democratic values, fairness, and inclusivity.
Resolutions and action items
Continue dialogue and co‑creation between Indian agencies (e.g., CDOT, MBRDI) and German research institutes (Fraunhofer, AI Innovation Lab, AI Quality & Testing Hub).
Organise follow‑up visits for German delegates to Indian research sites (e.g., CDOT labs, MBRDI) to deepen mutual understanding.
Develop joint standards for smart manufacturing data flows, AI trustworthiness metrics, and cross‑border data‑space governance.
Leverage the IA Pact to launch pilot projects in agriculture (early disease detection), healthcare (diagnostic decision support), and cybersecurity (AI‑driven fraud detection).
Explore joint research in quantum communication and 6G development, with India’s National Quantum Communication Hub and German partners.
Encourage Indian startups to access German AI funding mechanisms and German SMEs to tap into India’s AI talent and GPU resources.
Unresolved issues
Specific mechanisms for ensuring inclusive AI benefits and preventing widening inequalities were discussed but not concretised.
Regulatory alignment on data protection, AI liability, and cross‑border data‑space rules remains to be defined.
How to scale trustworthy AI solutions for SMEs without imposing prohibitive costs was raised without a clear solution.
Details on funding allocation, timelines, and governance structures for the proposed pilot projects were not settled.
The extent of German involvement in India’s 6G and quantum roadmaps, and the sharing of sensitive security data, remains unclear.
Suggested compromises
Combine German precision engineering and regulatory rigor with Indian scale and rapid deployment to achieve mutually beneficial standards and solutions.
Adopt a co‑creation model where technology is not simply transferred but jointly developed, addressing both countries’ security and ethical concerns.
Balance economic competitiveness with social responsibility by embedding explainability, fairness, and auditability into AI products, satisfying both market demands and democratic values.
Thought Provoking Comments
AI is predicted to contribute between $5 and $15 trillion to the global GDP by 2030. But there are also questions: How can we ensure that this growth is inclusive? How do we minimize negative effects for people and the planet? What kind of effect would this have on labor markets?
Sets a macro‑economic context while immediately raising ethical and societal concerns, framing the summit’s theme of ‘welfare for all’ and steering the conversation toward inclusive, responsible AI.
His questions prompted subsequent speakers to address trustworthiness, social good, and concrete policy measures, shifting the tone from a purely technological showcase to a debate about AI’s broader impact.
Speaker: Georg Enzweiler
In Fraunhofer we call it augmented intelligence – human intelligence stays at the core. We focus on trustworthy AI, knowledge preservation via a virtual colleague, and data spaces that enable rule‑based, federated training while keeping data private.
Introduces the concept of augmenting rather than replacing human expertise, and presents concrete technical solutions (uncertainty wrappers, data spaces) that address the trust and privacy challenges highlighted earlier.
His emphasis on trustworthiness and data sovereignty sparked later references to data spaces by other participants and deepened the technical discussion, moving it from high‑level benefits to implementation challenges.
Speaker: Dr. Thomas Kuhn
We have built a digital intelligence platform (Sanchar Saathi, sancharsaathi.gov.in) that integrates telecom, banking and police data in real‑time. It can flag a suspicious call in 5 ms and a financial transaction in seconds, preventing fraud at India’s massive scale (10 TB/s data rate).
Provides a vivid, large‑scale example of AI applied to cybersecurity and fraud detection, illustrating India’s capacity to operationalise AI at national scale and highlighting the need for cross‑border data collaboration.
This concrete case shifted the conversation toward real‑world deployments and prompted other CEOs to discuss how their organisations can adopt similar large‑scale, trustworthy AI solutions.
Speaker: Dr. Rajkumar Upadhyay
Technology must serve people, not the other way around. Our ‘AI made in Hessen’ agenda combines innovation with responsibility – we fund 170 startups, support a robotic wheelchair for mobility‑impaired people, and develop AI‑driven early‑disease detection in cardiology.
Articulates a clear policy principle (people‑first AI) and backs it with tangible funding programmes and use‑case examples, linking governmental strategy to societal outcomes.
Her people‑centric framing reinforced the earlier ethical concerns, encouraging panelists to discuss how their products (e.g., SAP, Bosch, Mercedes) can embed fairness, explainability and social value.
Speaker: Prof. Dr. Kristina Sinemus
SAP’s responsibility is to embed AI across core business processes while guaranteeing explainability, transparency, fairness and auditability – especially as autonomous workflows become more common.
Highlights the practical challenge of integrating AI into entrenched enterprise systems and stresses governance mechanisms needed to maintain trust, echoing earlier calls for trustworthy AI.
Her focus on explainability prompted other CEOs (Bosch, Mercedes) to acknowledge the anxiety around paradigm shifts in engineering and the need for responsible deployment.
Speaker: Sindhu Gangadharan
We are navigating a new business model and a paradigm shift in software engineering using AI. While we are excited to launch AI‑centric products, we are also anxious about how AI will disrupt long‑held beliefs of how we build software.
Openly admits both enthusiasm and anxiety about AI‑driven transformation, bringing a balanced perspective that validates concerns raised by other speakers.
His admission of anxiety created a space for the panel to discuss cultural and organisational challenges, leading to deeper dialogue on change management and workforce reskilling.
Speaker: Dattatri Salagame (Bosch)
AI is not a challenge for us – we already integrated it into cars in 2019 and now we are using it to tighten operations. The technology itself is not the obstacle; it’s how we apply it.
Contrasts with Bosch’s expressed anxiety, presenting a counter‑narrative that AI adoption can be seamless when embedded early, thereby highlighting differing maturity levels among companies.
His confident stance prompted a brief comparative reflection on readiness across firms and underscored the importance of early adoption, influencing the subsequent rapid‑fire round.
Speaker: Anshuman Awasthi (Mercedes‑Benz)
Overall Assessment

The discussion was steered from a ceremonial opening toward a nuanced debate on responsible AI by a series of pivotal remarks. Georg Enzweiler’s macro‑level concerns set the ethical agenda; Thomas Kuhn supplied the technical backbone of trust and data sovereignty; Upadhyay illustrated large‑scale Indian implementations; Sinemus reinforced a people‑first policy vision; Sindhu and Salagame articulated the practical governance and cultural challenges within enterprises; and Awasthi’s confident counterpoint highlighted divergent maturity levels. Together, these comments shifted the conversation from showcasing capabilities to interrogating how AI can be trustworthy, inclusive, and socially beneficial, shaping a dialogue that balanced ambition with responsibility.

Follow-up Questions
How can we ensure that AI growth is inclusive and minimizes negative effects for people and the planet?
Addressing inclusivity and environmental impact is crucial for sustainable AI deployment and aligns with the summit’s motto of welfare for all.
Speaker: Georg Enzweiler
What kind of effect will AI have on labor markets?
Understanding labor market impacts is essential for policy making, workforce reskilling, and mitigating potential job displacement.
Speaker: Georg Enzweiler
How can we achieve trustworthy AI responses, especially in safety‑relevant environments such as medical diagnostics and traffic?
Trustworthiness determines whether AI can be safely adopted in critical sectors; developing uncertainty metrics and validation methods is a research priority.
Speaker: Dr. Thomas Kuhn
How can corporate knowledge be preserved when senior employees retire, using AI‑based virtual colleagues?
Knowledge loss threatens SME competitiveness; AI‑driven knowledge capture could sustain expertise across generations.
Speaker: Dr. Thomas Kuhn
How can data spaces enable secure, rule‑based sharing of sensitive company data for AI training?
Data governance and privacy are barriers to cross‑company AI collaboration; data spaces could provide a technical and legal framework.
Speaker: Dr. Thomas Kuhn
What types of cyber‑fraud and telecom fraud occur in Germany, and how do they compare with India’s experience?
Sharing fraud patterns can inform joint AI‑driven detection systems and improve cross‑border security.
Speaker: Dr. Rajkumar Upadhyay
How can India and Germany collaborate on AI for smart manufacturing standards, cross‑border industrial data flows, and energy‑efficiency safeguards?
Joint standards and data‑exchange mechanisms are needed to scale AI‑enabled Industry 4.0 across both economies.
Speaker: Dr. Rajkumar Upadhyay
How can India and Germany work together on AI‑driven agriculture to boost productivity, yield, and farmer income?
Agriculture is a key economic pillar for both countries; AI can address sustainability and food‑security challenges.
Speaker: Dr. Rajkumar Upadhyay
How can India and Germany cooperate on AI for cybersecurity, given the massive data rates (e.g., 10 TB/s) and real‑time threat detection needs?
Protecting critical infrastructure requires AI solutions that can process high‑volume data while preserving privacy.
Speaker: Dr. Rajkumar Upadhyay
How can India and Germany jointly develop quantum‑communication technologies and prepare for post‑quantum encryption challenges?
Quantum‑safe communication is a strategic priority; collaboration can accelerate research and standardisation.
Speaker: Dr. Rajkumar Upadhyay
How can AI be used at scale to detect and block telecom spoof calls and other telecom fraud within milliseconds?
Real‑time AI‑based fraud detection can protect billions of users and financial transactions.
Speaker: Dr. Rajkumar Upadhyay
How can AI‑driven financial risk indicators (FRI) be enhanced to prevent fraudulent money transfers before they occur?
Early detection of financial fraud can reduce losses and increase trust in digital payment ecosystems.
Speaker: Dr. Rajkumar Upadhyay
How should AI be regulated to balance economic development with social cohesion, fairness, and trust?
Effective regulation is needed to ensure AI benefits are widely shared and do not exacerbate inequality.
Speaker: Prof. Dr. Kristina Sinemus
How can bridges be built between India’s scale and Germany’s safeguards (regulation, data protection, quality assurance) in AI deployment?
Combining India’s market size with German standards could create a robust, trustworthy AI ecosystem.
Speaker: Prof. Dr. Kristina Sinemus
What keeps CEOs awake at night regarding AI integration and transformation of their businesses?
Identifying executive concerns helps focus research and support on the most critical challenges for industry adoption.
Speaker: Anandi Iyer (directed to CEOs Dattatri Salagame, Sindhu Gangadharan, Prashant Doreswamy, Anshuman Awasthi)
How does SAP plan to responsibly roll out AI innovations to its captive client base while addressing client anxieties about change?
Understanding SAP’s approach to responsible AI deployment can inform best practices for large enterprise software providers.
Speaker: Anandi Iyer (to Sindhu Gangadharan)
Is it challenging to embed AI into standardized, legacy manufacturing operations at Mercedes‑Benz, and how is this being addressed?
Legacy systems are common in manufacturing; insights into overcoming integration hurdles are valuable for broader industry uptake.
Speaker: Anandi Iyer (to Anshuman Awasthi)
Is the heightened German interest in India offensive or defensive, and what are its strategic implications?
Clarifying motivations can shape the nature of bilateral collaborations and address geopolitical sensitivities.
Speaker: Anandi Iyer (to Dr. Thomas Kuhn)
How does the demographic dividend, inclusion, productivity, skilling, and reskilling factor into SAP’s vision for India, especially regarding campus expansion and talent ratios?
Linking talent development to AI strategy is essential for sustaining long‑term innovation capacity.
Speaker: Anandi Iyer (to Sindhu Gangadharan)

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Empowering India & the Global South Through AI Literacy


Session at a glance – Summary, keypoints, and speakers overview

Summary

The panel opened by highlighting the urgency of AI literacy for every student, parent and child in India as a foundation for active participation in AI development [1][4-5]. Moderator Bhanu framed the dialogue as a forward-looking effort to shape futures through AI literacy, noting that AI is already embedded in phones, homes and classrooms and that the debate now is how to use it productively [14-18].


Shabana explained that AI can transform education by delivering personalized learning (custom lessons, tailored pathways and individualized feedback) that mitigates the high student-teacher ratios common in the Global South [30-35][36-38]. For teachers, AI serves as an assistant that streamlines lesson-plan creation, diagnostic testing and assessment grading, thereby raising productivity and pedagogical quality [40-43]. Multilingual voice capabilities further lower barriers for underserved learners and enable parents to engage more meaningfully with their children’s education [45-47]. By aggregating data from assessments, attendance and programs, AI can generate actionable insights such as dropout risk and resource planning, underscoring the need for AI literacy across all stakeholder groups [48-50].


Tanushree illustrated the impact on learners with two government-school students: Shraddha moved from using AI for fun to cross-checking difficult subjects and gaining confidence, while Poonam shifted from viewing AI as a buzzword to using it for project ideas, reflecting a broader pattern of curiosity turning into confidence, language support, gap reduction and ethical awareness [74-80][82-86].


Chitra described teachers’ spectrum of awareness, from fear to hope, and how the AI Samarth curriculum builds both technical skill sets and a positive mindset, creating purposeful AI integration in classrooms [112-124][130-136]. She emphasized that teacher training must provide an emotionally safe environment, noting that the cascade model has already fostered polite, reflective classroom interactions and strengthened teachers’ confidence to translate AI literacy to students [198-207][210-212].


Krishnan concluded that the summit has begun to democratize AI, citing the Indian government’s policy to teach AI from class three onward and urging that AI education span all disciplines to prepare the next generation, while warning that AI is not a silver bullet but a tool that requires inclusive, responsible experimentation [229-236][242-246]. The panel agreed that coordinated AI-literacy programs are essential to ensure equitable, ethical and effective AI use in Indian and Global South education systems [155-158].


Keypoints

Major discussion points


AI literacy is essential for every education stakeholder – AI can personalize learning for students, assist teachers in lesson-planning and assessment, and lower barriers to quality education for underserved groups, while education bodies can use AI-driven data insights for planning and risk-identification[29-38][40-46].


Ground-level programmes (e.g., AI Samarth) are turning curiosity into confidence – Real-world stories from government schools in Odisha show students using AI as a learning companion, gaining confidence, bridging language gaps, reducing learning gaps, and receiving ethics training[70-86].


Teacher training must balance hope and fear and build confidence – Teachers’ preparedness involves both awareness of AI tools and their emotional mindset; programmes that create a safe, supportive environment help teachers move from anxiety to purposeful, confident use of AI in the classroom[112-124][130-138].


A structured AI-literacy curriculum is needed to embed foundational concepts and critical thinking – The AI Samarth curriculum focuses on four pillars: understanding AI and its applications, basic technical concepts (data, vision, NLP), societal and environmental impacts, and practical prompting skills; critical thinking is reinforced by teaching verification of AI outputs[169-190][193-194].


Policy and a broader inclusive vision are crucial for scaling AI education – National policy to teach AI from class 3 onward, combined with a push for AI awareness across all disciplines and regions of the Global South, is seen as a way to democratize AI and ensure no one is left behind[229-247].


Overall purpose / goal of the discussion


The panel aimed to articulate why AI literacy must become a universal foundation in Indian and Global-South education, to share concrete experiences from large-scale programmes, to outline how curricula and teacher-training can foster responsible, equitable AI use, and to align these efforts with national policy and a vision of inclusive, future-ready learning.


Overall tone and its evolution


– The conversation opens with a formal, optimistic introduction emphasizing the importance of AI literacy.


– It then shifts to anecdotal, hopeful storytelling (students’ experiences, teacher empowerment) that underscores tangible impact.


– Mid-discussion, speakers acknowledge challenges (fear, over-reliance, and the need for critical thinking), introducing a more cautious, reflective tone.


– The closing remarks adopt an inclusive, rally-calling tone, stressing democratization, policy support, and collective responsibility while maintaining optimism about AI’s potential.


Speakers

Shri S. Krishnan – Secretary, Ministry of Electronics and Information Technology (MeitY), Government of India. Expertise: technology policy, AI governance. [S1][S2]


Bhanu Potta – Moderator of the AI literacy panel; associated with Central Square Foundation. Expertise: AI literacy, education.


Tanushree Narain Sharma – Co-founder and CEO of Transform Schools. Expertise: education innovation, AI-enabled learning programs (AI Samarth).


Dr. Shabana Bhattacharya – Senior Project Scientist, Wadhwani School of Data Science and AI, IIT Madras; post-PhD fellow. Expertise: AI research, ed-tech, AI curriculum design.


Chitra Ravi – Founder of Chrysalis, an education organization; experienced in ICT curriculum development. Expertise: teacher training, AI integration in schools.


Ramya Venkataraman – Associated with CENTA; former McKinsey consultant who bootstrapped the education practice for the region. Expertise: education strategy, AI in education, Global South. [S9]


Speaker 1 – Representative of Central Square Foundation, delivering the opening remarks and outlining the AI literacy initiative. Expertise: philanthropy in school education, AI literacy programs.


Additional speakers:


Gauri – Thanked by the moderator immediately after the opening remarks; apparently the Speaker 1 who delivered them.


Full session report – Comprehensive analysis and detailed insights

The session opened with Speaker 1 thanking the IndiaAI Mission and summit for foregrounding AI literacy and introducing the Central Square Foundation (CSF), a philanthropy focused on school education and ed-tech in India. CSF has launched a large-scale AI-literacy programme called AI Samarth, developed with the Wadhwani School of AI, with the explicit aim of equipping every student, parent and child in India to become active contributors to AI rather than passive users[1-7].


Moderator Bhanu Potta framed the dialogue as a forward-looking effort to “build futures through AI literacy for India and the Global South”. She stressed that AI is already embedded in phones, homes and classrooms, declaring that the debate over whether AI should be in schools is now moot and that the focus must shift to how it can be used productively[14-18]. She also highlighted the gender balance of the panel, noting that previous panels had been male-dominated and that she was privileged to moderate four distinguished women in education[10-13].


Dr Shabana Bhattacharya examined the “transformative bets” AI can place on education by analysing its impact on four stakeholder groups: learners, teachers, parents and education organisations. For learners, she argued that AI’s greatest value lies in personalisation – delivering customised lessons, tailored learning pathways, individualised assessments and feedback that can compensate for the high student-teacher ratios typical of the Global South[30-37]. Regarding teachers, she maintained that AI will act as an assistant, enhancing productivity and pedagogy through tools that help design lesson plans, run diagnostic tests and generate and evaluate assessments[40-44]. She further noted that AI’s voice-based and multilingual capabilities lower barriers to quality content for underserved classes and enable parents to engage more meaningfully with their children’s learning[45-47]. Finally, she described how AI can aggregate data from assessments, attendance and programme implementation to produce actionable insights such as dropout-risk identification and resource planning, underscoring the need for AI literacy across all stakeholder groups[48-50].
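The last point, aggregating assessment, attendance and programme data into actionable insight such as dropout risk, can be made concrete with a deliberately small sketch. Nothing below comes from the session or from any real deployment: the field names, weights and the 0.5 threshold are illustrative assumptions only.

```python
# Toy illustration (not from the session): combining attendance and
# assessment records into a simple dropout-risk flag. All field names,
# weights and the threshold are hypothetical.

def dropout_risk(attendance_rate, avg_score, completed_programs):
    """Return a risk score in [0, 1]; higher means more at risk."""
    # Low attendance and low scores both push the score up;
    # participation in support programmes pulls it down slightly.
    risk = 0.6 * (1 - attendance_rate) + 0.4 * (1 - avg_score / 100)
    risk -= 0.05 * completed_programs
    return max(0.0, min(1.0, risk))

def flag_students(records, threshold=0.5):
    """Return IDs of students whose toy risk score crosses the threshold."""
    return [
        r["id"]
        for r in records
        if dropout_risk(r["attendance"], r["score"], r["programs"]) >= threshold
    ]

records = [
    {"id": "S1", "attendance": 0.95, "score": 82, "programs": 1},
    {"id": "S2", "attendance": 0.40, "score": 35, "programs": 0},
]
print(flag_students(records))  # S2's low attendance and scores flag it
```

A real system would of course learn such weights from data and validate them carefully; the sketch only shows the shape of the aggregation the panel describes.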


These points reinforced the panel’s shared conviction that AI literacy must be universal. Speaker 1 positioned AI literacy as a foundation for active participation[4-5], Dr Shabana highlighted that AI is already present in classrooms and that responsible use requires understanding its mechanisms[29-38][50-51], and Shri Krishnan later echoed the policy imperative that no one be left behind, citing the Indian government’s decision to teach AI from class 3 onward[229-244].


Bhanu welcomed Shri S. Krishnan with a brief “Namaste, Krishnan ji”; Krishnan apologized for “photobombing” the session and expressed pleasure at joining the discussion[228-229].


Tanushree Narain Sharma illustrated the impact of AI literacy on learners through two concrete stories from government schools in Odisha. Shraddha, a class-nine girl, described moving from using AI “just for fun” to employing it as a companion for cross-checking difficult subjects and seeking teacher clarification, a shift she characterised as “curiosity converting into confidence”[74-80]. Poonam, another class-nine student, recounted how AI moved from being a buzzword to a source of project ideas and preparation material, thereby reducing learning gaps, fostering ethical awareness about bias and responsible use, and supporting language development[82-86]. Tanushree summarised four overarching outcomes of the AI Samarth programme: curiosity → confidence, language support, reduction of learning gaps, and teaching ethics/bias[82-86].


Bhanu used these anecdotes to highlight three themes: the stakeholder groups touched by AI, the equity goal of reaching the “last child”, and the necessity of AI education for all actors involved[51]. She then turned to the perspective of government-school teachers, asking how they were responding to AI in classrooms and homes[52-55].


Chitra Ravi responded by describing teachers’ preparedness as a spectrum of awareness and sentiment. On the awareness axis, teachers range from those who have never heard the term to those who are already “dabbling” with AI on platforms such as WhatsApp[115-117]. On the sentiment axis, emotions swing between hope (which can lead to over-stimulation) and fear (which can cause resistance)[121-124]. The AI Samarth curriculum, built with CSF and the Wadhwani School, seeks to create an equilibrium by providing concrete skill-sets and fostering a positive mindset[125-138]. Chitra warned of a tendency toward over-utilisation, where teachers generate lesson plans with large language models (LLMs) without knowing how to validate them[144-145]. She also noted that the programme has already reached roughly 200,000 students and is on track to impact nine million, illustrating the scale of the effort[107-110][112-114].


Returning to curriculum design, Bhanu asked Dr Shabana to outline how the curriculum can ensure ethical judgement and critical thinking among children and teachers[166-168]. Dr Shabana explained that the AI Samarth curriculum rests on four pillars: (1) understanding what AI is and recognising its everyday applications; (2) grasping basic technical concepts such as data, computer vision and natural-language processing; (3) appreciating societal and environmental impacts, including bias, fairness and the computational footprint; and (4) developing practical prompting skills for generative tools like ChatGPT[169-179][180-190]. She stressed that critical thinking must be embedded by teaching learners to double-check AI outputs against reliable sources and to avoid blind reliance[193-194], a point later reinforced by Bhanu’s suggestion that students first attempt an answer independently, then use AI to review and improve it-a “critical-thinking loop”[194-195].
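The “attempt first, then review” loop can be sketched generically. This is an illustration only, not anything shown at the session; the reviewer is injected as a plain callable, so no particular AI tool or API is assumed.

```python
# Toy sketch of the "critical-thinking loop" (not from the session):
# the student answers first, and only then is an AI reviewer consulted.
# `reviewer` is any callable; here a stub stands in for a real AI tool.

def critical_thinking_loop(question, student_answer, reviewer):
    """Return the student's answer plus the reviewer's feedback,
    keeping the independent attempt on record."""
    feedback = reviewer(question, student_answer)
    return {
        "question": question,
        "first_attempt": student_answer,  # preserved, not overwritten
        "ai_feedback": feedback,
    }

def stub_reviewer(question, answer):
    # Placeholder for an AI tool; a real deployment would call one here
    # and would also prompt the student to verify against reliable sources.
    return f"Review of '{answer}': check this against a reliable source."

result = critical_thinking_loop("What is 7 x 8?", "56", stub_reviewer)
print(result["first_attempt"])  # the independent attempt is kept
```

The point of the structure is simply ordering: the independent attempt exists before any AI output is seen, which is the safeguard against blind reliance that the panelists describe.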


Chitra expanded on how teacher training operationalises these ideas. She argued that a safe, emotionally supportive environment is essential for teachers to move from anxiety to confidence, noting that many teachers now report increased politeness in classroom interactions, a subtle influence she attributes to LLM feedback that praises “brilliant questions”[198-207]. The cascade model-where trained teachers subsequently train their peers and students-has helped embed confidence and agency, allowing teachers to translate AI literacy into classroom practice while remaining cautious about the novelty of the skill set[210-212][216-222].


In his concluding remarks, Shri S. Krishnan positioned the summit as a historic step toward democratising AI. He praised the effort to bring diverse concerns into the room and reiterated that inclusive AI education is vital for enabling citizens to participate in and benefit from technology[228-236]. While acknowledging that AI is not a silver bullet and admitting “I’m not an expert on education”, he urged experimentation and stressed that AI should augment, not replace, teachers and other human resources[242-244]. He highlighted the Indian policy mandate to introduce AI from class 3 onward and called for AI education across all disciplines, arguing that the next generation must understand AI’s relevance to fields ranging from art history to engineering[242-246].


Key challenges identified


* Teachers may produce AI-generated lesson plans without proper validation, highlighting the need for systematic verification mechanisms[144-145].


* Scaling the AI Samarth programme to reach the “last child” confronts infrastructure gaps, as roughly 40% of classrooms still lack computer labs[103-105].


* Aligning ambitious policy goals with on-the-ground capacity requires coordinated resource allocation and governance structures[229-244].


Key take-aways


* AI literacy must be universal, providing foundational concepts before role-specific skilling[4-5][170-179].


* AI-driven personalisation, multilingual delivery and data-analytics can act as equalisers, expanding quality education to low-resource and remote schools[30-38][45-47][70-86].


* Teachers experience a hope-fear spectrum; effective programmes create emotionally safe training environments that build confidence and agency while preserving the teacher’s facilitator role[112-124][130-138].


* A structured curriculum should cover AI fundamentals, data roles, societal/environmental impacts and practical prompting, with embedded ethics, bias awareness and critical-thinking loops[169-190][193-194].


* Large-scale initiatives such as AI Samarth (targeting nine million students) and national policy mandating AI education from class 3 are essential for inclusive, future-ready learning across the Global South[107-110][228-244].


* Real-world stories, like those of Shraddha and Poonam, demonstrate how AI can transform curiosity into confidence, support language development, reduce learning gaps and teach ethics/bias mitigation[74-80][82-86].


Unresolved issues for further investigation


* Mechanisms for systematic validation of AI-generated educational content against curriculum standards and pedagogical quality[144-145].


* Strategies to prevent over-reliance on AI for tasks such as lesson-plan creation without adequate teacher verification[193-194].


* Resource allocation and infrastructure solutions needed to reach the “last child” in remote or under-connected regions, especially given that many schools still lack basic computer facilities[103-105].


* Governance models for ongoing ethics oversight, bias mitigation and monitoring of AI’s environmental impact in educational settings[180-186].


* Effective policies and accountability frameworks to ensure AI literacy reaches all intended stakeholders, aligning top-down mandates with grassroots implementation[229-244].


Follow-up questions


* How do teachers across the broader Global South perceive AI adoption?


* What is the measurable impact of AI Samarth on learning outcomes and gap reduction?


* How effective are multilingual AI tools for language learning?


* What safeguards can prevent over-utilisation of AI-generated materials?


* Which pedagogical frameworks best protect critical thinking?


* What are the environmental implications of scaling AI in schools?


* How does AI-mediated politeness affect classroom dynamics?


These insights collectively map a roadmap for scaling AI literacy in India and the Global South: align curriculum design with universal foundations and role-specific skilling; invest in teacher-training that balances hope and fear while fostering safe, ethical practice; leverage AI’s personalisation and multilingual strengths to reach underserved learners; and embed robust validation, governance and policy mechanisms to ensure equitable, responsible adoption, thereby advancing the summit’s overarching goal of inclusive AI education for the Global South.


Session transcript – Complete transcript of the session
Speaker 1

Good evening everyone, it is indeed a pleasure to be here, and thank you to the IndiaAI Mission and the IndiaAI Summit for acknowledging this as an important topic to include within the panel discussions scheduled during the summit. I also would like to thank the panel for the summit, so maybe we should just close with a bang, and I see Bhanu sitting with five women, so good luck to them. But I am here on behalf of the Central Square Foundation. Central Square Foundation is a philanthropy working in school education in India, and one of the focus areas for us is edtech. And now that edtech is getting powered with AI, it becomes very important for us to ensure that we impart AI literacy to every student, parent, and child in India, so they are able to become active contributors to AI rather than just being passive recipients.

And that is the fundamental premise of this discussion. CSF has been working on a program, a large-scale program on AI literacy called AI Samarth, and this is the first program with a curriculum for AI literacy that has been built in partnership with the Wadhwani School of AI, which is also represented on this panel. So with that I will hand over to Bhanu to take the proceedings forward. Thank you.

Bhanu Potta

Thank you, Gauri. Good evening everybody. Like Gauri said, we are in the last few panels of the summit, and I have the distinguished privilege of moderating four stalwart women in the education space. I don’t think there’s any other panel like this; all other panels were the other way around, where there were a lot of men and very few women. So it’s my privilege to moderate all of you. I will try to keep it as colloquial as possible, but you can also jump in in between with questions and stuff. So today’s dialogue is really about building futures through AI literacy for India and the Global South. AI is already in our hands. It’s in our phones, it is in our homes, it is in our classrooms, whether we like it or not.

The conversation today is really about what features, futures do we want to build of the… tool which is coming into our hands, right? I think the discussion about should AI be in the classroom or should AI be a learning tool is already past. It is going to be there and it is for us to figure out how to use it productively, right? So with that as a backdrop, we will move forward into the conversation and quickly talk about each one of you here. Shabana, senior project scientist from the Wadhwani School of Data Science and AI, IIT Madras. Ramya, we all know Ramya, CENTA and a lot of other things as well before that. Tanushree, co-founder and CEO of Transform Schools, and Chitra, founder of Chrysalis, right?

I would actually start the conversation with Shabana, putting you on the spot, right? So your journey in computer science really started with the spark your 11th grade computer science teacher lit in you in the classroom, right? In a rural school in the backwaters of Kerala, right? Nice place to be there. But from that time to now, the role which you play as a post-PhD fellow in IIT Madras, working in the ed tech space with all the things you do in AI, both on the usage side, design side, and the regulatory side, I would really like to ask you what, according to you, are the transformative bets AI can bring into the classroom?

Dr. Shabana Bhattacharya

Okay. So if you want to analyze the transformative bets, the major transformation that AI can bring into the classroom, I feel that we should look at the impact that AI has on the key stakeholders in education, namely the students, the teachers, parents, and the education bodies and organizations at large. So if you look at learners, I think the biggest value add is in terms of personalization, right? Because research has already shown that one-on-one human tutoring results in better learning gains as compared to a traditional classroom-based learning. But we know that in India and also across the global south, the student-teacher ratios are quite large, which actually prevents personalization from happening. So this was the case before, but with AI in picture, this can change, and in fact this is changing.

Because AI can help students with personalized learning in terms of customized lessons that is tailored according to the student’s specific conceptual gaps or misconceptions. Or AI can also suggest tailored learning pathways. It can also provide individualized assessments and also tailored feedback pertaining to the performance of the students in the assessments. So that way AI can play a big role in adding quality to the learning process of the students, which definitely helps in bettering.

Now coming to teachers. Again, we always say that AI is not going to replace teachers, but it is going to be an assistant, one that can improve teaching productivity as well as the quality of pedagogy. So, for example, teachers can use AI-based tools to create better lesson plans, to run diagnostic tests that reveal the misconceptions or learning gaps of students in the class, and to tailor instruction based on that. It can help in producing the required kinds of assessments and even evaluating them. So that way, the value add is in terms of improved productivity and also informed pedagogy.

And another significant transformation AI brings is that it lowers the barrier to quality education, especially for underserved communities. With the voice-based and multilingual capabilities of AI, quality learning content is now available to learners in regional languages, which in fact affects not only teachers and learners but also parents, in the sense that parents can now have a more engaged contribution to their child's learning. So it opens a lot of opportunities. And now coming to educational bodies or organizations at large: we know that data is collected across multiple points, including assessments, attendance systems, program implementations, and so on. With AI, it is possible to collect all of these data points, combine and analyze them, and produce actionable insights, such as identifying students at high risk of dropping out, or planning resources better.
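The data-combination idea described above can be sketched very simply: signals from separate streams (attendance, assessments) are joined per student and used to flag risk. The field names and thresholds below are illustrative assumptions, not part of any real AI Samarth or school-system pipeline.

```python
# Minimal sketch: combine attendance and assessment data points to flag
# students who may be at risk of dropping out. Thresholds are hypothetical.

def flag_at_risk(students, attendance_min=0.75, score_min=40.0):
    """Return names of students whose combined signals suggest dropout risk."""
    at_risk = []
    for s in students:
        low_attendance = s["attendance_rate"] < attendance_min
        low_scores = s["avg_score"] < score_min
        # Flag only when both independent data streams point the same way.
        if low_attendance and low_scores:
            at_risk.append(s["name"])
    return at_risk

records = [
    {"name": "A", "attendance_rate": 0.92, "avg_score": 71.0},
    {"name": "B", "attendance_rate": 0.60, "avg_score": 35.0},
    {"name": "C", "attendance_rate": 0.70, "avg_score": 55.0},
]
print(flag_at_risk(records))  # ['B']
```

A real system would of course use far richer data and a trained model; the point here is only that combining streams yields an actionable signal no single stream provides.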

So that way, AI is already transforming education in a big way, and it has a lot of potential to play an even greater role in education as a whole. Now, this emphasizes the need for AI literacy, because AI is already a part of the classroom and all the stakeholders are already interacting with it. So there is a need for everybody, right? Not just the students or teachers, but even parents, and people generally, to have a better understanding of AI concepts and how these tools work, in order to use them responsibly.

Bhanu Potta

Thank you. So I hear three things. You said stakeholders: student, teacher, and maybe even the parent as a teacher. The second thing you said is getting AI to the last child in our population, which is equity and accessibility. And the third thing you said is that for all the use cases you talked about to be realized, there has to be AI education in whatever form fits. That brings us to the stakeholders, right? So Tanushree, over the last seven-plus years you have impacted over 30 million students, right? So my question to you is, from the government school system's point of view, when you go on the ground to students, especially in the government schools primarily, right?

How are they responding to AI in and around them in the classrooms, be it at homes or wherever they’re interacting? What are you seeing on the ground?

Tanushree Narain Sharma

Thanks. Thanks for that question. And thank you for inviting Transform Schools onto this panel. So I think in the past seven years, with our experience of working with government schools, and as an organization we work on improving learning and life outcomes, with the program AI Samarth we have seen one thing emerging: that a child is able to learn, and to keep learning. And I have two great examples to share from our home state, which is Odisha. They're both from the class nine grades.

One, there’s a girl whose name is Shraddha. When we asked her how her experience was in a class where she's getting these lessons from AI Samarth, she said: I used to use AI tools just in a fun way. But now I'm able to cross-check my difficult subjects or topics, and if I'm still facing any difficulties, I'm able to go back to my teachers and check again. So she's using AI as a companion as well. What I see here is that curiosity is being converted into confidence. That is one big thing. There's another child whose name is Poonam. She said that when we saw AI around us, it was a buzzword, everybody was talking about it, but it was all entertainment: you can gamify it, you can make a cartoon movie with it, all of that. But the way I see AI now is that I can get my project ideas from it, I can actually do my preparation with it. And it is very heartening to see that, because they are first-generation learners in a government school, so it's a great pleasure to see that happening there. And with AI Samarth we are reaching, we are talking about, 0.9 million students in all. And what we have seen as a pattern overall is three things.

One, which I mentioned earlier, is curiosity converting into confidence. Second, that AI is able to support learning, improving language skills and reducing the learning gaps students have. And third, which is the most important thing AI Samarth does, is teaching them about the ethics, the biases, and therefore a responsible engagement. So that's the overall pattern.

Bhanu Potta

Thanks, Tanushree. It’s lovely that you talk about the story of Shraddha and Poonam.

Tanushree Narain Sharma

Thank you.

Bhanu Potta

the journey from curiosity to confidence. And along the journey, being able to use digital tools in the form of AI and exercise their agency as a learner and not lose it.

Tanushree Narain Sharma

That’s correct.

Bhanu Potta

So that’s a beautiful story. And I think, you know, 0.9 million Shraddhas and Poonams.

Tanushree Narain Sharma

Yes.

Bhanu Potta

Thank you. Upward and onward. And more than that, right? So that moves me to another dimension. Chitra, you founded Chrysalis over 25 years back, and you founded it out of the pain of being a mother of two daughters and their disillusionment with the school system at that point in time.

Chitra Ravi

My disillusionment.

Bhanu Potta

Your disillusionment. They were too young to be disillusioned. And the interesting connection here is that you actually started off as a full-stack, embedded ICT curriculum provider from grade 1 to grade 12 for CBN, and you have seen that whole wave, right? And we still have over 40% of our classrooms that don't have computer labs. And now you're at the start of another wave with AI, right? So I think you're looking at the second wave in your own journey, Chrysalis, you and your team and the others working with you, right? So my question to you is: how are teachers in the government schools and the low-fee private schools in particular, not the middle and upper, but the low-fee private schools, today transacting around integrating AI into their life, work, and fun?

Chitra Ravi

Yeah, thank you, Bhanu. That is a very important question to be thinking about. And I can speak from a pre-AI Samarth and a post-AI Samarth point of view. So I am very grateful to CSF and everybody who has really pushed us into this wave with very purposeful and meaningful work; it's giving us a lot of insight into this community of teachers and students. So if you look at preparedness when we entered, and I think we are now close to 200,000 students who will have been exposed to AI literacy, I look at preparedness from two points of view. One is awareness about AI, the skill sets, whatever it means to handle a tool.

And the other is, I think, the sentiment and mindset that every teacher or every student has, right? So from an awareness point of view, again, there is a spectrum: on one side, I just hear this buzzword and I don't even want to go near it; on the other side, yeah, I'm dabbling with it all the time on my WhatsApp, I know it. And then there's always this confusion around the thin line between what AI provides and what, you know, an ordinary algorithm provides. So I think AI Samarth has really empowered them with certain skill sets that have gotten them to understand what AI is and how they can put it to use. That's from a skill-set point of view.

But for me, I think more interesting is the sentiment and mindset point of view. If you look at that, it's a spectrum between hope and fear. Hope leading to over-stimulation, right? And fear leading to resistance and not really getting there. So I think that's a very important spectrum, and I think a program like AI literacy and Samarth brings in an equilibrium. That's very important because, and I can completely relate to what Tanushree was saying, we see so many children and so many teachers who now tell us: hey, you know what? I thought everything was bad about AI. I was so fearful. I thought it's going to replace me as a teacher.

I now understand that if I hold the agency and I know what is what, it's not rocket science. I think that's the first demystifying thing, right? It's not anything I cannot be conversant with. And we are talking about government teachers, affordable private school teachers, who have found a confidence like she was mentioning. And that has made them approach AI in a very purposeful way, right? They are integrating it. I think the way we built the curriculum, thanks to the Wadhwani school and CSF, and we've also made some meaningful contribution to that curriculum and content development, we are seeing that the use cases we build in have a very, very high relevance to what the teacher is doing, which is very important.

It's not just literacy about what is what, but how it can be purposefully used by the teacher. So that's a big winner for us. The second, when it comes to sentiments: we've heard children, and teachers in particular, tell us, you know, I had a negative approach, and today I walk in with so much positivity. I know this can be engaging; this can be done purposefully. And even that over-utilization, I don't know what to call it, where they would just generate lesson plans because somebody, the coordinator or the educational officer, asked them: they would just go and dabble with the LLM, ChatGPT, and churn out lesson plans, not even knowing what to validate or how to validate it.

And that has also been kind of equalized. So I think it's a beautiful, in my opinion, an equalizer. And of course, over a period of time, it will become a leveler. I look at it from that point of view. There's much more work to be done, but I think it's a beautiful start, with so many positive stories in the field that are giving us a lot of hope. And even for organizations like us, I wouldn't say we were not on that spectrum between hope and fear. It's not only the teachers. We are all in this phase of: is it going to be the thing tomorrow, or what is our position in this world?

And I think this is kind of now getting to some equilibrium. That’s what I would say.

Bhanu Potta

What I hear you say is that tens of thousands of teachers are finding their balance in the classroom with confidence and agency, right? And I think that's a good state to be in. And we have a lot more work to do; India is a big country and the Global South is even bigger, right? That brings me to Tanushree. Sorry, Ramya. So Ramya, your journey really started off with the time you spent at McKinsey, where you actually kind of bootstrapped the education practice for this region. And then, from that horizontal view, you became a falcon and dove into teachers and the problem of teachers, and there came CENTA, right?

So over the past few years, like 11 years of CENTA, teachers from 100-plus countries have engaged with you. And now you're at a place where you're seeing AI come into the mix, right? So I would ask you to be that falcon, fly back out of India and look at the Global South, and talk about what you're seeing and hearing from teachers on AI, not just from India and what you're doing with AI, but also in other countries where nothing like this is happening.

Ramya Venkataraman

I mean, in a remote part of Jharkhand, we've had a teacher saying: I was worried that my students are getting into this world faster than I am, and now, with this literacy, I'm able to catch up with my students. Which is a very nice thing for her to say. But those are some of the perspectives to start with. I don't know if I answered, because I was not sure what I was supposed to do as a falcon coming out.

Bhanu Potta

Thank you, thank you, Ramya. I think good conversations, three things to pick, right? One is the stakeholders being touched, and their journey from curiosity to confidence for some actors, and for a lot of other actors, finding the balance on the hope-and-fear spectrum, right? So I would like to take the conversation a little bit down into the implementation and design of all of what we're working on; that's where the rubber really meets the road, right? So I would go back to Dr. Shabana. Shabana, you have been part of the curriculum design, the curriculum review, and the endless debates about how we should help a rural school student in Odisha understand computer vision, and what metaphor we would use for that. Those were fascinating conversations we had at the time.

But now, looking back at the journey and at what we're hearing from the field, not just AI Samarth, but a lot of other literacy programs that are happening, I would really like you to focus a little on what you would like to see in the design that ensures ethical judgment and critical thinking among all the stakeholders, the child as well as the teacher.

Dr. Shabana Bhattacharya

Okay. So first, I think we should start with the child. When we look at the available AI literacy curricula around the world, we see that many of them deal with foundational AI literacy along with AI skilling. But AI skilling is more about how to practically use AI, which is role- or context-dependent; for example, AI skilling for a software engineer could be different from AI skilling for a teacher. AI literacy, on the other hand, refers to the foundational set of AI concepts that is kind of universal, something everybody should know about. So as part of the AI Samarth curriculum, the Wadhwani school and CSF collaborated to come up with a wonderful curriculum, which I feel is very grounded. It focuses on the key foundational AI concepts that everybody should be aware of, and later you can build on top of it with further skilling tailored to each role. So, to talk about the AI Samarth curriculum, we have four key pillars.

The first is understanding what AI is and what its applications are: looking at everyday applications and identifying the AI component, because we all interact with AI in some form or other without realizing that it is a form of intelligence we are interacting with. So understanding what AI is, is important. The second is having an awareness of some of the key technical aspects: what is data? What is the role of data in training an AI system? And about vision, about NLP. These are the basic technical AI concepts that students, teachers, and really everybody should be aware of.

And then, once we have an idea about the role of data in training AI systems, that naturally leads us to understand the societal impacts, the environmental impacts, et cetera. So, for example, we know about the issues of bias and fairness in AI systems, right? An AI system is trained on data, so depending on the kind of data used to train it, we can have these issues, and they arise whenever the systems are used in practice. So the third pillar is about the societal and environmental impacts, what happens when we require computation at scale. And finally, about how to practically interact with these systems, right?

How to write effective prompts, because almost all of us interact with generative AI tools such as ChatGPT, right? So how do you actually frame an effective prompt to get the desired result? That is also very important. So I feel these cover the basic, fundamental AI concepts that are universal, that everybody should be aware of.
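One common way to teach the "effective prompt" pillar mentioned above is a structured template: state a role, context, task, and output format instead of asking a bare question. The sketch below is purely illustrative; the template fields are assumptions, not a format prescribed by the AI Samarth curriculum.

```python
# Minimal sketch of a structured prompt: role + context + task + format.
# This only builds the text of a prompt; it calls no AI service.

def build_prompt(role, context, task, output_format):
    return (
        f"You are {role}.\n"
        f"Context: {context}\n"
        f"Task: {task}\n"
        f"Answer format: {output_format}"
    )

prompt = build_prompt(
    role="a patient science tutor for a class 9 student",
    context="the student confuses mass and weight",
    task="explain the difference with one everyday example",
    output_format="three short sentences in simple English",
)
print(prompt)
```

The value of the structure is that each field forces the user to make an implicit expectation explicit, which is exactly what distinguishes an effective prompt from a vague one.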

Bhanu Potta

So just one more minute on: how do we, in all of this, safeguard critical thinking?

Dr. Shabana Bhattacharya

Okay, so coming to critical thinking. When we know that an AI system is trained using data, we know that the kind of results we would get for a query depend on the kind of data the system has seen. That should make a student, a teacher, or any user vigilant about using the results from AI tools directly. We have to be careful about double-checking, cross-checking with valid sources, and not trusting the result directly. And the critical thinking part is also about not over-relying on AI tools, and using your judgment to validate the results, to verify, like I said, against verified sources whether the results are valid or not. That is also important.
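The verification habit described above can be sketched as a tiny check-before-accept step. The `trusted_facts` lookup here is an illustrative stand-in for textbooks or other verified sources, not a real fact-checking service.

```python
# Toy sketch of the habit: never accept an AI answer directly;
# compare it with a trusted reference first.

trusted_facts = {
    "boiling point of water at sea level": "100 C",
    "capital of India": "New Delhi",
}

def verify(question, ai_answer):
    """Return (accepted, note) after comparing with a verified source."""
    reference = trusted_facts.get(question)
    if reference is None:
        return False, "no trusted source found; keep double-checking"
    if ai_answer.strip().lower() == reference.lower():
        return True, "matches verified source"
    return False, f"conflicts with verified source ({reference}); do not trust directly"

ok, note = verify("capital of India", "New Delhi")
print(ok, note)   # True, the answer matches the reference

ok, note = verify("boiling point of water at sea level", "90 C")
print(ok, note)   # False, the answer conflicts with the reference
```

The point is the default: an unverifiable answer stays unaccepted, which mirrors the "don't trust the result directly" guidance in the curriculum.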

Bhanu Potta

I think cross-checking all the results is a very important thing to do. And on that, there was also a lot of exercise done across the curriculum to sprinkle in good ways of using AI, where we basically said: it's not about asking the AI for the answer, but actually doing your own exercise, submitting your answer, and then asking the AI to review and improve it. That is the critical-thinking loop we reinforced in multiple places. So I'd move forward a little bit to Chitra now. So Chitra, I would like to take you into a teacher-training scenario, where your team has trained a lot of teachers on AI literacy via the curriculum, and they were then cascading it forward, right?

So at that level, right, what is it that you and your team have done in terms of building confidence for the teachers who are in the training program to carry that back into their classrooms and deliver it to the children?

Chitra Ravi

I think we've always believed that teacher training is more about building an emotionally safe environment, whatever topic we are training teachers on. And with AI, with the hope-and-fear spectrum, this becomes even more relevant. So I think both in the curriculum, the way we've done it, and in the way we facilitate and train the teachers...

Bhanu Potta

Namaste, Krishnan ji. Pleasure to have you here. We welcome Shri Krishnan ji from MeitY. Thank you for joining us today, sir.

Chitra Ravi

So I think we definitely need to look at how the confidence is built. In a light-hearted way, I also want to say that a lot of teachers have now started saying: we are becoming more polite in the classroom. Any idea why? It's nothing to do with the training, but to do with LLMs. Because ChatGPT always tells them, hey, you've asked a brilliant question. So that is becoming contagious, and we've seen that happen with teachers today. That is why I wanted to bring that up: there is a lot of role modeling that happens when there is cascading, right? And I'm simply saying that an LLM cascading also brings that politeness.

I hope we don't become artificially polite, but I'm hoping some of these things rub off on the language of teachers. So I think that hope and confidence is what trainers are modeling, and that is being carried on by the teachers when they cascade to the students. One other thing about the literacy, the Samarth model: I feel the capacity we are building in the teachers is actually strong, because there is the responsibility of the teacher to go back and train other teachers. And see, they are all teachers. When we analyze why teachers resist training, it's because they think of themselves as the deliverers, and then to receive becomes difficult.

So I love this AI Samarth model, the literacy model we have built, thanks to CSF and the whole think tank there: yes, I'm going to learn, and some of us are actually going to translate it to the students. And that rub-off is really, really working well. So I think the teachers are handling this whole thing very gently, because for them it is a new topic. It's not like, I've done math teaching for 25 years and all my children got into IIT. This is a new topic, a new skill, and they are very, very careful when they are doing it.

So there's a lot of deep respect that they have in translating this into their own training, Bhanu. So in many ways this has been an eye-opener: comparing usual teacher training with this AI literacy teacher training is bringing very new insights into how teacher training can happen. So I think that's very, very critical.

Bhanu Potta

Thank you, Chitra. We would now request Shri Krishnan ji, Secretary, MeitY, to address us.

Shri S. Krishnan

I'm sorry to literally photobomb this session; it was not my intention. Ramya is a very old friend, and when she asked if I could stop by, I just thought I would. Primarily, more than anything else, to thank all of you for participating in probably a historic summit, one that finally democratized AI. In that sense, I think what we have managed to do is to actually bring people into the room, and more than even people, we have managed to bring people's concerns into the room. And I think it's very fitting that we are talking about education today, which is so critical in every sense of the word. Partly because, if we have to achieve anything we need to achieve, we need people to do it: people who have the skills, the abilities, and the competencies to actually do what needs to be done with technology, to make sure that it works for them, and to make sure also that they're able to participate and take advantage of it.

The other part of it is, of course, while we are making them capable, to what extent can we use technology? I think it's nobody's case that we have enough teachers or teachers' assistants, or the kind of technology and resources we want for education. If technology helps us to multiply that rather than substitute it, and if technology helps us to enable our teachers to deliver better, nothing like it. And I think those are the kinds of applications we seriously need to look for. The Expo is replete with those examples. Incidentally, it's open tomorrow and it's extended up to 8 pm today, so if you haven't gone and taken a look at it, let me make a pitch.

Please do go take a look. There are a huge number of social applications of what technology can do and how it can work for people; all of that would interest many of you. But more than anything else, I'm not an expert on education, and I would be the last one to suggest that this is a silver bullet that will solve all the problems. It may; it may not. But I think we need to experiment with it. More than anything else, one thing I do know is that students of all ages and in all disciplines need to be aware of the potential of the technology and see what it can do for them, and that in itself would be significant. The Government of India has already made a policy call to teach about AI from class three onwards, and I'm sure many states will do the same, and that will form part of the curriculum. Likewise, in practically every institute and every university, AI needs to be taught across all disciplines, not only to the computer science people in the IITs but to just about everybody. Because the jobs in AI are not just for the people who build the models; those are the nerdy PhDs, probably about 300 of them across the world. The rest of us are not going to do that; the rest of us are not going to be Sam Altman or Dario Amodei or Demis Hassabis, none of those. All of them we saw yesterday.

The rest of us are going to be more journeymen, people who will actually figure out ways in which this technology will affect us. So people need to know, people need to understand, even in an area like art history, what the technology can do to their own discipline and how they can leverage it. So if the Central Square Foundation and educationists and everybody focus on this area and are able to train the next generation to do this, to figure out what jobs can work for them, that's going to make all the difference. None of us here are in an age group where, I mean, Ramya and I are probably contemporaries, so we are too gray and too old now for this to make a huge difference for us.

But this is a technology for the next generation, and they should not be lost in this. And that, I think, is something we need to keep a sense of. The whole message of this summit is inclusion, bringing everyone in, and we have to ensure through the education process that no one is left behind. So thank you very much for having me, and I wish you all the best. Again, a plug for the Expo: try and take a look. And thank you all for joining us here today. Thanks. Thank you.

Related ResourcesKnowledge base sources related to the discussion topics (34)
Factual NotesClaims verified against the Diplo knowledge base (5)
Confirmedhigh

“Speaker 1 introduced the Central Square Foundation (CSF), a philanthropy focused on school education and ed‑tech in India.”

The knowledge base lists Speaker 1 as a Central Square Foundation representative, confirming CSF’s role as a philanthropy working on school education and educational technology in India [S2].

Confirmedhigh

“Moderator Bhanu Potta highlighted the gender balance of the panel, noting previous panels had been male‑dominated and that she was moderating four distinguished women in education.”

Gender-balanced panels are specifically highlighted in the knowledge base, which notes the importance of gender balance in such discussions [S107].

Additional Contextmedium

“AI is already embedded in phones, homes and classrooms, making the debate over whether AI should be in schools moot.”

The knowledge base remarks that AI is not limited to the classroom and is present in many everyday spaces, underscoring its pervasive integration beyond formal education settings [S18].

Additional Contexthigh

“AI’s greatest value for learners lies in personalisation – delivering customised lessons, tailored learning pathways, individualised assessments and feedback that can compensate for the high student‑teacher ratios typical of the Global South.”

AI is described as capable of providing personalised learning experiences tailored to individual students’ needs, pace and language, which is especially valuable for supporting overwhelmed teachers and high student-teacher ratios in the Global South [S66].

Additional Contextmedium

“AI will act as an assistant for teachers, enhancing productivity and pedagogy through tools that help design lesson plans, run diagnostic tests and generate and evaluate assessments.”

The knowledge base emphasizes the need for teachers to acquire AI literacy and professional development to effectively use AI tools in lesson planning, diagnostics and assessment, confirming the assistant role described [S15] and [S112].

External Sources (112)
S1
AI-Powered Chips and Skills Shaping Indias Next-Gen Workforce — -S. Krishnan- Role/Title: Secretary of METI (Ministry of Electronics and Information Technology)
S2
Empowering India &amp; the Global South Through AI Literacy — -Shri S. Krishnan: Secretary, Ministry of Electronics and Information Technology (MeitY), Government of India
S3
Panel Discussion AI &amp; Cybersecurity _ India AI Impact Summit — Sorry, could I make a quick announcement to have all the panelists and the speakers on the stage for a quick photo? Mr. …
S4
Empowering India &amp; the Global South Through AI Literacy — – Bhanu Potta- Dr. Shabana Bhattacharya – Dr. Shabana Bhattacharya- Tanushree Narain Sharma- Bhanu Potta – Tanushree N…
S5
Empowering India &amp; the Global South Through AI Literacy — – Dr. Shabana Bhattacharya- Tanushree Narain Sharma – Dr. Shabana Bhattacharya- Tanushree Narain Sharma- Bhanu Potta -…
S6
https://dig.watch/event/india-ai-impact-summit-2026/empowering-india-the-global-south-through-ai-literacy — And so another significant transformation that AI brings is that it lowers the barrier to quality education, especially …
S7
Empowering India &amp; the Global South Through AI Literacy — Dr. Shabana Bhattacharya Okay. So first, I think we should start with the child. Dr. Shabana Bhattacharya Yes. so when …
S8
Empowering India &amp; the Global South Through AI Literacy — – Dr. Shabana Bhattacharya- Chitra Ravi
S9
Empowering India &amp; the Global South Through AI Literacy — -Ramya Venkataraman: Associated with CENTA; previously worked at McKinsey where she bootstrapped the education practice …
S10
https://dig.watch/event/india-ai-impact-summit-2026/empowering-india-the-global-south-through-ai-literacy — The conversation today is really about what features, futures do we want to build of the… tool which is coming into ou…
S11
Keynote-Martin Schroeter — -Speaker 1: Role/Title: Not specified, Area of expertise: Not specified (appears to be an event moderator or host introd…
S12
Responsible AI for Children Safe Playful and Empowering Learning — -Speaker 1: Role/title not specified – appears to be a student or child participant in educational videos/demonstrations…
S13
Building Trusted AI at Scale Cities Startups &amp; Digital Sovereignty – Keynote Vijay Shekar Sharma Paytm — -Speaker 1: Role/Title: Not mentioned, Area of expertise: Not mentioned (appears to be an event host or moderator introd…
S14
We are the AI Generation — Doreen Bogdan Martin: Thank you. Good morning and welcome to Geneva for the AI for Good Global Summit 2025. I want to th…
S15
Driving Enterprise Impact Through Scalable AI Adoption — Yeah, I think I’m interested to hear your answer. But from my side, I think it’s a tool like a calculator. I think also …
S16
IGF 2024 Global Youth Summit — Umut Pajaro Velasquez: Okay, everyone, good day or good evening wherever you are. When it comes to decisions on how …
S17
Comprehensive Report: China’s AI Plus Economy Initiative – A Strategic Discussion on Artificial Intelligence Development and Implementation — Yeah, I think I just want to add some echo to Professor Gong’s comments. I think it’s not necessarily a negative effect,…
S18
Education meets AI — Additionally, the speakers emphasized the need for personalized learning and adaptive teaching methods. They discussed t…
S19
Education, Inclusion, Literacy: Musts for Positive AI Future | IGF 2023 Launch / Award Event #27 — However, on the other hand, there is a lack of data that supports the notion that personalised learning actually increas…
S20
AI for Bharat’s Health: Addressing a Billion Clinical Realities — “It can deal with multilinguality and voice.”[51]. “There’s firstly a lot of opportunity to bridge some of these inequit…
S21
Panel Discussion AI in Healthcare India AI Impact Summit — “One of the big barriers is multilingual.”[1]. “Maybe use cases, and I briefly hit on this before, but I think certainly…
S22
Building Public Interest AI Catalytic Funding for Equitable Compute Access — “computer capability collaboration connectivity compliance and context”[3]. “From these discussions, there were six foun…
S23
AI That Empowers Safety Growth and Social Inclusion in Action — “So we’ve engaged with member states and different stakeholders about their priorities, and let me bring to your attenti…
S24
AI and Human Connection: Navigating Trust and Reality in a Fragmented World — He advocates for always validating everything AI produces and encourages experimental use of AI technology to understand…
S25
WS #110 AI Innovation Responsible Development Ethical Imperatives — Ricardo Israel Robles Pelayo: Thank you very much. Good afternoon, everyone. It is an honor to be here and share a refle…
S26
9821st meeting — Malta: Thank you, President. I begin by thanking you for convening and chairing this high-level briefing on this highly t…
S27
Operationalizing data free flow with trust | IGF 2023 WS #197 — Addressing these challenges requires collaboration between policymakers and stakeholders. Working together, we can devel…
S28
Leaders TalkX: Ethical Dimensions of the Information Society — An ethical framework must incorporate diverse cultural insights and encourage participation from all societal sectors. T…
S29
WSIS Action Lines C4 and C7:E-employment: Emerging technologies in the world of work: Addressing challenges through digital skills — Structured frameworks are essential for defining what AI literacy means and how it should be implemented in educational …
S30
Open Forum #58 Collaborating for Trustworthy AI an Oecd Toolkit and Spotlight on AI in Government — ### Country Experiences and Perspectives ### Country Implementation Examples **India – Community-Driven Development** …
S31
AI for Safer Workplaces &amp; Smarter Industries Transforming Risk into Real-Time Intelligence — Piyush advocates for a shift from traditional learning-focused education to one that emphasizes creation and practical a…
S32
AI (and) education: Convergences between Chinese and European pedagogical practices — ### Critical Thinking and Human-Centered Skills Norman Sze: Thank you for introduction. It’s my honor to join this foru…
S33
Artificial General Intelligence and the Future of Responsible Governance — Satunas highlights that education and public critical‑thinking skills are as essential as compute investments for prepar…
S34
Artificial intelligence (AI) and cyber diplomacy — The conversation expanded to highlight the universal need for digital literacy and capacity building in AI, urging gover…
S35
Policy Network on Artificial Intelligence | IGF 2023 — The importance of providing education to a diverse range of demographics, from school children to the elderly, was also …
S36
Building Scalable AI Through Global South Partnerships — And the third key aspect is putting humans at the center of this process to make sure that this is a technology that wor…
S37
Democratizing AI: Open foundations and shared resources for global impact — The speakers called for international participation in their initiatives, mentioning online forums and collaboration opp…
S38
Open Forum: A Primer on AI — In summary, the widespread adoption of AI presents opportunities and challenges. While it can boost equality, address cl…
S39
Inclusive AI For A Better World, Through Cross-Cultural And Multi-Generational Dialogue — Key to this trajectory are collaborative and inclusive policy governance, culturally attuned ethical frameworks, and bro…
S40
WSIS Action Lines C4 and C7:E-employment: Emerging technologies in the world of work: Addressing challenges through digital skills — The strong consensus on key principles—particularly the need for partnerships, human-centred AI integration, and adaptiv…
S41
Leveraging AI to Support Gender Inclusivity | IGF 2023 WS #235 — Another important point emphasized in the analysis is the significance of involving users and technical experts in the p…
S42
Empowering India &amp; the Global South Through AI Literacy — One, which I mentioned earlier, is curiosity converting into confidence. Second, that AI is able to support learning, so…
S43
Education, Inclusion, Literacy: Musts for Positive AI Future | IGF 2023 Launch / Award Event #27 — A focus for education should be familiarity with media and information so that people don’t have the feeling there’s a h…
S44
How to ensure cultural and linguistic diversity in the digital and AI worlds? — Xianhong Hu: Thank you very much Mr. Ambassador. Good morning everyone. First of all please allow me, I’d like to be able…
S45
AI (and) education: Convergences between Chinese and European pedagogical practices — Norman Sze: Thank you for introduction. It’s my honor to join this forum and share insight from perspective of professio…
S46
Developing capacities for bottom-up AI in the Global South: What role for the international community? — **Amandeep Singh Gill**, UN Tech Envoy, provided the institutional perspective and outlined the Secretary-General’s upco…
S47
Open Forum #76 Digital Literacy As a Precondition for Achieving Universal a — Policymakers should adopt a tiered approach that starts with foundational skills through community-based programs and pr…
S48
AI for Social Empowerment: Driving Change and Inclusion — Education and Skills System Overhaul: Investment requires fundamental reimagining rather than incremental improvement. Cu…
S49
Education meets AI — Another important point highlighted is the need for research and investment in education, similar to the approach taken …
S50
Why apprenticeship and storytelling are the future of learning in the AI Era — AI, through approaches such as apprenticeship models and storytelling, can help swing the ‘learning pendulum’ back. It c…
S51
WS #110 AI Innovation Responsible Development Ethical Imperatives — Ricardo Israel Robles Pelayo: Thank you very much. Good afternoon, everyone. It is an honor to be here and share a refle…
S52
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — The roadmap is built upon core principles including “human and planetary welfare, accountability and transparency, inclu…
S53
Open Forum #33 Building an International AI Cooperation Ecosystem — Ricardo Pelayo: Hi, good afternoon. It’s an honor to share with you this reflection on building an ecosystem of innovati…
S54
Keynote ‘I’ to the Power of AI An 8-Year-Old on Aspiring India Impacting the World — “Thanks to the full stack AI sovereign model now in place, Sarvam AI, I’m able to translate my book into 22 different In…
S56
Open Forum #17 AI Regulation Insights From Parliaments — AI governance requires ongoing education for all stakeholders – politicians, policymakers, and the general public. This …
S57
AI for Good Impact Initiative — Education sector needs proactive involvement from all stakeholders
S58
AI &amp; Child Rights: Implementing UNICEF Policy Guidance | IGF 2023 WS #469 — Joy Nakhayenze: Thank you. The most important aspect for us is the funding of such projects. First, the government should…
S59
Pre 8: IGF Youth Track: AI empowering education through dialogue to implementation – Follow-up to the AI Action Summit declaration from youth — Multi-stakeholder approach must include all education stakeholders from design phase, especially youth, teachers, and pa…
S60
WS #214 AI Readiness in Africa in a Shifting Geopolitical Landscape — ### Infrastructure and Capacity Constraints Despite progress in policy development, the question of how to operationali…
S61
WS #294 AI Sandboxes Responsible Innovation in Developing Countries — Jai Ganesh Udayasankaran: Thanks, Sophie. I just wanted to quickly add what was shared by the speaker from data sphere. …
S62
AI Impact Summit 2026: Global Ministerial Discussions on Inclusive AI Development — And that’s what we’re doing. prio…
S63
Building the Next Wave of AI: Responsible Frameworks &amp; Standards — What is interesting is India is uniquely positioned in this global AI discourse. Most global AI frameworks are designed …
S64
Education, Inclusion, Literacy: Musts for Positive AI Future | IGF 2023 Launch / Award Event #27 — Despite the concerns raised, chat GPT emerges as a promising tool for learning. It has the potential to save time by gen…
S65
IGF 2024 Global Youth Summit — AI has the potential to tailor education to each student’s specific requirements. This personalization can enhance the l…
S66
High Level Session 3: AI &amp; the Future of Work — Education and Skills Development Moorosi argues that AI can address educational challenges by providing personalized le…
S67
Responsible AI for Children Safe Playful and Empowering Learning — All right, colleagues, we need to come to a close because people need to move to the next session. We’re designing for s…
S68
Empowering India &amp; the Global South Through AI Literacy — I hope we don’t become artificially polite, but then I’m hoping that some of these things rubs off in the language of te…
S69
Launch / Award Event #52 Intelligent Society Development &amp; Governance Research — Development | Sociocultural | Online education The research documents real-world applications of AI across multiple sec…
S70
https://dig.watch/event/india-ai-impact-summit-2026/empowering-india-the-global-south-through-ai-literacy — One, which I mentioned earlier, is curiosity converting into confidence. Second, that AI is able to support learning, so…
S71
Education meets AI — In conclusion, the analysis provides a comprehensive overview of the key points related to digital and AI skills in educ…
S72
AI for Safer Workplaces &amp; Smarter Industries Transforming Risk into Real-Time Intelligence — Piyush advocates for a shift from traditional learning-focused education to one that emphasizes creation and practical a…
S73
AI (and) education: Convergences between Chinese and European pedagogical practices — ### Critical Thinking and Human-Centered Skills Norman Sze: Thank you for introduction. It’s my honor to join this foru…
S74
WS #232 Innovative Approaches to Teaching AI Fairness &amp; Governance — Tayma argues that educators need to adapt their teaching goals in the AI era. She suggests focusing on developing critic…
S75
Artificial General Intelligence and the Future of Responsible Governance — Satunas highlights that education and public critical‑thinking skills are as essential as compute investments for prepar…
S76
Driving Enterprise Impact Through Scalable AI Adoption — Educational institutions need to adapt curricula to emphasize critical thinking, question-asking, and evaluation skills …
S77
From principles to practice: Governing advanced AI in action — Multi-signal approach, both at the national and the global level, to ensure that no one will be left behind
S78
Comprehensive Report: China’s AI Plus Economy Initiative – A Strategic Discussion on Artificial Intelligence Development and Implementation — This comment expanded the education discussion beyond formal systems to include organic, curiosity-driven learning. It r…
S79
Building Scalable AI Through Global South Partnerships — And the third key aspect is putting humans at the center of this process to make sure that this is a technology that wor…
S80
WS #100 Integrating the Global South in Global AI Governance — Jill: Thank you, for the opportunity and also for the question, by the way. So, IEEE, as you say, is a standards organi…
S81
Defying Cognitive Atrophy in the Age of AI: A World Economic Forum Stakeholder Dialogue — The discussion began with a cautiously optimistic tone, acknowledging both opportunities and risks. However, the tone be…
S82
Opening address of the co-chairs of the AI Governance Dialogue — The tone is consistently formal, diplomatic, and optimistic throughout. It maintains a ceremonial quality appropriate fo…
S83
Opening — The overall tone was formal yet optimistic. Speakers acknowledged the serious challenges posed by rapid technological ch…
S84
How Multilingual AI Bridges the Gap to Inclusive Access — The tone was consistently collaborative, optimistic, and mission-driven throughout the conversation. Speakers demonstrat…
S85
Empowering education through connectivity ( Giga – UNICEF and ITU joint initiative) — Cooperation and partnerships between international organizations were commended, with one speaker referring to a project…
S86
Fostering hybrid curriculum for inclusive learning environments — The COVID-19 pandemic precipitated a significant educational crisis, resulting in the unprecedented closure of schools g…
S87
How to believe in the future? — This provides an opportunity for authentic, credible impact stories
S88
Open Forum #68 WSIS+20 Review and SDGs: A Collaborative Global Dialogue — This reframing is profound because it shifts the evaluation criteria from policy rhetoric to tangible human impact. The …
S89
Engineering Accountable AI Agents in a Global Arms Race: A Panel Discussion Report — The discussion maintained a thoughtful but somewhat cautious tone throughout, with speakers acknowledging both opportuni…
S90
Webinar session — The discussion maintained a diplomatic and constructive tone throughout, with participants demonstrating nuanced thinkin…
S91
Strengthening Corporate Accountability on Inclusive, Trustworthy, and Rights-based Approach to Ethical Digital Transformation — The discussion maintained a professional, collaborative tone throughout, with speakers demonstrating expertise while ack…
S92
Ensuring Safe AI: Monitoring Agents to Bridge the Global Assurance Gap — The tone was collaborative and solution-oriented throughout, with participants acknowledging both the urgency and comple…
S93
When Code and Creativity Collide: AI’s Transformation of Music and Creative Expression — The tone was thoughtful and forward-looking, with both speakers showing cautious optimism rather than fear. Harvey Mason…
S94
Welfare for All Ensuring Equitable AI in the Worlds Democracies — The conversation maintained an optimistic and collaborative tone throughout, with participants sharing practical solutio…
S95
AI for Democracy_ Reimagining Governance in the Age of Intelligence — The discussion maintained a consistently thoughtful and cautiously optimistic tone throughout. Speakers acknowledged ser…
S96
Bridging the AI innovation gap — The tone is consistently inspirational and collaborative throughout. The speaker maintains an optimistic, forward-lookin…
S97
Safeguarding Children with Responsible AI — The discussion maintained a tone of “measured optimism” throughout. It began with urgency and concern (particularly in B…
S98
Session — – Stephanie: Participant mentioning civil society organizations Jovan Kurbalija: Thank you. Happy New Year. Good. L…
S99
Artificial intelligence (AI) – UN Security Council — The global focus on Artificial Intelligence (AI) capacity-building efforts has been a significant topic of discussion am…
S100
Powering AI Global Leaders Session AI Impact Summit India — “And, you know, this is literacy in the sense of, you know, reading and writing and arithmetic and AI literacy.”[33]. “S…
S101
Fireside Chat Intel Tata Electronics CDAC &amp; Asia Group | India AI Impact Summit — “And then there will be a mass -scale deployment of AI across the board.”[6]. “that will make a large scale impact on th…
S102
Responsible AI in India Leadership Ethics &amp; Global Impact part1_2 — The tone was professional, collaborative, and pragmatically optimistic throughout. Speakers maintained a solution-orient…
S103
Building Indias Digital and Industrial Future with AI — The discussion maintained a collaborative and forward-looking tone throughout, with industry experts, regulators, and po…
S104
How the Global South Is Accelerating AI Adoption: Finance Sector Insights — The tone was consistently optimistic and collaborative throughout the discussion. Panelists demonstrated mutual respect …
S105
AI in schools: The reality is messier than the solutions — As the school year is in full swing, the issue of AI in schools and education keeps coming up everywhere. Teachers share…
S106
Knowledge in the Age of AI: World Economic Forum Town Hall Discussion — And so it certainly should exist as part of the classroom and as part of schooling. But like I said, it can become a cru…
S107
Digital Public Infrastructure, Policy Harmonisation, and Digital Cooperation – AI, Data Governance,and Innovation for Development — The discussion was moderated by Chris Odu and featured panellists including Binty Mansaray (digital security auditor), A…
S108
Celebrating 20 Years of Multistakeholder Engagement: WSIS Forum, IGF, and the Road Ahead — Contri highlights the persistent gender imbalance in internet governance forums, noting that panels are still predominan…
S109
The WSIS welcome Part I: Meet the Movers Behind It — Noteworthy observations from the session included an acknowledgment of the gender imbalance on the panel, which was reco…
S110
Towards Parity in Power / DAVOS 2025 — Alicia Bárcena Ibarra: I mean, it was very difficult. Actually, she got the majority of votes, 35 million votes, whic…
S111
Teachers see AI as an educational tool — Teachers have long worried about ChatGPT enabling students to cheat, with its ability to produce essays and solve problems…
S112
Rethinking learning: Hope, solutions, and wisdom with AI in the classroom — But teachers need support. They need professional development around AI literacy, reasonable class sizes that allow for …
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Speaker 1
1 argument · 28 words per minute · 227 words · 479 seconds
Argument 1
AI literacy as a foundation for students, parents, and teachers to become active contributors rather than passive recipients (Speaker 1)
EXPLANATION
Speaker 1 stresses that as edtech becomes AI‑driven, every learner, parent and teacher must acquire AI literacy so they can shape and use the technology rather than merely consume it. This framing sets the premise for the whole discussion.
EVIDENCE
She explains that because edtech is now powered by AI, it is crucial to impart AI literacy to every student, parent, and teacher in India so they become active contributors instead of passive recipients [4].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need to teach AI as a core tool and empower all stakeholders is echoed in discussions about AI as a calculator-like tool for education [S15] and the broader call for empowerment and equity in AI initiatives [S12]; foundational knowledge for future workforces is also highlighted in [S1].
MAJOR DISCUSSION POINT
Need for AI Literacy Across All Stakeholders
AGREED WITH
Dr. Shabana Bhattacharya
Dr. Shabana Bhattacharya
8 arguments · 169 words per minute · 1327 words · 470 seconds
Argument 1
AI is already embedded in classrooms; understanding its workings is essential for responsible use (Dr. Shabana Bhattacharya)
EXPLANATION
Dr. Bhattacharya points out that AI is already present in everyday devices and learning environments, making it unavoidable for educators and learners. Therefore, understanding how AI works is a prerequisite for its responsible deployment.
EVIDENCE
She notes that AI is in our phones, homes and classrooms, whether we like it or not, and that the debate about whether AI should be in the classroom is already settled, shifting focus to productive use [16-19].
MAJOR DISCUSSION POINT
Need for AI Literacy Across All Stakeholders
AGREED WITH
Speaker 1, Shri S. Krishnan
Argument 2
Foundational AI concepts must be taught universally before role‑specific skilling (Dr. Shabana Bhattacharya)
EXPLANATION
She distinguishes AI literacy – a set of universal foundational concepts – from AI skilling, which varies by role. The curriculum therefore starts with core AI ideas that everyone should know before moving to specialised training.
EVIDENCE
She explains that AI literacy refers to a universal set of foundational concepts that everybody should be aware of, whereas AI skilling is role-specific, and the curriculum is built around this distinction [170-179].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Analyses of AI literacy curricula distinguish universal foundational concepts from role-specific skilling, supporting this view [S2].
MAJOR DISCUSSION POINT
Need for AI Literacy Across All Stakeholders
AGREED WITH
Speaker 1
Argument 3
AI enables personalized lessons, tailored pathways, and individualized feedback, improving learning outcomes (Dr. Shabana Bhattacharya)
EXPLANATION
Dr. Bhattacharya argues that AI can overcome large student‑teacher ratios by delivering customized content, adaptive learning pathways and real‑time feedback, thereby raising learning gains. Personalisation is presented as a key transformation for learners.
EVIDENCE
She cites research showing one-on-one tutoring outperforms traditional classrooms, and explains that AI can provide customized lessons, tailored pathways, individualized assessments and feedback to improve learning quality [30-37].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Reports on AI-driven personalized and adaptive learning describe these benefits and challenges, confirming the claim [S18]; further discussion of the impact of personalized AI learning appears in [S19].
MAJOR DISCUSSION POINT
AI‑Driven Personalization and Equity in Learning
AGREED WITH
Tanushree Narain Sharma
Argument 4
Multilingual and voice‑based AI lowers barriers, bringing quality content to underserved learners (Dr. Shabana Bhattacharya)
EXPLANATION
She highlights that AI’s voice and multilingual capabilities make high‑quality learning material accessible in regional languages, expanding reach to students and parents who previously lacked such resources. This reduces equity gaps in education.
EVIDENCE
She states that voice-based and multilingual AI makes quality learning content available in regional languages, enabling greater parental engagement and broader access for underserved learners [45-47].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Evidence that multilingual and voice capabilities can bridge equity gaps in education is provided in [S20] and [S21]; similar observations about voice-based AI lowering barriers are noted in the curriculum overview [S2].
MAJOR DISCUSSION POINT
AI‑Driven Personalization and Equity in Learning
AGREED WITH
Tanushree Narain Sharma, Chitra Ravi, Shri S. Krishnan
Argument 5
AI assists teachers in lesson planning, diagnostics, and assessment, boosting productivity and pedagogy (Dr. Shabana Bhattacharya)
EXPLANATION
According to Dr. Bhattacharya, AI functions as a teaching assistant, helping educators design lesson plans, run diagnostic tests, create and evaluate assessments, which enhances both efficiency and instructional quality.
EVIDENCE
She gives examples of teachers using AI-based tools to create lesson plans, run diagnostic tests to identify misconceptions, and produce as well as evaluate assessments, thereby improving productivity and pedagogy [41-44].
MAJOR DISCUSSION POINT
Teacher Preparedness, Confidence, and the Hope‑Fear Spectrum
Argument 6
Curriculum built on four pillars: AI fundamentals, data role, societal/environmental impact, and effective prompting (Dr. Shabana Bhattacharya)
EXPLANATION
She outlines the AI Samarth curriculum’s four pillars: understanding AI and its applications, grasping data’s role in training models, appreciating societal and environmental implications, and learning effective prompting for generative tools. This structure aims to provide a comprehensive foundation.
EVIDENCE
She describes the four pillars, namely what AI is and its applications, the role of data, societal and environmental impacts, and how to write effective prompts for generative AI, which together form the backbone of the curriculum [170-190].
MAJOR DISCUSSION POINT
Designing AI Literacy Curriculum with Ethics and Critical Thinking
Argument 7
Critical thinking is fostered by encouraging verification of AI outputs against reliable sources and using AI as a reviewer rather than a source of answers (Dr. Shabana Bhattacharya)
EXPLANATION
She stresses that learners must not accept AI‑generated answers at face value; instead they should cross‑check with trusted sources and use AI to review their own work, cultivating a critical mindset.
EVIDENCE
She advises users to double-check AI results with valid sources, avoid over-reliance, and use AI as a reviewer rather than a primary answer provider, thereby reinforcing critical thinking [193-194].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Guidelines urging users to validate AI-generated content and keep a human in the loop align with this recommendation [S24].
MAJOR DISCUSSION POINT
Designing AI Literacy Curriculum with Ethics and Critical Thinking
AGREED WITH
Chitra Ravi
Argument 8
Embedding ethics, bias awareness, and responsible engagement is essential for all stakeholders (Dr. Shabana Bhattacharya)
EXPLANATION
Dr. Bhattacharya notes that AI systems can inherit bias from training data, leading to fairness and societal concerns. Hence, ethics and bias awareness must be integral to AI literacy for students, teachers and parents.
EVIDENCE
She discusses how AI trained on particular data can exhibit bias and fairness issues, and stresses the need to understand these societal impacts as part of AI education [181-185].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Calls for safe, trustworthy AI systems and inclusive ethical frameworks reinforce the importance of ethics and bias awareness in AI education [S23] and [S28].
MAJOR DISCUSSION POINT
Designing AI Literacy Curriculum with Ethics and Critical Thinking
Tanushree Narain Sharma
5 arguments · 209 words per minute · 661 words · 189 seconds
Argument 1
AI Samarth converts student curiosity into confidence, reduces learning gaps, and supports language development (Tanushree Narain Sharma)
EXPLANATION
Tanushree describes how the AI Samarth programme turns students’ initial curiosity into confidence, helps bridge learning gaps and improves language skills, especially in government schools. She presents this as a key transformative outcome.
EVIDENCE
She shares observations from Odisha where students like Shraddha and Poonam use AI to check difficult topics, generate project ideas and gain confidence, summarising three patterns: curiosity to confidence, language improvement, and reduced learning gaps [70-85].
MAJOR DISCUSSION POINT
AI‑Driven Personalization and Equity in Learning
AGREED WITH
Dr. Shabana Bhattacharya, Chitra Ravi, Shri S. Krishnan
Argument 2
Teachers experience a spectrum from anxiety to optimism; AI literacy programs help balance this (Tanushree Narain Sharma)
EXPLANATION
She notes that teachers initially feel fear or anxiety about AI but, through exposure to AI Samarth, move toward optimism and confidence, illustrating the hope‑fear spectrum.
EVIDENCE
She remarks that the program converts curiosity into confidence and that teachers shift from apprehension to a positive stance, indicating a movement along the hope-fear spectrum [82-85].
MAJOR DISCUSSION POINT
Teacher Preparedness, Confidence, and the Hope‑Fear Spectrum
AGREED WITH
Chitra Ravi
Argument 3
Student Shraddha uses AI as a learning companion, turning curiosity into confidence (Tanushree Narain Sharma)
EXPLANATION
Tanushree recounts Shraddha’s experience of initially using AI for fun, then relying on it to clarify difficult subjects and verify with teachers, which boosted her confidence.
EVIDENCE
She describes Shraddha saying she first used AI for fun, later to cross-check difficult topics and seek teacher help, turning curiosity into confidence [73-79].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
A field note describing Shraddha’s shift from casual to purposeful AI use and the confidence it generated directly supports this story [S6].
MAJOR DISCUSSION POINT
Real‑World Impact Stories and Student Empowerment
Argument 4
Student Poonam leverages AI for project ideas and preparation, illustrating practical benefits (Tanushree Narain Sharma)
EXPLANATION
Poonam’s story shows a government‑school student moving from seeing AI as a buzzword to using it for concrete academic tasks like generating project ideas and preparing coursework, demonstrating real‑world utility.
EVIDENCE
She reports Poonam explaining that AI helped her generate project ideas and prepare, shifting her perception from entertainment to a useful learning tool [80-86].
MAJOR DISCUSSION POINT
Real‑World Impact Stories and Student Empowerment
Argument 5
AI Samarth program aims to reach nine million students, demonstrating large‑scale impact (Tanushree Narain Sharma)
EXPLANATION
Tanushree mentions the scale ambition of the AI Samarth initiative, targeting nine million learners, underscoring the programme’s breadth and potential national impact.
EVIDENCE
She states that the programme is reaching “nine million students” and outlines the pattern of outcomes observed across that scale [71-84].
MAJOR DISCUSSION POINT
Real‑World Impact Stories and Student Empowerment
Chitra Ravi
4 arguments · 158 words per minute · 1304 words · 495 seconds
Argument 1
AI acts as an equalizer, expanding access to quality education for low‑resource schools (Chitra Ravi)
EXPLANATION
Chitra argues that AI serves as an equalizer, providing high‑quality, relevant content to teachers and students in low‑fee private and government schools that lack traditional resources, thereby narrowing the education gap.
EVIDENCE
She notes that AI Samarth has empowered teachers with relevant use cases, that AI is an equalizer and will become a leveler for low-resource classrooms, and that 40% of classrooms still lack computer labs [108-112][146-148].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Research highlighting multilingual, voice-driven AI as a means to close equity gaps in education corroborates this equalizer role [S20]; broader equity goals are also discussed in [S12].
MAJOR DISCUSSION POINT
AI‑Driven Personalization and Equity in Learning
AGREED WITH
Dr. Shabana Bhattacharya, Tanushree Narain Sharma, Shri S. Krishnan
Argument 2
Teachers transition from fear of replacement to hopeful, purposeful use of AI, gaining confidence and agency (Chitra Ravi)
EXPLANATION
Chitra describes a spectrum of teacher sentiment from fear to hope, explaining how AI literacy demystifies the technology, builds confidence and encourages purposeful integration in teaching practice.
EVIDENCE
She outlines the hope-fear spectrum, recounts teachers moving from fear of replacement to confidence after understanding AI, and emphasizes the importance of agency and purposeful use [120-130].
MAJOR DISCUSSION POINT
Teacher Preparedness, Confidence, and the Hope‑Fear Spectrum
AGREED WITH
Tanushree Narain Sharma
Argument 3
Teacher training creates an emotionally safe environment, modeling confidence and ethical use of AI (Chitra Ravi)
EXPLANATION
She stresses that effective AI teacher training must provide an emotionally safe space, allowing educators to explore AI without anxiety and to model ethical, confident usage for their students.
EVIDENCE
She states that teacher training is about building an emotionally safe environment, especially given the hope-fear spectrum, and that this approach helps teachers gain confidence and model ethical AI use [198-199][200-203].
MAJOR DISCUSSION POINT
Designing AI Literacy Curriculum with Ethics and Critical Thinking
AGREED WITH
Dr. Shabana Bhattacharya
Argument 4
Teachers report increased politeness and positive classroom interactions after engaging with LLMs (Chitra Ravi)
EXPLANATION
Chitra observes that interaction with large language models has made teachers more polite and courteous in the classroom, suggesting subtle behavioural shifts stemming from AI exposure.
EVIDENCE
She notes that many teachers now say they are becoming more polite because LLMs like ChatGPT compliment their questions, and this politeness is spreading among teachers [202-205].
MAJOR DISCUSSION POINT
Real‑World Impact Stories and Student Empowerment
Ramya Venkataraman
2 arguments · 175 words per minute · 85 words · 29 seconds
Argument 1
In remote Jharkhand, teachers felt they were falling behind but regained confidence through AI literacy (Ramya Venkataraman)
EXPLANATION
Ramya shares a concrete example from a remote Jharkhand school where a teacher feared students were advancing faster than she was, but AI literacy helped her catch up, illustrating the confidence boost AI training can provide.
EVIDENCE
She recounts a teacher saying she was worried her students were moving ahead faster than she was, and that AI literacy enabled her to catch up [165].
MAJOR DISCUSSION POINT
Teacher Preparedness, Confidence, and the Hope‑Fear Spectrum
Argument 2
AI literacy initiatives are being observed across 100+ countries, highlighting diverse teacher experiences in the Global South (Ramya Venkataraman)
EXPLANATION
Ramya notes that through her work with CENTA, teachers from over a hundred countries are engaging with AI, underscoring the global reach and varied contexts of AI literacy efforts.
EVIDENCE
She mentions that, over the past several years, teachers from 100+ countries have engaged with AI through CENTA, indicating a broad international footprint [164-166].
MAJOR DISCUSSION POINT
Scaling AI Literacy in the Global South and Policy Support
Shri S. Krishnan
3 arguments · 162 words per minute · 861 words · 317 seconds
Argument 1
Inclusive AI education is critical to ensure no one is left behind, as highlighted by national policy (Shri S. Krishnan)
EXPLANATION
Shri Krishnan emphasizes that democratizing AI through education is essential for inclusion, and cites the Indian government’s policy to teach AI from Class 3 onward as a concrete step toward that goal.
EVIDENCE
He remarks that the summit is democratizing AI, bringing concerns into the room, and cites the government’s policy to introduce AI education from Class 3 across states, aiming for inclusive participation [229-232][242-244].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Policy-level calls for empowering teachers, learners, and parents through AI literacy echo this inclusive stance [S12], as does the duty of education systems to teach AI tools [S15]; foundational AI knowledge for inclusive growth is noted in [S1].
MAJOR DISCUSSION POINT
Need for AI Literacy Across All Stakeholders
AGREED WITH
Dr. Shabana Bhattacharya, Tanushree Narain Sharma, Chitra Ravi
Argument 2
Indian government policy mandates AI education from Class 3 onward, aiming for nationwide inclusion (Shri S. Krishnan)
EXPLANATION
He points out the official policy decision that AI will be part of the curriculum starting in the third grade, reinforcing the commitment to embed AI literacy at the foundational level of schooling.
EVIDENCE
He states that the government of India has already made a policy call to teach AI from Class 3 onward, which will be incorporated into state curricula [242-244].
MAJOR DISCUSSION POINT
Scaling AI Literacy in the Global South and Policy Support
Argument 3
Democratizing AI through education ensures a skilled future workforce across disciplines, not just computer science (Shri S. Krishnan)
EXPLANATION
He argues that AI knowledge should be spread across all academic streams, because future jobs will require AI awareness beyond technical fields, and this broad-based education will empower the next generation.
EVIDENCE
He explains that AI needs to be taught in every discipline, not only computer science, so that a wide range of professionals can understand and leverage the technology, highlighting the need for a skilled workforce across domains [232-246].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Analyses of AI’s impact on the next-generation workforce stress the need for broad AI education beyond technical fields [S1]; the view of AI as an essential tool for all learners is also reflected in discussions of AI as a calculator-like educational tool [S15].
MAJOR DISCUSSION POINT
Scaling AI Literacy in the Global South and Policy Support
Agreements
Agreement Points
AI literacy is essential for all stakeholders (students, teachers, parents, and the broader society)
Speakers: Speaker 1, Dr. Shabana Bhattacharya, Shri S. Krishnan
AI literacy as a foundation for students, parents, and teachers to become active contributors rather than passive recipients (Speaker 1) AI is already embedded in classrooms; understanding its workings is essential for responsible use (Dr. Shabana Bhattacharya) Inclusive AI education is critical to ensure no one is left behind, as highlighted by national policy (Shri S. Krishnan)
All three speakers stress that AI literacy must be provided to every learner, parent and educator so that they can actively shape and use AI rather than merely consume it, noting that AI is already present in everyday learning environments and that inclusive education is needed to avoid anyone being left behind [4][50-51][229-244].
POLICY CONTEXT (KNOWLEDGE BASE)
Multiple policy briefs stress universal AI education as a prerequisite for responsible adoption, highlighting the need for broad-based information dissemination and capacity building across society [S38][S39][S56][S57][S59].
AI can act as an equalizer, improving equity and reaching underserved learners
Speakers: Dr. Shabana Bhattacharya, Tanushree Narain Sharma, Chitra Ravi, Shri S. Krishnan
Multilingual and voice‑based AI lowers barriers, bringing quality content to underserved learners (Dr. Shabana Bhattacharya) AI Samarth converts student curiosity into confidence, reduces learning gaps, and supports language development (Tanushree Narain Sharma) AI acts as an equalizer, expanding access to quality education for low‑resource schools (Chitra Ravi) Inclusive AI education is critical to ensure no one is left behind, as highlighted by national policy (Shri S. Krishnan)
The speakers highlight that AI’s multilingual, voice-based, and adaptive capabilities can bring high-quality learning resources to remote, low-resource, and multilingual contexts, thereby narrowing digital and educational divides and fostering inclusive participation [45-47][82-85][108-112][146-148][229-232].
POLICY CONTEXT (KNOWLEDGE BASE)
Inclusive AI frameworks call for reaching underserved populations and promoting gender and geographic equity, echoing WSIS Action Lines and inclusive development agendas [S39][S40][S48][S62].
AI enables personalized learning and improves educational outcomes
Speakers: Dr. Shabana Bhattacharya, Tanushree Narain Sharma
AI enables personalized lessons, tailored pathways, and individualized feedback, improving learning outcomes (Dr. Shabana Bhattacharya) AI Samarth reduces learning gaps and supports language development (Tanushree Narain Sharma)
Both speakers argue that AI-driven personalization (customized lessons, adaptive pathways, and individualized feedback) helps overcome large student-teacher ratios and closes learning gaps, leading to better learning gains [30-37][82-85].
POLICY CONTEXT (KNOWLEDGE BASE)
Research on AI-driven apprenticeship and storytelling models highlights personalised pathways and outcome gains, while language-support initiatives demonstrate concrete learning improvements [S50][S42][S49].
Critical thinking, verification, and ethics must be embedded in AI literacy
Speakers: Dr. Shabana Bhattacharya, Chitra Ravi
Critical thinking is fostered by encouraging verification of AI outputs against reliable sources and using AI as a reviewer rather than a source of answers (Dr. Shabana Bhattacharya) Teacher training creates an emotionally safe environment, modeling confidence and ethical use of AI (Chitra Ravi)
Both emphasize that AI education should teach learners to double-check AI-generated information, avoid over-reliance, and engage with AI ethically within a supportive, emotionally safe training environment [193-194][198-203].
POLICY CONTEXT (KNOWLEDGE BASE)
Policy discussions repeatedly flag bias mitigation, verification of AI-generated content, and ethical safeguards as core curriculum components [S38][S51][S43][S50].
Teachers move along a hope‑fear spectrum; AI literacy builds confidence and agency
Speakers: Tanushree Narain Sharma, Chitra Ravi
Teachers experience a spectrum from anxiety to optimism; AI literacy programs help balance this (Tanushree Narain Sharma) Teachers transition from fear of replacement to hopeful, purposeful use of AI, gaining confidence and agency (Chitra Ravi)
Both note that educators initially fear AI but, through exposure to AI literacy programmes, shift toward optimism, confidence, and purposeful integration in their teaching practice [82-85][120-130].
POLICY CONTEXT (KNOWLEDGE BASE)
Initiatives in the Global South describe a transition from curiosity to confidence for educators, and national policy alignments aim to boost teacher agency through AI training [S42][S55][S56].
AI literacy curricula should start with universal foundational concepts before role‑specific skilling
Speakers: Speaker 1, Dr. Shabana Bhattacharya
AI literacy as a foundation for students, parents, and teachers to become active contributors rather than passive recipients (Speaker 1) Foundational AI concepts must be taught universally before role‑specific skilling (Dr. Shabana Bhattacharya)
Both stress that AI education must begin with core, universal AI concepts for all learners before moving to specialised, role-based skill development [4][170-179].
POLICY CONTEXT (KNOWLEDGE BASE)
Tiered digital-literacy models advocate beginning with universal foundations before advancing to specialised skills, a view echoed in capacity-building roadmaps for the Global South [S47][S42][S48].
Similar Viewpoints
Both argue that AI literacy should be universal and foundational, providing a base for all stakeholders before any specialised training [4][170-179].
Speakers: Speaker 1, Dr. Shabana Bhattacharya
AI literacy as a foundation for students, parents, and teachers to become active contributors rather than passive recipients (Speaker 1) Foundational AI concepts must be taught universally before role‑specific skilling (Dr. Shabana Bhattacharya)
Both highlight AI’s role in improving equity by delivering content in regional languages and closing learning gaps for disadvantaged students [45-47][82-85].
Speakers: Dr. Shabana Bhattacharya, Tanushree Narain Sharma
Multilingual and voice‑based AI lowers barriers, bringing quality content to underserved learners (Dr. Shabana Bhattacharya) AI Samarth reduces learning gaps, supports language development and builds confidence (Tanushree Narain Sharma)
Both describe a hope‑fear continuum among teachers that AI literacy helps shift toward confidence and purposeful adoption [82-85][120-130].
Speakers: Tanushree Narain Sharma, Chitra Ravi
Teachers experience a spectrum from anxiety to optimism; AI literacy programs help balance this (Tanushree Narain Sharma) Teachers transition from fear of replacement to hopeful, purposeful use of AI, gaining confidence and agency (Chitra Ravi)
Both stress that AI education must embed ethical practice, critical verification, and a safe learning environment for teachers and students [193-194][198-203].
Speakers: Dr. Shabana Bhattacharya, Chitra Ravi
Critical thinking is fostered by encouraging verification of AI outputs against reliable sources and using AI as a reviewer rather than a source of answers (Dr. Shabana Bhattacharya) Teacher training creates an emotionally safe environment, modeling confidence and ethical use of AI (Chitra Ravi)
Both underline the necessity of inclusive, widespread AI education to prevent exclusion of any group, aligning philanthropic and policy perspectives [4][229-244].
Speakers: Shri S. Krishnan, Speaker 1
Inclusive AI education is critical to ensure no one is left behind, as highlighted by national policy (Shri S. Krishnan) AI literacy as a foundation for students, parents, and teachers to become active contributors rather than passive recipients (Speaker 1)
Unexpected Consensus
Alignment between national policy (AI from Class 3) and philanthropic emphasis on universal AI literacy
Speakers: Shri S. Krishnan, Speaker 1
Inclusive AI education is critical to ensure no one is left behind, as highlighted by national policy (Shri S. Krishnan) AI literacy as a foundation for students, parents, and teachers to become active contributors rather than passive recipients (Speaker 1)
It is notable that a government policy mandating AI education from the third grade aligns directly with a civil-society organization’s call for AI literacy for all stakeholders, showing cross-sector consensus on early, inclusive AI education [4][242-244].
POLICY CONTEXT (KNOWLEDGE BASE)
National AI strategies that embed AI education from early grades are explicitly aligned with philanthropic calls for universal literacy, as reflected in policy roadmaps and capacity-building recommendations [S55][S52][S56].
Overall Assessment

The participants show strong consensus that AI literacy must be universal, foundational, and inclusive, serving as an equalizer that personalizes learning, reduces gaps, and empowers teachers and students alike. They also agree on embedding critical thinking, ethics, and a supportive training environment, while recognizing the emotional journey of teachers from fear to confidence.

High consensus across speakers, indicating a coordinated vision that can facilitate policy alignment, large‑scale programme design, and resource mobilisation for AI‑enabled education in the Global South.

Differences
Different Viewpoints
Approach to AI-generated teaching materials – over‑utilisation without validation versus emphasis on critical verification
Speakers: Chitra Ravi, Dr. Shabana Bhattacharya
Teachers may churn out lesson plans using LLMs without knowing how to validate them (Chitra Ravi) Users must double‑check AI outputs with reliable sources and avoid over‑reliance (Dr. Shabana Bhattacharya)
Chitra observes that teachers sometimes generate lesson plans by simply prompting an LLM and lack validation skills [144-145], while Dr. Shabana stresses the need for learners to cross-check AI results with trusted sources and not accept answers at face value [193-194]. This reflects a disagreement on how AI tools should be used in classroom preparation.
POLICY CONTEXT (KNOWLEDGE BASE)
Guidelines warn against unverified AI content and stress ethical verification processes before classroom deployment [S38][S51][S50].
Design of AI literacy programmes – foundational universal curriculum versus curiosity‑to‑confidence, language and gap‑reduction focus
Speakers: Dr. Shabana Bhattacharya, Tanushree Narain Sharma
AI literacy should start with universal foundational concepts before role‑specific skilling (Dr. Shabana Bhattacharya) AI Samarth converts curiosity into confidence, improves language skills and reduces learning gaps (Tanushree Narain Sharma)
Dr. Shabana outlines a curriculum built on four universal pillars covering AI basics, data, societal impact and prompting [170-179], whereas Tanushree highlights outcomes such as curiosity turning into confidence, language support and gap reduction without detailing a foundational curriculum [70-85]. Both aim for AI literacy but differ on the primary design emphasis.
POLICY CONTEXT (KNOWLEDGE BASE)
Debates contrast a universal, tiered curriculum with models that prioritize curiosity-driven confidence, multilingual support, and learning-gap reduction, as discussed in AI-literacy pilots for the Global South [S42][S44][S47].
Readiness for AI integration – policy‑driven inclusion versus on‑ground resource constraints
Speakers: Shri S. Krishnan, Chitra Ravi
National policy mandates AI education from Class 3 to ensure inclusive participation (Shri S. Krishnan) 40% of classrooms still lack computer labs, indicating significant infrastructure gaps (Chitra Ravi)
Shri Krishnan cites government policy to teach AI from Class 3 onward as a step toward inclusive AI education [242-244], while Chitra points out that many schools still lack basic computer infrastructure, limiting immediate AI deployment [103-105]. This creates tension between policy aspirations and practical implementation capacity.
POLICY CONTEXT (KNOWLEDGE BASE)
While policies promote inclusive AI adoption, on-the-ground reports highlight infrastructure gaps, limited funding, and capacity constraints that impede implementation [S60][S58][S40].
Unexpected Differences
Perception of AI’s inevitability versus its limitations
Speakers: Bhanu Potta, Shri S. Krishnan
AI is already in our hands and the debate about its presence in classrooms is settled (Bhanu Potta) AI is not a silver bullet and may not solve all problems; experimentation is needed (Shri S. Krishnan)
Bhanu asserts that AI’s presence in phones, homes and classrooms makes the question of ‘should AI be in the classroom’ moot [15-19], while Shri Krishnan cautions that AI is not a panacea and its impact is uncertain, urging experimentation [242-244]. This contrast between inevitability and caution was not anticipated given the overall consensus on AI’s importance.
POLICY CONTEXT (KNOWLEDGE BASE)
Policy dialogues acknowledge AI’s transformative potential but also stress its technical, ethical, and societal limits, urging balanced narratives rather than deterministic views [S38][S51][S52].
Overall Assessment

The panel largely concurs on the importance of AI literacy and its potential to personalize learning, lower equity gaps and empower teachers. Disagreements surface around implementation details: how to balance AI‑generated content with critical verification, whether curricula should start with universal foundations or outcome‑driven pathways, and the gap between policy ambitions and on‑ground infrastructure. An unexpected tension appears between viewing AI as an unavoidable, already‑present tool and treating it as a technology that still requires careful experimentation.

Moderate – while there is strong shared vision, the divergences concern practical approaches and readiness, implying that coordinated policy, resource investment, and clear pedagogical guidelines are needed to translate consensus into effective action.

Partial Agreements
All three agree on the necessity of AI literacy for all stakeholders, but differ on delivery: Speaker 1 promotes a large‑scale programme (AI Summit) [4-7], Dr. Shabana proposes a curriculum with four pillars [170-179], and Shri Krishnan emphasizes top‑down policy mandates from Class 3 onward [242-244].
Speakers: Speaker 1, Dr. Shabana Bhattacharya, Shri S. Krishnan
AI literacy is essential for students, parents and teachers (Speaker 1) AI literacy provides foundational concepts for responsible use (Dr. Shabana Bhattacharya) Inclusive AI education is critical to avoid anyone being left behind (Shri S. Krishnan)
All agree that AI should boost confidence and effectiveness of teachers and learners, yet their methods vary: Chitra stresses emotionally safe teacher training [198-199], Tanushree focuses on curiosity‑to‑confidence pathways [70-85], and Dr. Shabana highlights AI tools for lesson planning and diagnostics [41-44].
Speakers: Chitra Ravi, Tanushree Narain Sharma, Dr. Shabana Bhattacharya
Building teacher confidence and agency through AI (Chitra Ravi) Transforming student curiosity into confidence and reducing gaps (Tanushree Narain Sharma) AI as a teaching assistant improving productivity and pedagogy (Dr. Shabana Bhattacharya)
Takeaways
Key takeaways
AI literacy must be universal – students, teachers, parents and broader education stakeholders need foundational AI knowledge to become active contributors rather than passive users.
AI can deliver personalized learning, multilingual content and data‑driven insights, acting as an equalizer that expands quality education to underserved and low‑resource schools.
Teachers experience a hope‑fear spectrum; effective AI literacy programs shift sentiment toward confidence and purposeful agency while preserving the teacher’s role as a facilitator.
A well‑structured curriculum should cover four pillars – AI fundamentals, the role of data, societal/environmental impacts and effective prompting – with embedded ethics, bias awareness and critical‑thinking practices.
Scaling initiatives such as AI Samarth (targeting millions of students) and national policy mandating AI education from Class 3 are critical for inclusive, future‑ready learning across the Global South.
Real‑world stories (e.g., students Shraddha and Poonam) illustrate how AI transforms curiosity into confidence, supports language development and aids project work.
Resolutions and action items
Continue and expand the AI Samarth program to reach its goal of nine million students, with ongoing teacher‑training cascades.
CSF and the Wadhwani School of AI will maintain collaboration to refine and disseminate the AI literacy curriculum across government and low‑fee private schools.
Incorporate explicit critical‑thinking exercises into the curriculum – e.g., students first answer independently, then use AI for review and improvement.
Develop teacher‑training modules that create an emotionally safe environment and address the hope‑fear spectrum, emphasizing AI as an assistant, not a replacement.
Encourage schools to adopt the four‑pillar curriculum framework and integrate AI concepts from Class 3 onward, aligning with Indian government policy.
Monitor and share best‑practice case studies (like Shraddha and Poonam) to demonstrate impact and guide further rollout.
Unresolved issues
Specific mechanisms for large‑scale validation of AI‑generated content and systematic cross‑checking against reliable sources were not defined.
How to prevent over‑reliance on AI for tasks such as lesson‑plan generation without adequate teacher verification remains an open concern.
Details on resource allocation and infrastructure needs (e.g., internet, devices) for remote schools, especially in regions like Jharkhand, were not fully addressed.
A clear governance model for ongoing ethics oversight, bias mitigation and environmental impact monitoring across the diverse implementations was not established.
The role of policy enforcement and accountability mechanisms to ensure AI literacy reaches all intended stakeholders was not concretely outlined.
Suggested compromises
Balancing the hope‑fear spectrum by providing an equilibrium – offering enough AI exposure to build confidence while maintaining safeguards against over‑use.
Positioning AI as a supportive assistant for teachers rather than a replacement, thereby addressing teacher anxieties while leveraging productivity gains.
Encouraging responsible AI use through a blend of empowerment (skill building) and caution (critical‑thinking loops), mitigating both under‑utilization and over‑utilization.
Adopting a phased rollout that combines immediate curriculum deployment with ongoing teacher training and feedback loops to adjust content based on field experience.
Thought Provoking Comments
AI is already in our hands… The conversation today is really about what futures we want to build with the tool that is coming into our hands. I think the discussion about whether AI should be in the classroom or be a learning tool is already passed.
Sets the premise that AI integration in education is inevitable, shifting the debate from ‘if’ to ‘how’ we use it, thereby framing the entire discussion around purposeful design rather than resistance.
This comment pivoted the conversation from debating AI’s presence to exploring its purposeful implementation, prompting panelists to discuss concrete use‑cases, equity, and curriculum design.
Speaker: Bhanu Potta
Personalization is the biggest value add… AI can provide customized lessons, tailored learning pathways, individualized assessments and feedback, and it can lower the barrier to quality education for underserved classes through voice‑based and multilingual capabilities.
Highlights AI’s potential to address systemic challenges in Indian education—large student‑teacher ratios and language diversity—linking technology to equity and inclusion.
Her points steered the dialogue toward equity and the need for AI literacy across all stakeholders (students, teachers, parents), influencing subsequent remarks about reaching the ‘last child’ and the importance of multilingual content.
Speaker: Dr. Shabana Bhattacharya
We saw a child named Shraddha who moved from using AI just for fun to using it as a companion to cross‑check difficult subjects, and another child Poonam who now gets project ideas and preparation help from AI. This shows curiosity converting into confidence, and AI Samarth also teaches ethics and bias.
Provides concrete, relatable examples that illustrate how AI can transform learner agency and embed ethical awareness, moving the conversation from abstract benefits to lived experiences.
Her anecdotes humanized the discussion, prompting the panel to explore the journey from curiosity to confidence and reinforcing the need for ethical AI literacy in curricula.
Speaker: Tanushree Narain Sharma
There is a spectrum between hope and fear among teachers. AI literacy brings an equilibrium, but there is a risk of over‑utilization where teachers generate lesson plans with LLMs without validating them.
Introduces the nuanced emotional landscape of teachers, acknowledging both optimism and anxiety, and flags a practical pitfall of uncritical reliance on AI outputs.
This comment deepened the conversation about teacher training, leading to discussions on building confidence, establishing safe learning environments, and embedding critical thinking safeguards.
Speaker: Chitra Ravi
AI should be taught from class three onwards and across all disciplines, not just to computer‑science students. It’s not a silver bullet, but we must experiment and ensure inclusion so no one is left behind.
Offers a policy‑level perspective, advocating for early, interdisciplinary AI education and emphasizing inclusion, thereby broadening the scope beyond K‑12 to higher education and workforce readiness.
His remarks expanded the dialogue to national policy and systemic implementation, reinforcing earlier points about universal AI literacy and prompting the panel to consider long‑term, cross‑sector strategies.
Speaker: Shri S. Krishnan
We should not ask AI for the answer outright; instead, do our own exercise, submit the answer, and then ask AI to review and improve – a critical‑thinking loop embedded in the curriculum.
Proposes a concrete pedagogical method that embeds critical thinking into AI use, addressing concerns about over‑reliance and ensuring students develop verification skills.
This suggestion guided the later part of the discussion toward curriculum design specifics, influencing how panelists described their training approaches and the emphasis on validation and critical engagement.
Speaker: Bhanu Potta
Overall Assessment

The discussion was shaped by a series of pivotal remarks that moved the conversation from abstract acceptance of AI to concrete strategies for equitable, ethical, and effective integration. Bhanu’s framing of AI’s inevitability set the stage, while Dr. Shabana’s focus on personalization and equity highlighted systemic opportunities. Tanushree’s student stories personalized the impact, and Chitra’s articulation of the hope‑fear spectrum introduced the emotional and practical challenges teachers face. Shri Krishnan’s policy‑level call for early, interdisciplinary AI education broadened the vision to national scale. Finally, Bhanu’s concrete pedagogical loop reinforced the need for critical thinking. Together, these comments redirected the dialogue toward actionable curriculum design, teacher training, and inclusive policy, deepening the analysis and steering the panel toward a shared vision of responsible AI literacy across the Global South.

Follow-up Questions
What are teachers’ experiences and perspectives on AI adoption in classrooms across the broader Global South beyond India?
Ramya’s brief response indicated uncertainty, highlighting a need for deeper insight into diverse regional contexts to inform scalable AI literacy initiatives.
Speaker: Ramya Venkataraman
How effective are AI literacy programs like AI Samarth in improving student learning outcomes, confidence, and reducing learning gaps?
Both highlighted observed shifts from curiosity to confidence and noted reductions in learning gaps, suggesting a need for systematic impact evaluation.
Speaker: Tanushree Narain Sharma; Chitra Ravi
What is the effectiveness of AI tools in delivering multilingual educational content and supporting language learning for underserved students?
Discussion of AI’s multilingual capabilities and language improvement points to a research gap in measuring efficacy across languages.
Speaker: Dr. Shabana Bhattacharya; Tanushree Narain Sharma
How can the risk of over‑utilization of AI (e.g., teachers generating lesson plans without validation) be mitigated?
Chitra warned about teachers producing AI‑generated lesson plans without proper validation, indicating a need for guidelines and safeguards.
Speaker: Chitra Ravi
What frameworks or pedagogical strategies best safeguard critical thinking when students and teachers interact with generative AI?
Both emphasized the importance of cross‑checking AI outputs and avoiding blind trust, calling for structured critical‑thinking curricula.
Speaker: Dr. Shabana Bhattacharya; Bhanu Potta
What are the environmental impacts of scaling AI usage in education, and how can they be minimized?
She noted AI’s computational and environmental costs, suggesting research into sustainable AI deployment in schools.
Speaker: Dr. Shabana Bhattacharya
How can teacher‑training programs balance the hope‑fear spectrum and create emotionally safe environments for AI adoption?
Chitra highlighted the emotional dimensions of AI adoption, indicating a need to study effective training approaches that address teacher anxieties.
Speaker: Chitra Ravi
How does AI literacy influence career pathways and job readiness across non‑technical disciplines?
Krishnan stressed AI education for all fields, implying research on its impact on diverse career trajectories.
Speaker: Shri S. Krishnan
How should AI literacy curricula be differentiated for various stakeholder roles (students, teachers, administrators) while preserving core foundational concepts?
She distinguished between foundational AI literacy and role‑specific AI skilling, suggesting a need for adaptable curriculum design.
Speaker: Dr. Shabana Bhattacharya
What validation mechanisms are needed to ensure AI‑generated educational content aligns with curriculum standards and pedagogical quality?
Concern about unvalidated AI lesson plans points to a requirement for systematic content review processes.
Speaker: Chitra Ravi
What are the social effects of AI‑mediated politeness (e.g., LLM feedback) on teacher communication styles and classroom dynamics?
She observed teachers becoming more polite due to AI interactions, indicating a novel area for sociolinguistic research.
Speaker: Chitra Ravi
What strategies can effectively reach the “last child” in remote or infrastructure‑limited areas to ensure equitable AI literacy?
Both discussed equity and scaling challenges, highlighting the need for solutions addressing connectivity and resource constraints.
Speaker: Dr. Shabana Bhattacharya; Chitra Ravi
How can AI analytics on assessment, attendance, and other school data be used to predict and prevent student dropouts?
She mentioned AI’s potential for risk identification, suggesting research into predictive models and intervention frameworks.
Speaker: Dr. Shabana Bhattacharya
What policies and safeguards are required to protect student data privacy and ethical use of AI in classrooms?
Discussion of data collection and ethical concerns indicates a need for comprehensive policy research on privacy and responsible AI use.
Speaker: Dr. Shabana Bhattacharya; Shri S. Krishnan

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

India's Roadmap to an AGI-Enabled Future

Session at a glance: Summary, keypoints, and speakers overview

Summary

The panel convened to discuss how India can build a sovereign artificial intelligence ecosystem by integrating energy, compute, and talent pillars [2-7]. Ghanshyam Prasad highlighted that data-centre capacity is expanding from tens of megawatts to gigawatt-scale projects, with around 16 GW of load already projected and a need to manage highly variable demand and “N+1+1” reliability requirements [25-33][34-36]. He noted that renewable generation has surpassed 250 GW, with 40 GW added in ten months, and that India is diversifying grid connections to neighboring countries to support future AI-driven loads [57-61][69-74]. To meet the anticipated surge, Prasad outlined plans for 100 GW of hydro-pump storage and a phased nuclear programme targeting 100 GW by 2047, while emphasizing rapid transmission-line deployment compared with global benchmarks [75-78][78-80].


Tarun Dua traced E2E Networks’ evolution from an outsourcing model to a vision of India becoming a global cloud-infrastructure hub, arguing that the country must now build for itself and the world [84-98]. Parth Sarthi explained that scaling laws make AI progress an engineering problem, requiring more GPUs, data, and electricity, and that India’s mission has already scaled to over 38 000 GPUs, with plans to exceed 50 000 [131-138][155-162][183-188]. He further argued that leveraging India’s 1.4 billion-person talent pool to create domain-specific reinforcement-learning environments can offset the hardware gap and enable voice-first models [171-180][184-186].


In response to a question on compute demand, Tarun estimated that serving the top thousand organisations would need at least 128 000 GPUs in India, and that the infrastructure could also serve global workloads because reasoning models tolerate modest latency [194-204][211-217]. Professor Jayadeva identified a shortage of PhD-bound researchers, citing career-path uncertainty, gender bias, and weak industry-university incentives as barriers, and called for goal-directed research collaborations such as the VLSI Design Tools program [108-118][307-340].


During the audience Q&A, participants raised concerns about the lack of IoT and smart-meter data for training AI, to which Prasad replied that the RDSS programme is deploying over 30 million smart meters and developing indigenous SCADA systems to generate secure, domestic data streams [367-384][400-401]; he also stressed that while generation and transmission are world-class, distribution-level automation remains a bottleneck that will improve as smart-meter roll-out progresses [368-370][373-379]. On manufacturing, Tarun argued that domestic IP creation and a large, interconnected market will gradually reduce reliance on imported microprocessors, and that scaling of semiconductor design capacity is already underway in India [430-447][457-466]. Professor Jayadeva added that most AI workloads now run on GPUs rather than traditional microprocessors, and that India’s semiconductor design ecosystem, exemplified by the SCL plant and VLSI programs, is positioned to support future AI hardware needs [478-486][490-494].


The panel concluded that, while significant challenges remain across power reliability, compute capacity, talent development, and data infrastructure, coordinated government missions, industry initiatives, and academic collaborations are aligning to create a self-sufficient AI ecosystem in India [190-193][496-500].


Keypoints


Major discussion points


Energy infrastructure must scale to power AI-driven data centres.


Shri Ghanshyam Prasad highlighted the rapid growth from 10 MW to gigawatt-scale data centres ([25-34]), the need for “N + 1 + 1” reliability and backup diesel generators ([35-41]), India’s expanding renewable capacity (now >250 GW) and its ability to meet green-power demand ([53-61]), the importance of diversifying data-centre locations near renewable hubs ([66-68]), regional inter-connections with neighbouring countries ([70-75]), and the role of storage (hydro-pump and battery) and future nuclear (SMR) to balance the load ([76-78][84-87]).


Compute resources and GPU scaling are critical for sovereign AI models.


Tarun Dua estimated that serving the top 1 000 organisations would require at least 128 000 GPUs in India today and projected a leap-frog trajectory similar to 5G ([194-204][207-208]). Parth Sarthi explained the empirical “scaling laws” that make AI an engineering problem driven by compute, data and electricity, and argued that India can compete by leveraging large-scale RL environments rather than only raw GPU count ([131-148][155-162]).


A robust talent pipeline and research ecosystem are essential.


Professor Jayadeva stressed the shortage of motivated PhD-level researchers, the need for clear career pathways, industry-university collaboration, and better IPR frameworks to translate research into products ([108-118]). He also described cultural and financial deterrents for students (long PhD duration, parental expectations) and suggested alternative research degrees (MS-Research) to broaden participation ([248-267]).


Generating domain-specific data through a physical IoT layer is a missing piece.


An audience member pointed out the lack of end-to-end sensor networks and digital twins for power assets ([351-360]). Ghanshyam Prasad responded that smart-meter rollout (≈3 crore installed, ≈25 crore pending) will create the necessary data, enable indigenous SCADA systems, and keep the data sovereign ([367-384]). Parth added that India-mission datasets (AI Kosh) and expanding GPU capacity will support building such domain-rich data sets locally ([400-408]).


India aims to become a sovereign AI hub through an integrated ecosystem.


The opening remarks framed the mission to build “frontier models for India” by owning power, hardware, talent and research ([2-7]). Parth’s discussion of scaling laws and the shift from “building models in the West” to leveraging India’s massive talent pool and multilingual environments reinforced this vision ([130-138]). Later comments on indigenising hardware, micro-processor design, and coordinated market development highlighted the need for parallel progress across energy, compute, research and manufacturing ([430-435][478-485]).


Overall purpose / goal


The session was convened to map “India’s Path to an AGI-Enabling Ecosystem,” identifying how the country can achieve sovereign, frontier-level artificial intelligence by aligning three pillars (energy, compute, and research) while also addressing manufacturing, policy, and talent development. The goal was to surface challenges, propose coordinated solutions, and galvanise stakeholders (government, industry, academia) around a shared roadmap.


Overall tone and its evolution


Opening: Formal, aspirational, and forward-looking, emphasizing national ambition and pride ([2-7]).


Technical segments: More analytical and cautionary, with speakers detailing concrete infrastructure bottlenecks and resource gaps (energy reliability, GPU shortages, talent deficits).


Audience interaction: Shift to a more probing, critical tone, raising practical concerns about data collection, IoT, and manufacturing.


Closing: Constructive and optimistic, reaffirming commitment to collaborative action and highlighting existing progress. Throughout, the tone remained respectful and solution-oriented, moving from high-level vision to granular problem-solving and back to a unifying call to action.


Speakers


Professor Jayadeva – GSV Chair Professor and former Head of Department of Electrical Engineering, IIT Delhi; expertise in VLSI, machine learning, optimization, and low-complexity models. [S2]


Suvrat Bhoosha – Co-founder and moderator at Chariot; researcher and founder focused on building sovereign frontier AI models for India. [S4]


Parth Sarthi – Co-founder, Chariot; PhD (Stanford), former Stanford professor, former Google Gemini DeepMind researcher, inventor of the Raptor retrieval-augmented generation technique. [S6][S7]


Tarun Dua – Founder and Managing Director, E2E Networks; specialist in enterprise-grade cloud infrastructure, large-scale GPU clusters and data-center services. [S9][S8]


Shri Ghanshyam Prasad – Chairperson, Central Electricity Authority, India; former member of the G20 Energy Transition Working Group and first Executive Director of the BIMSTEC Energy Centre; expertise in power generation, transmission, and market development. [S11][S10]


Audience – Members of the audience (e.g., Pradeep Subramaniam) who asked questions on IoT, data collection, and physical-layer infrastructure. [S12]


Additional speakers:


Pradeep Subramaniam – Audience participant from the physical-world/R&D sector; raised queries about IoT sensors, digital twins, and secure data hosting for AI. (derived from transcript)


Full session report: Comprehensive analysis and detailed insights

Opening & agenda (Suvrat Bhoosha) – Suvrat Bhoosha opened the session, welcoming researchers, founders and policy-makers and stating the IndiaAI Mission mandate to build “frontier models for India”. He emphasized that true sovereign AI must own the power, compute and talent, not depend on imported models, hardware or infrastructure [2-4][5-7]. The one-hour programme was outlined: opening remarks, deep-dives by each speaker, a panel discussion and a Q&A [9-13].


Energy outlook (Shri Ghanshyam Prasad)


Projected AI load: Prasad noted that AI-driven data-centres are expanding from 10 MW facilities to gigawatt-scale sites in Mumbai, Vizag, Chennai and elsewhere, with roughly 16 GW of load already projected for India [27-33].


Reliability & provisioning: He explained the “N + 1 + 1” reliability requirement – dual geographically separated power supplies plus two layers of diesel-generator backup – and the 1.7-times rule for power provisioning, meaning a 1 000 MW data-centre in Mumbai will need about 1.7 × its rated capacity in supply [35-36][49-51][28-30].
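The 1.7-times provisioning arithmetic described above is simple enough to sketch in code. The function below is an illustrative back-of-envelope only (the function name and default factor are ours, taken from the thumb rule cited in the session, not an engineering standard):

```python
def supply_needed_mw(rated_mw: float, provisioning_factor: float = 1.7) -> float:
    """Thumb rule cited in the session: a data centre needs grid supply of
    roughly 1.7x its rated capacity (some operators design for 2x)."""
    return rated_mw * provisioning_factor

# A 1,000 MW Mumbai-scale campus would need roughly 1,700 MW of provisioned supply.
print(supply_needed_mw(1000))
```

On top of this headroom, the “N + 1 + 1” requirement adds two geographically separated grid feeds and two diesel-generator backup layers, so the provisioned figure is per-feed redundancy, not a single supply path.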


Concrete data-centre examples: The Adani-Google data-centre in Noida (total 50 MW, 10 MW commissioned, 40 MW pipeline) and the planned >1 000 MW data-centre in Mumbai illustrate the scale of upcoming power demand [28-30].


Renewable growth: India’s renewable capacity has risen from ~2 GW in 2010 to >250 GW today, with >40 GW added in the last ten months, positioning the country to meet future green-power needs [57-61].


Location diversification: He urged locating data-centres near renewable-rich states (Rajasthan, Gujarat, Maharashtra, Karnataka, Telangana, Andhra Pradesh) and creating multiple “landing points” to spread load [62-68].


Regional inter-connections: Existing links with Nepal, Bhutan, Bangladesh, Myanmar and forthcoming connections to the UAE, Saudi Arabia, Singapore and Sri Lanka will further stabilise the grid [69-74].


Planning horizon: India is shifting from static five-year plans to dynamic six-month transmission planning and annual resource-adequacy updates to cope with 7-10 % annual power-demand growth (versus 1-2 % in many other countries) [69-74][84-87].


Funding for energy transition: Shri Prasad highlighted the ANRF (Anusandhan National Research Foundation) fund of ₹1 lakh crore and ₹20,000 crore allocated for CCUS (Carbon Capture, Utilisation and Storage) to bridge technology gaps and support research-to-commercial pipelines [70-73].


AI-use-case competition: He mentioned a recent AI-driven use-case competition in the power sector, which selected several startups for pilot deployments, demonstrating the push to translate research into operational solutions [71-73].


Smart-meter rollout: The RDSS (Revamped Distribution Sector Scheme) programme is funding smart-meter deployment, with ≈3 crore meters installed and ≈25 crore in the pipeline; all smart-meter data will be stored domestically, reinforcing data sovereignty [367-384][373-383][400-401].


Indigenous SCADA & cyber-security: Prasad stressed the need for indigenous SCADA systems and robust cyber-security as the smart-meter network expands [373-383].


Physical-layer data collection: In response to audience concerns, he noted that the India Energy Stack committee is deliberating on end-to-end IoT and digital-twin use-cases for the grid [380-382].


Hydro-pump storage & nuclear: India plans 100 GW of hydro-pump storage within a decade and a 100 GW nuclear programme by 2047, with an intermediate 22 GW of Small Modular Reactors (SMRs) targeted for the early 2030s [76-78][84-87].


SMR co-location: While Tarun Dua suggested co-locating data-centre campuses with SMRs to cut transmission fees and eliminate on-site diesel generators, Prasad cautioned that regulatory clearances and safety-zone buffers of 1-5 km could extend SMR roll-out beyond the optimistic 3-5-year horizon [239-246].


Compute infrastructure (Tarun Dua)


GPU scale estimate: Dua estimated that serving the top 1 000 organisations would need ≈128 000 GPUs, placing India about 18 months behind the global compute race, but he argued India can “leap-frog” as it did with 4G/5G [194-204][207-208].


Global workload potential: He noted that the compute infrastructure built for India can also serve global workloads because reasoning models tolerate modest latency increases [211-217].


SMR co-location vision: He advocated building data-centre campuses alongside SMR plants, citing modular 220 MW increments and reduced reliance on diesel generators [238-246].


Domestic micro-processor IP: Dua asserted that most micro-processor IP is already Indian; the remaining bottleneck is volume manufacturing, which will improve as domestic demand grows [430-447].


Talent & research ecosystem (Professor Jayadeva)


Manpower shortage: He highlighted that many bright Indian students still pursue PhDs abroad because domestic career pathways are unclear, compounded by cultural pressures and gender bias [108-118][115-117][126-127].


Alternative pathways: Jayadeva advocated two-year MS-Research programmes, whose enrolments have tripled in recent years, as a way to broaden participation [267-270].


Industry-university collaboration: He warned that current IP-sharing arrangements are a “bone of contention” and called for joint labs (e.g., the Berkeley-Cadence model) to accelerate translation of ideas into products [307-340][324-328].


Chip-design strength: He emphasized India’s long-standing VLSI Design Tools & Technology Programme (since 1996), the chip-design hubs in Bangalore, Pune, Hyderabad and Noida, and the large memory-chip manufacturing investment in Gujarat [478-486][490-494].


Design vs fabrication: While design costs dominate, fabrication capacity is expanding, reducing reliance on imports [478-486].


Frontier-model building (Parth Sarthi)


Scaling laws: Sarthi explained the empirical “scaling laws” (e.g., OpenAI’s ten-fold compute rule) and the Chinchilla refinement that ties data and compute together, making AI progress an engineering problem [131-144].
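As a hedged illustration of the scaling-law arithmetic (not from the session itself): using the common approximations that training compute C ≈ 6·N·D FLOPs for N parameters and D tokens, and the Chinchilla-style optimum of roughly 20 tokens per parameter, a compute budget can be converted into a compute-optimal model size:

```python
import math

def chinchilla_optimal(compute_flops: float) -> tuple[float, float]:
    """Solve C = 6*N*D with the Chinchilla heuristic D = 20*N,
    giving N = sqrt(C/120) and D = 20*N (rough approximations)."""
    n_params = math.sqrt(compute_flops / 120)
    n_tokens = 20 * n_params
    return n_params, n_tokens

# Chinchilla's own budget (~5.9e23 FLOPs) recovers roughly 70B params / 1.4T tokens.
n, d = chinchilla_optimal(5.9e23)
print(f"params ~ {n:.2e}, tokens ~ {d:.2e}")
```

This kind of back-of-envelope is what makes frontier AI “an engineering problem”: given a GPU fleet and an electricity budget, the model and data sizes largely follow.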


GPU fleet: The India-mission has provisioned 38 000-50 000 GPUs, with plans to expand beyond this range [183-188][190-193].


Talent-driven efficiency: He argued that India can offset raw-GPU scarcity by leveraging its 1.4 billion-person talent pool to create domain-specific reinforcement-learning environments (e.g., agricultural-loan assessment in Tamil, legal-aid reasoning in Hindi) that run on ordinary CPUs and modest GPUs [155-162][171-180].


Multilingual models: The mission is developing voice-first, speech-reasoning models that exploit India’s multilingual strengths [183-188][190-193].


AI Kosh initiative: Sarthi highlighted AI Kosh, an India-mission effort to build indigenous datasets for training frontier models, ensuring that both data and models remain within India’s borders [402-405].


Audience Q&A


IoT & physical-layer data: Audience member Pradeep Subramaniam pointed out the lack of an end-to-end IoT and digital-twin layer for the grid, noting that without granular sensor data AI models cannot be trained effectively [351-360]. Prasad replied that while generation and transmission are world-class, distribution-level automation lags; the India Energy Stack committee is working on such use-cases [380-382].


Smart-meter & SCADA: The RDSS-funded smart-meter rollout (≈3 crore installed, ≈25 crore pending) and the development of indigenous SCADA systems were reiterated, with an assurance that all data will be stored domestically [367-384][400-401].


Vending-machine IoT concern: An audience member raised that low-cost physical assets (e.g., vending machines) lack IoT sensors, hampering AI adoption. The panel responded that government-enabled sensor platforms are under consideration within the India Energy Stack framework [351-360][380-382].


Semiconductor manufacturing: Prasad highlighted ongoing efforts to indigenise power-electronics (IGBT) aiming for 100% domestic content [469-475], while Dua emphasized that scaling volume manufacturing will follow domestic market growth [430-447].


Closing remarks (Suvrat Bhoosha) – Suvrat reiterated the need for an integrated ecosystem that couples energy, compute, talent and data sovereignty to build indigenous frontier models [190-193][496-503]. He thanked the participants, noted the presence of IndiaAI Mission delegates, and outlined concrete next steps: dynamic six-month transmission planning, accelerated smart-meter deployment, leveraging ANRF and RDSS funds for research-to-commercial pipelines, expanding GPU capacity beyond the current 38-50 k, exploring SMR-data-centre co-location, fostering industry-university research roadmaps, and scaling domestic semiconductor design and power-electronics production. While challenges remain (SMR timelines, physical prototyping infrastructure, and making PhD pathways financially attractive), the consensus across government, industry and academia suggests that coordinated policy and investment can realise India’s vision of a sovereign, AGI-enabled ecosystem.


Session transcript: Complete transcript of the session
Suvrat Bhoosha

Researchers, founders and policy makers. At Chariot, we are proud to be one of the companies mandated to build frontier models for India under the IndiaAI Mission to build sovereign frontier models for the country. But as we embark on this journey, we must recognize a fundamental truth: building true frontier intelligence from India is a monumental ecosystem play. We cannot simply import models and talent, run them on borrowed infrastructure, and call them our own. If we want to solve India-scale problems at population scale, we must own the power, the hardware, and the talent and the research that drives them. That is the thesis of today’s session, India’s Path to an AGI-Enabling Ecosystem, to bridge the gap between energy, infrastructure, and research.

We have brought together the absolute pioneers of this field. Before we begin, let me quickly share our roadmap for the next one hour. We will start by inviting each of our distinguished speakers to share opening remarks on their respective domains. After that, we will dive into the topic of the next one hour. We will then move into a panel discussion. And then finally we will open the floor for your questions. To guide through this we have assembled the absolute pioneers of the energy, compute and research pillars. Today we are joined by Shri Ghanshyam Prasad, Chairperson of the Central Electricity Authority; Mr. Tarun Dua, Founder and Managing Director of E2E Networks; and Professor Jayadeva, GSV Chair and former HOD of Electrical Engineering at IIT Delhi.

And finally my co-founder at Chariot, Mr. Parth Sarthi. To build this ecosystem from the ground up, starting with the very power that makes this revolution possible, energy. To speak on the sheer scale of this transition and to help us answer critical questions such as what we expect AI’s true energy demand in the country to be and how we are preparing and modelling our national grid to meet it, I would like to introduce a true veteran of the power sector, Shri Ghanshyam Prasad ji. Shri Ghanshyam Prasad ji presently holds the post of Chairperson of the Central Electricity Authority. With an illustrious career spanning over 35 years, his expertise covers generation, transmission and power market development. Having served as part of the G20 Energy Transition Working Group and as the first Executive Director of the BIMSTEC Energy Centre, his global perspective and visionary leadership are ensuring our grid is ready for the AI era.

Please join me in welcoming Shri Ghanshyam Prasad to the dais for his opening remarks.

Shri Ghanshyam Prasad

I think the speaker wanted me to speak on some of the key challenges that are likely to happen in the era when we are transiting from the present situation to an AI-enabled or AI-driven power system. We all know that the moment we talk about AI, which means that it is supposed to be data-intensive and it is going to be a power guzzler, and we are talking about the data centers which will try to enable it as we go along. In India, we are now transiting from smaller data centers. Earlier, we used to have a small data center. We have 10 megawatt, 50 megawatt data centers to now gigawatt-scale data centers at many places, particularly in Mumbai, Vizag, Chennai and all other places.

So far, we have a visibility of around 16 gigawatt of such data centers coming in across India. The challenges remain a few, particularly if I see from the perspective of serving a large load which earlier we thought was going to be almost like a constant load, but practically it is not. And if it is not, then how do we manage such a type of variable load? So far, we were struggling with only variable sources, that is solar, wind, etc. Now, we are going to have something from the load side as well: a large load getting integrated into the DISCOM system, which is also going to have a nature that is variable.

Second is the kind of reliability that it demands of the system. The reliability we talk about is N plus one plus one now, which means that the same data center will have to be supplied from two different sources, and they have to be slightly differently located as well. Second is, even if the supply fails, then it has to be backed up simultaneously by, I’ll say, two steps, that is DG sets, and each DG set will have to be backed up by another DG set. So we have almost four layers of security of supply. It’s definitely challenging for a country like India, which is now expanding and growing, to provide such kind of reliability, but still we are geared to meet this kind of a challenge.

Some of you who have not seen the Adani data centers, I’ll request you. It’s very close to Delhi. It’s in Noida, which is coming up. That is a 50 megawatt data center being built by Adani and to be operated by Google. 10 megawatt has already been commissioned and the rest, 40 megawatt, is in the pipeline. You can see the structure that is coming and the kind of challenges it is facing. But still, I must congratulate UP Discom, who have been able to provide this kind of reliable supply to that data center. But this is 50 megawatt. But very soon you will find a data center coming in the Mumbai area, which will be of a thousand megawatt, which may be even more than a thousand megawatt.

Thousand megawatt. But the moment I say it, it means the supply that will be required to this particular center will be at least 1.7 times. That’s the thumb rule. It may require, I think sir is saying, 2 times at least. But the data center which I visited has been designed for 1.7 times of the data center’s capacity. So the challenge is, first, how do I maintain a variable load? How do I meet the N plus 1 plus 1 criteria of supply? Some of you researchers who are sitting here probably must be aware about these kinds of things before we try to design this kind of a thing. What is furthermore going to be more challenging is because these data centers are also planning to go green.

That means had they taken a mixed supply probably the challenge of DISCOM would have been slightly lesser. But if you want to classify yourself exactly green data centers, then that means I need to ensure you that only green power flows into your data center, which means a combination of solar, wind, battery, hydro, hydro pump storage, or any such type of a combination, which we’ll be able to ensure to you. And that means I need to ensure a transmission line from such sources to your place so that at least there is no interruptions in the supply of green power being provided to you. But let me assure you that India is geared up for that kind of a challenge because we have started the journey of energy transition somewhere in 2011 or 2010, wherein we started.

We started with a meager figure of somewhere around 2 gigawatt. Now we are more than 50% in terms of renewables in the country, surpassing 250 gigawatt, which has the majority share coming from solar and then from wind, and storage is now kicking into the system. Last year, we surpassed 30 gigawatt in a single year. This year, in just 10 months, starting from April till January, we have already crossed 40,000 megawatt, which means that probably in this particular year, we will have more than 50 gigawatt coming in a single year. So even if the data centers or the AI-driven systems demand green power, I think the country is geared up to that kind of a challenge. Further, what we are suggesting to the data centers is, please try to have diversity.

Diversity means don’t have it at a single location. Try to be as close to RE-rich locations as possible, slightly away from the main town, and at diversified locations. So far we have two landing points in the country: Mumbai and Chennai. But we are trying to diversify that as well, so we can have multiple landing points in the country, so that the data centers can come at multiple locations and so that at least the challenges of the DISCOMs will get diversified. Coming back to the further stability, and since I have been asked for the international scenario as well, India is also connected to Nepal, Bhutan, Bangladesh, and to some extent to Myanmar. And we are also promoting to very soon connect with UAE and Saudi Arabia and Singapore.

And even Sri Lanka. So the moment we are going towards the other countries, that means we will try to have both: not only power, but we’ll try to have the communication network as well. And there are situations emerging that maybe these countries will like to have their data centers in India and get supported through that kind of system. But all these, definitely, as we go along, will require a huge amount of balancing power and storage capacity. So right now we are depending on two major sources: either it is hydro pump storage, or it is coming from the battery energy storage systems. Recently we launched a report which gives us confidence that we’ll have somewhere around 100 gigawatt of hydro pump storage coming within the next 10 years, which is going to be a very good support to meeting the 24-hour supply to these data centers.

So we’re going to have to wait and see what happens. Similarly, because we have to cut down our carbon footprint, we are also trying to have a roadmap for 100 gigawatt of nuclear, which is targeted to 2047. But there is a visibility even as of now that we go in a fleet mode, and we will be trying to achieve somewhere around 22 gigawatt by 2032 or 2034, and then moving up further with more technology kicking in and more expertise being gained, particularly from the private sector and all other sectors. So holistically, if I see, we have a huge amount of challenges, but to meet those kinds of challenges we have a good mix of resources in the country, and the country is geared up to meet them. The country is also able to make the transmission lines in record time as compared to anywhere in the world. If you see, we are able to provide connectivity in a 24 to 36 month time frame in the country; as compared to that, if you take the US etc., they take around 10 years to grant connectivity. That’s the kind of waiting list that they have. If you take even European countries, they also take more than 5 years for building the transmission lines. So at the end, I will only say that we are totally geared up for any kind of innovation. All the youngsters are welcome from across the world to set up their systems here, and I can assure you that the country is fully equipped and fully geared up to support you. Thank you so much. [Applause]

Suvrat Bhoosha

Thank you sir for setting the stage with those vital insights on our energy readiness. I’ll just quickly ask the panel to get together for a group photograph. [Applause] Thank you. Thank you. Thank you. By delivering enterprise-grade cloud infrastructure at significantly lower costs, he is democratizing AI and empowering over 10,000 innovators with advanced H100, H200, and B200 GPU infrastructure. His work is building the foundational infrastructure that enables our sovereign AI ambitions. I would now like to invite Tarun to the dais for his opening

Tarun Dua

Yeah, thanks, Suvrat. So, like, thank you for this opportunity to be a part of this august panel. So, building infrastructure is something we have been doing since 2009. So, 2009, when we began our journey as E2E Networks, like, most of the… So, there is an incident even 10 years before that. So, when… Yeah. We had a startup plan, like I think somewhere around the 2006 era or something like that, or even before that. So, we were discussing, three or four friends who were working in the IT industry, oh, we are going to make a website, and this is what the website is going to do, and this is how the website will make money. So, the fourth guy asked the question, but who is paying for building the website?

So, the idea was that, like, it is always someone in the West who is outsourcing the development of the website to you guys, and you are building the website for them, not for India. So, that was the era, once upon a time in India, where we used to do everything for the world, not for ourselves. So, the second stage was when we started doing things for ourselves. Now, the third stage is what we are doing today as a country. We are saying that, like, not only are we going to do things for ourselves, we are going to do things for the world. And we become the innovation hub and the innovation capital of building cloud infrastructure for the world.

So with that, I would like to hand the stage back to Suvrat.

Suvrat Bhoosha

Thank you so much, Tarun, for sharing how E2E Networks got started and the vision behind it. But raw compute and energy are just untapped potential without human ingenuity and the mathematical rigor to harness them. It is my deep honor to introduce Professor Jayadeva, the GSV Chair Professor and former Head of the Department of Electrical Engineering at IIT Delhi, and an alumnus of the same department. Professor Jayadeva is a trailblazer whose internationally recognized work bridges theoretical mathematics and practical AI. His group was amongst the first to fabricate an SVM-based A/D converter on chip. His recent work on minimal complexity machines provides astounding model size reductions of up to 300 times. His contributions to optimization and machine learning are vital to building highly efficient indigenous models.

Sir, we would look forward to your opening thoughts on how India can contribute to the research and talent pipeline for building artificial intelligence models from the country.

Professor Jayadeva

…VLSI, and as it turns out there are a host of issues that need, if you ask me, serious discussion and brainstorming. Primary among them is the issue of manpower. If you remember Silicon Valley at one time, the word IC used to jokingly be said to stand for Indians and Chinese. The intellectual innovation that built Silicon Valley, and most of the entities there that are known today, came out of Indian universities, out of the IITs, a few decades back. The question is: what would it take to build that same kind of ecosystem here? You need a critical mass of very smart researchers doing work within the country.

And we have to supply the reasons for them wanting to do that. First amongst those is: what is the career connect for a student wanting to pursue a PhD, or any other research degree for that matter, at a university here? I should put in a disclaimer before I continue: all of my comments are my own personal comments and not representative of IIT Delhi. A lot of students come to us for recommendation letters, and in most cases their first choice for a research degree is a university outside the country. That has to change, and it is changing, but slowly. What are the reasons? Think of a student who is excited by research and wants to do it at a university within the country: what is the career after that? At a university research lab in the US or elsewhere, that connect is directly visible; here it is missing in most places. Not because there are no industry-driven projects, but because the nature of those projects is different. In many of the successful examples that I can discuss in the panel itself, the university has embedded industry researchers alongside other students and researchers, and those working for their PhD are already, for example, employees working in the university environment. The scalability of research is very difficult within industry: it is expensive to explore ideas, because out of 10 or even 50 ideas you explore, maybe one becomes successful and ends up returning revenue to you. It is far cheaper to do that exploration within the university environment. We have to find models that allow universities and industry to work together, but also ways so that the biggest bottleneck, IPR sharing, which is really the bone of contention and the key point in most MOUs that you sign, is handled more seamlessly and in a simplified fashion.

The other difficulty is, of course, the way the entire ecosystem is configured. There is a deterrent from many Indian parents to their children: why don't you finish your current degree first, join a job, and worry about a higher degree or PhD later? The difficulty with research is that it is best done when people are in their prime, when they are overflowing with new ideas. Because once they're in a job, they get saddled with other responsibilities, familial and otherwise.

And it never ends up being the same story a few years down the line. This particularly hits women candidates harder, because, although I don't want to make it a generic statement, there is pressure on many of them from their parents to get settled early. So as a consequence we find fewer women in engineering research, particularly, as compared to male candidates. And finally, the incentive, in terms of what people get if they join a research career and eventually join industry or elsewhere, needs to be made far sharper and far clearer today.

If a student joins industry today after their undergraduate degree and works there for a while, many of them continue doing research in the industrial setting. But as I said, exploration is costly within industry itself. So unless the student has a clear-cut motivation to do outstanding research early on, so that industry or whatever career option offers them a significant incentive to do that, I think the scalability will be missing. So I'll stop; I have probably taken more time than I should have, but we can discuss.

Suvrat Bhoosha

Thank you, Professor. I think your vision for preparing the next generation of researchers, and what it takes to incentivize them, is exactly what this ecosystem needs to thrive. Finally, I'd like to introduce my co-founder at Chariot, Mr. Parth Sarthi. Parth did his undergraduate and master's degrees in computer science, went to Stanford for his PhD in engineering, and more recently was working on the DeepThink project with the Google Gemini DeepMind team. He was the inventor of RAPTOR, a state-of-the-art technique in retrieval-augmented generation on which many retrieval-augmented generation pipelines today are based. I'd love for Parth to speak on what it takes to build sovereign frontier models and the differences he has seen between building these models in the West versus building them from India.

Thank you.

Parth Sarthi

Thank you. India, under the IndiaAI Mission, has 38,000 GPUs, I think scaling to more than 50,000, which is so much more than what we had a year or two ago, thanks to the mission, and I'm sure the scaling up will continue. But if you look at the West, there are companies with many more GPUs, with deals for many Blackwell and Rubin chips coming in. I was at Google DeepMind, where I worked on Gemini Deep Think, the reasoning capabilities, at one of the most resource-rich labs in the world, and this number of GPUs is going to keep going up. But why does any of this matter? Why is there a GPU race at all? Why can't we just write better algorithms and make better models? The answer, in my opinion, is one of the most important empirical discoveries in the history of computer science: scaling laws. The GPT papers were impressive: GPT-2 could write paragraphs, GPT-3 could write essays; they were really good work. But the GPT papers were the tinder.

The match, the thing that actually ignited this whole AI revolution and the entire industry, was scaling laws. In January of 2020, Jared Kaplan and some colleagues at OpenAI, including Dario Amodei, who went on to start Anthropic, published a paper called Scaling Laws for Neural Language Models, and what they found was really simple. If you plot a neural network's loss, its error rate, how wrong it is, against the amount of compute used to train it, on a log-log scale you basically get a straight line. A very clean, smooth power law. A straight line that spans 7 orders of magnitude. What that means in really simple language is: every time you 10x your compute, your model gets measurably, predictably better.

Not randomly, not sometimes: every single time. The exponent they found was roughly 0.07, which means for every doubling of the parameters you see the loss drop by about 5%. This sounds small, but across many, many doublings these gains compound enormously. GPT-2 to GPT-3 was a 100x increase, GPT-3 to GPT-4 was another 100x, and each jump produced a leap in capability. And then DeepMind's Chinchilla paper, which refined this, said you need to scale your model size and data roughly equally. The reason this was so consequential is that it turned intelligence into an engineering problem, not a science problem. You don't really need a breakthrough.
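The arithmetic of the power law quoted here can be sketched in a few lines. The constants below are illustrative placeholders, not fitted values from the paper; only the rough 0.07 exponent comes from the talk:

```python
import math

def power_law_loss(n_params, a=1.0, alpha=0.07):
    """Illustrative scaling law L(N) = a * N**(-alpha).

    `a` is an arbitrary placeholder; alpha=0.07 is the rough exponent
    quoted on stage, not an exact fit from the Kaplan et al. paper.
    """
    return a * n_params ** (-alpha)

# Doubling the parameter count multiplies the loss by 2**(-0.07) ~ 0.953,
# i.e. roughly a 5% drop per doubling, no matter where you start.
ratio = power_law_loss(2e9) / power_law_loss(1e9)
print(f"loss ratio per doubling: {ratio:.3f}")  # 0.953

# On a log-log plot the relationship is a straight line with slope -alpha:
# log L = log a - alpha * log N.
slope = (math.log(power_law_loss(1e12)) - math.log(power_law_loss(1e9))) / (
    math.log(1e12) - math.log(1e9)
)
print(f"log-log slope: {slope:.3f}")  # -0.070
```

The point of the sketch is the predictability: the per-doubling improvement is the same at a billion parameters as at a trillion, which is what makes scaling a budgeting exercise rather than a research bet.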

You need more GPUs, more data, more electricity. You need money, and whoever has the most money wins. So you could call the race right there: if the scaling laws hold, and they have held for five orders of magnitude, then it is a spending competition. And this was the dark picture. A lot of people would ask me, against this kind of backdrop, why are you leaving DeepMind and coming back to India to build? And the reason is this. About a year and a half ago, something changed: the reasoning revolution hit. There were the o1 and o3 models, and they showed there was a difference.

There was a different way to actually make these models smarter. This word "reasoning" gets thrown around a lot, so let me explain it in simple language. In the old paradigm, you would pre-train these models by making them bigger and training them on more data. The models would see trillions of tokens of text, and at inference time they would just generate output really fast, one token at a time, with no ability to correct their mistakes. These reasoning models started working differently. You could give one a problem, a math problem, a coding challenge, a logic puzzle, and let it think.

It would generate a long chain of thought: think for a bit, try an approach, maybe backtrack to a different approach, and eventually reach a final answer. The result was a new scaling law, where you could spend more RL training compute, and now we're even seeing that this new type of RL compute is exceeding the amount of compute spent during pre-training. So this was a reset, and let me explain why. If you look at RL training, the majority of your compute is not actually in gradient updates, in the training itself; it's in the model trying different things out in different rollouts, and this is basically inference. And this doesn't really need to happen on your top-of-the-line 100,000 GPUs in one building with NVLink and InfiniBand. This RL inference, the sampling, can be asynchronous: you can generate rollouts on one set of machines and collect them elsewhere, make it distributed, run it on older GPUs across multiple locations, and we now have hundreds of techniques coming out every day to make this work. And just doing RL is one step. The other part, and I think the main reason India will succeed, is environments. RL environments are where the majority of the training happens: a math environment has math problems, a coding environment has coding problems, and the model tries, gets feedback, and improves.

And the key observation is that these environments scale with humans and CPUs, not necessarily GPUs. GPUs are important, but they're not the most important thing. Building a math environment requires mathematicians. Building a coding environment requires software engineers. Building a medical environment could require doctors defining clinical scenarios. This is human expertise: it scales with people and ordinary compute, which we have a lot of in this country. So this is the bet I made. India has 1.4 billion people. We have domain experts in every field: medicine, law, agriculture, finance, education. We can work in so many languages. We can build environments for problems that a lot of labs in the West don't even know exist, like agricultural loan assessment in Tamil, legal aid reasoning in Hindi, and so on.

These are problems that affect hundreds of millions of people, and we can build RL environments for them that don't exist anywhere else in the world. And with the IndiaAI Mission grant, we have a lot of compute to actually build at this frontier, if we're smarter about these environments. If you look at India, India is a voice-first country, and that's why at Chariot we're building a voice-native speech reasoning model: reasoning over speech, for all the reasons I just described, RL-trained, environment-driven, and scaled at inference. So the race to AGI has begun. We have the right environments, the right algorithms, the right focus, and this distributed setup.
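The environment pattern described here, problems plus a verifiable checker, can be sketched minimally. Everything below is a hypothetical illustration of the general idea, not Chariot's or any lab's actual API:

```python
import random

class MathEnv:
    """Toy RL environment with verifiable rewards: arithmetic problems.

    The pattern from the talk: an environment is just a source of problems
    plus a reward checker, so building it scales with domain experts and
    ordinary CPUs rather than GPUs.
    """

    def __init__(self, seed=0):
        self.rng = random.Random(seed)

    def sample_problem(self):
        a, b = self.rng.randint(1, 99), self.rng.randint(1, 99)
        return {"prompt": f"What is {a} + {b}?", "answer": a + b}

    def reward(self, problem, model_output):
        # Verifiable feedback: 1.0 if the final answer is correct, else 0.0.
        try:
            return 1.0 if int(model_output.strip()) == problem["answer"] else 0.0
        except ValueError:
            return 0.0

def collect_rollouts(env, policy, n=4):
    """Generate rollouts; this loop can run distributed on ordinary machines,
    with only the policy call itself needing accelerator hardware."""
    rollouts = []
    for _ in range(n):
        p = env.sample_problem()
        out = policy(p["prompt"])  # stand-in for model inference
        rollouts.append((p["prompt"], out, env.reward(p, out)))
    return rollouts

# A trivial stand-in "policy" that does the arithmetic, to exercise the loop.
def oracle_policy(prompt):
    a, b = [int(t) for t in prompt.replace("?", "").split() if t.isdigit()]
    return str(a + b)

for prompt, out, r in collect_rollouts(MathEnv(), oracle_policy):
    print(prompt, "->", out, "reward:", r)
```

A legal-aid or agricultural-loan environment would follow the same shape: the hard, valuable part is the expert-written problems and the checker, which is exactly where local domain knowledge comes in.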

Now, with the support of a mission that's already scaling up so many GPUs, I think we can go ahead and…

Suvrat Bhoosha

Thank you so much, Parth, for sharing what you think is the roadmap for building intelligence from India. With our distinguished speakers now assembled, let's dive straight into the panel discussion. I have a set of questions prepared for all our panelists, but please feel free to interrupt, and if somebody can go around with a mic for audience questions, please do so. I'll ask my first question to Tarun. We've all spoken about large GPU clusters and how they're growing in size. I would love to understand your perspective on where India's compute requirements are today, where you forecast them to be, and where you think the demand is coming from.

Tarun Dua

Sure. So, a number of things. If we just look at the compute requirements of, say, the top 1,500 or 2,000 or 2,500 or even 5,000 organisations: are there enough teams that can utilise, say, 16 to 128 GPUs? Looking at just the top 1,000 organisations, do they need at least 128 new GPUs every year? I think the answer is most likely yes. More likely, the answer is that initially we need 128 GPUs and eventually each organisation is going to use at least 1,000 GPUs, with multiple teams trying to solve multiple problems. And it's not just that GPUs are used only for training and inference; they are also used for data cataloguing and for many different types of inference that are available straight out of the box.

So net-net, the compute environment required by each of these organisations is going to be of the size of at least 1,024 GPUs, and that is representative of the mid segment, the SMEs, higher education and research; there are so many different types of organisations apart from for-profit companies. So net-net, if we were just to look at, say, a thousand organisations wanting 128 GPUs each, you're looking at India needing at least 128,000 GPUs, and we are not there yet. Which means there is a journey ahead of us in terms of building the infrastructure, and having the faith and confidence that, yes, India may be lagging maybe 18 months behind the rest of the world, but that lag will keep coming down, and at some point we leapfrog.

Like we did with 4G and 5G. When that leapfrog happens, those compute requirements will explode even further. So I think it is safe to say that India, a country with 20% of the world's population and currently the capacity to process about 3% of the world's data, will sometime in the future leapfrog to processing not 20 but maybe 40-50% of the world's data, by becoming the data center and compute capital of the world. Those are my thoughts around that.
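Written out, the back-of-envelope arithmetic behind the 128,000-GPU figure is simply the stage numbers multiplied together; all inputs are the round figures quoted in the discussion, not measured data:

```python
# Round numbers quoted on stage, not measured data.
orgs = 1_000          # organisations that could each run a modest cluster
gpus_per_org = 128    # entry-level cluster size per organisation

baseline_demand = orgs * gpus_per_org
print(f"baseline demand: {baseline_demand:,} GPUs")  # 128,000

# The longer-term claim: each organisation grows toward ~1,024 GPUs
# (multiple teams doing training, inference, and data cataloguing).
mature_demand = orgs * 1_024
print(f"mature demand: {mature_demand:,} GPUs")  # 1,024,000
```

The gap between today's deployed capacity and even the baseline figure is the "journey ahead" the speaker refers to.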

Suvrat Bhoosha

No, absolutely. Thank you so much, Tarun, for sharing that. So I think at a bare minimum what you're saying is that the 128,000 GPU infrastructure that we…

Tarun Dua

That's today's requirement in India alone, and we don't serve India alone. When we build compute infrastructure, we serve the whole world, because this is not a super latency-sensitive environment like a website or a CDN. Reasoning models think, and when you add another 200 milliseconds to the thinking process, it does not really add a whole lot of latency to what people are experiencing. So in that sense, we can actually serve compute for the world, which means we can build a lot more than what India alone needs.

Suvrat Bhoosha

Makes sense. And that's an excellent segue to my next question, which I'll direct to Shri Ghanshyam Prasad ji. Sir, when we talk about the kind of compute infrastructure the country needs, how do you forecast the energy consumption of modern-day data centers relative to our overall energy requirements? And how is the country prepared to meet that over a 12-, 24-, 36-month time horizon?

Shri Ghanshyam Prasad

See, as I mentioned in my opening remarks, we have already factored in demand equivalent to 16 gigawatts, which we are projecting for data centers. But the philosophy of planning in India has changed. We are trying to upgrade our planning systems every year; it has been made dynamic. Earlier you used to hear of things like five-year plans; those days have gone. We are upgrading our transmission plans every six months, and the resource adequacy plans are upgraded every year. When I was speaking in Singapore, where US and other regulators were present, they asked how we are able to manage this in six months and one year.

I said it is the computing environment that has come up in India; we have really caught up and are able to do this, and that has helped. Whatever error we make in the planning process or the projections, we are able to correct in no time, which lets us do a course correction immediately. Second is the growing demand the country has right now, which is phenomenal. I would say it is much, much higher than in other countries: most European countries are either stagnant or growing at 1%, 1.5% or 2% at the most, while we are growing at 7 to 8%, and in some years we have even grown at around 10%. Meeting that kind of demand, unless you are resilient and able to act in real time, you will not be able to sustain it, and given the further expectations of customers, you need to make that kind of jump.

Suvrat Bhoosha

And sir, one follow-up question, actually to both you and Tarun. These modern data centers are hitting quite high energy densities, so one thing we hear is: should data centers move close to where the energy generation is happening? When we talk about the new upcoming data center hubs, you mentioned Mumbai as one of the hubs being created, but in my naive opinion a lot of energy production is happening in states like Rajasthan. How do you foresee this: do you see data centers moving close to the energy hubs, and would you be interested in building centers close to these regions? I would love to hear both of your perspectives.

Tarun Dua

So I am really looking forward to SMR nuclear reactors being made available as quickly as possible in data center campuses. And it's not that I'm shilling for nuclear power for no reason: nuclear power is also very reliable, so you can run it continuously for 8 or 10 years. There are several advantages. You are not transporting power over the grid, so you don't have to pay the transmission fee, which is very reasonable in India, but every cost saved can be passed on to the end customer. Similarly, you don't need diesel generators on site; you can just have a slightly larger battery energy storage system along with the nuclear plant, and you can build the data centers of the future. That is something I'm really looking forward to, but I think it could be three to five years away. So my thought is: wherever you are putting data centers, you can put the power there, as long as sufficient land is available, because nuclear power requires some free land around the facility. Another advantage of nuclear power is that once you have set up the land for it, you can modularly increase the size: say you start with 220 megawatts, then you can add in chunks of 220 megawatts, which is the most dominant design of the SMR, and of the BSR designs as well. So that's what I think about it.

Shri Ghanshyam Prasad

I think what Tarun said is very right, but the visibility I see in SMRs may not be 3 to 4 years; it may be a slightly longer period, I am not very sure about it. I have been talking to most of the people who are going to be in the nuclear business, because so far we have only NPCIL, the Nuclear Power Corporation of India Limited, and the others are also slightly apprehensive that it may take somewhat longer. But again, as he rightly said, you will require a containment zone, and that containment zone varies anywhere between 1 kilometer and 5 kilometers, depending on the capacity you are going to have in the nuclear space.

That means you will again be moving away from the main crowded places, because you require a containment zone where no habitations are allowed. Second, you rightly mentioned that we are saying you should go as close to the resource center as possible, because if you really need green power, you should be that close: if your source is somewhere in Rajasthan or Gujarat, we require a huge amount of transmission lines, and we are trying to optimize the transmission system itself, so let it be at the generation place. And the good thing for India is that we have 8 to 9 states which are very rich in renewables: Gujarat, Rajasthan, Maharashtra, Karnataka, Telangana, Andhra and so on. So you have multiple choices; it's not that you have only one place to put it. Similarly, the IT hubs getting created are also scattered around the country. And the last point: we are trying to have multiple landing points. The moment you have multiple landings, for example for Singapore, Vizag or Paradeep or Gopalpur could be another choice, so we are looking for alternatives, and Singapore is likely to be connected with Vizag, and similarly on the western side as well. You need to have that diversity, and that is how you will be able to successfully meet your demand.

Suvrat Bhoosha

No, makes sense. Thank you so much for sharing those points of view. I will move on, sir, to Professor Jayadeva. When we talk about the talent pipeline for the country, we would love to hear your perspective on the undergraduate readiness of our workforce for training and deploying these AI workloads, and your point of view on so many people in our country moving abroad for higher education or better work opportunities, compared to the PhD education system in the country. What would your perspective be on empowering more of our students to pursue PhD and grad school opportunities in India versus at other institutions around the world?

Professor Jayadeva

…were actually employees of a firm, working full time in the department. This company had stationed them in the department and said: work for your PhD, but you have to work on areas or problems that are relevant to us. They were, of course, discussing with many other students in the department who were also in that lab, and there were professors who were part of that team. That kind of success story is, I would say, rare, and if one finds a way to replicate those examples in numbers, I think the story will change dramatically. It takes a leap of faith: most HR managers are averse to letting their employees work full time at a university.

Well, if you're working there, you're not on site, and therefore you're on some kind of leave. In this case, we created a way so that they could log on over VPN and work as if they were on site, so it was effectively on-site for them. The other problem, of course, is that people have to join research careers early; they have to take that plunge early on. That's when they are most productive, when they can churn out new ideas quickly. And while the government is doing a great deal to make that happen, I think we need more examples from industry trying to bridge that gap. If that happens, in my view, the story will dramatically change.

Suvrat Bhoosha

How do students today look at a PhD as a career path right out of college, compared with the other opportunities they may have?

Professor Jayadeva

So the duration of the PhD is the primary deterrent: it's 5 years. And there's a social deterrent as well. I have heard from students that when a PhD student gets back home, some neighbor will make a comment: you are still studying, is it? Still at college, still at university, haven't got out, aren't in a job? That mindset will change, in my view, only if people get paid more; if they are actually employees, that changes things. Fellowships need to be far more lucrative, and that can only happen with industry help, in my view. But there is a via media, a path in between: we have something called MS (Research), which is a research degree that takes about 2 years. Those numbers have actually tripled in the last 3 to 4 years. The number of PhD enrollments, I would say, is now static; it dropped after COVID. But in the MS (Research) degree, the numbers have tripled in the last 3 years, I am saying for our department. So I think we have to really brainstorm; that dialogue hasn't happened in sufficient measure for me to be able to answer your question.


Shri Ghanshyam Prasad

If I may supplement: I think the Professor is describing the practical case, but the government is now thinking in a slightly different manner. You must have heard about ANRF, the Anusandhan National Research Foundation, which has been created with an outlay of 1 lakh crore rupees. It is going to operate across the country, across all segments and sectors, almost under the Principal Scientific Adviser of the country; very recently we had a meeting with him. And very recently you must have seen that we have got something like 20,000 crores under CCUS, carbon capture, utilisation and storage. So these are some of the projects which are now being identified.

What are the gaps India has in terms of technology that other countries have, and can we surpass them? With that objective this fund has been created, and it is likely that industry and this kind of organisation will participate. Even in the power sector we are thinking: can we have a university, or maybe a cluster of such institutions? One has already been experimented with in Gandhinagar, which is doing a good job, so that all those people who are trying to do something, to innovate, can be supported through some kind of fund, and then industry takes over. So, the gap that used to be there: a PhD student does a paper, a professor does a paper, even his promotion is linked to paper publication.

That kind of situation will have to be modified: whatever PhD or research you do has to be taken forward from there, so that, as we are thinking of it, it goes from concept to commercialization. You have to take it to that level, and then it flows. Very recently, only about two months back, we had a good competition of AI use cases in the power sector, and we have identified a few companies which really have those kinds of ideas, and we have already assigned them some tasks on a nomination basis.

So that is the kind of change you will find: a change in the mindset of the government, and an effort to support these kinds of activities.

Tarun Dua

Sir, I would like to add something here. These are great ideas, that research should be promoted and supported in India. Now, academia does a very good job of identifying pure problems which need to be solved and which advance human knowledge. We in industry see build-versus-buy decisions almost every day, and we also look at the roadmaps of what needs to be done and in what amount of time. To give you an example, take things like optoelectronic networks and the co-packaging of optics with electronics. Those kinds of problems are very well known: to go from 100 Gbps to 1.6 Tbps, there is a certain time frame in which it has to be done.

And at a certain volume of production. Which means that resources have to be deployed in a manner that produces goal-directed research in a certain time frame. What is considered a good outcome is something we in industry can help define, but most of the time we don't have the kind of money to deploy behind that goal-directed research, and we do not always have our own use cases at a scale that could support that volume of research. So that's my suggestion: task people like us, who make build-versus-buy decisions, to at least create the roadmaps, saying this would be good to have if we can do it in this time frame.

If we don't do it in this time frame, somebody else in the world will go and do it. That is something we can help with.

Shri Ghanshyam Prasad

Yeah, absolutely. And this is the basic idea with which we are trying to do this. I'll give you one example. We are facing a huge challenge in research right now in the country: in one area we have only two companies in the world, and they are really taking us for a ride in terms of supply chain, prices, and so on. So we said, nothing doing, can we have our own industry coming up in India? We have now lined up L&T and Power Grid Corporation of India, both of them contributing 300 crores each, to go after those gaps. We have already identified something like 76 elements in the power sector which need immediate attention, so we need to go aggressively after this kind of thing. Similarly in other sectors as well: in IT, MeitY is trying to do this, and the same in mining and critical mineral areas. You will find all the ministries have now woken up to take up this kind of challenge.

Thank you.

Professor Jayadeva

I just wanted to react in a different way to some of these comments. It is not always the case that research is divorced from application. A lot of research is applied; a lot of the research that happens within universities, the IITs and so on, is actually with industry. But more often than not, the industry-funded projects tend to be at arm's length — it's like saying, 'Look, here is the problem; see if you can find a solution.' Sir, we need both types of research; if we only do goal-directed work, we will never innovate really well. I just wanted to say it differently. The point I was trying to emphasize is not about short term versus medium term versus long term.

You need a mix of all three, and certainly new ideas come forth at all possible levels. But the difference an idea eventually makes — well, ideas don't make money; companies make money, organizations make money. So the key is translation. It is difficult to create an ecosystem within a university that is efficient at translation; on the industry side, translation is much simpler because they are geared up for production. As an example, if you ask a student to write production-level code, it's not going to happen; it's not feasible. And that is why one has to rethink the nature of this partnership. It's not about funding; it's about trying to work on these problems together. I'll give you an example of what happens, or used to happen, at the Berkeley Cadence labs: Cadence set up a lab at Berkeley, and they had researchers from both sides working together in the same place.

Now, a new idea might come along — something groundbreaking that will take time to scale — and you want to look at that separately. There are problems that would give an edge to a company today and need to be solved in the next six months; those are also problems people need to work on. And sometimes things come simply out of the discussion: something a company has been doing for the last ten years turns out to have a far more efficient approach that could be deployed in the next six months. So all three happen. Right now, I would say the dialogue is at arm's length.

And if that changes, I would say funding is less the key than really making that dialogue happen, because when that starts happening, you will also see excited students saying, 'Look, I know that I will find a career here.' We run such a program at IIT Delhi — the VLSI Design Tools and Technology Programme, started in 1996 — that doesn't take any money from the government; it is fully sponsored by industry or by projects. To this day, all the students are sponsored either by projects or by industry, and many of them have led to patents and other outcomes. Two of the gold medalists of that program decided to forego all their placement offers.

They had three or four offers in hand, off campus and on campus, and these gold medalists decided to stay back and continue with a PhD, because they realized all these companies want them — they are really good at what they do, they will get a job — and they wanted to see that chip come out. They wanted to see it developed, to test it out and see the outcome. That level of excitement really happens when these are live projects with involvement from industry, or whoever else — it could even be a government entity or a public sector unit — but you need the end users enmeshed with the problem discussion and solutions.

Suvrat Bhoosha

No, absolutely. Thank you so much for sharing that, sir, and everybody. I'd like to invite Parth to share a personal story — something sir just said about people who move to the US and don't often come back — and also, on the same note, that while you were studying you decided to take a break and join Google DeepMind part time. So what was that thought process like? It was very similar to what sir described — the passion of working on a live project — and what were your reasons for moving back?

Parth Sarthi

Thanks, Suvrat. I think the thought process there was: I was doing my undergrad and my masters, and at some point I wanted to go on and do a PhD and perhaps be in academia — that was definitely one of the considerations, because I got into research pretty early, even in my undergrad career. But there was the excitement around AI, and seeing that a lot of PhDs and professors at my university were going on to build companies — showing that this research, which has been done for so many years, actually starting in the 80s, is now paying off dividends and leading to this new technological revolution. As the professor said, a lot of these ideas take a while to materialize, and we were seeing that materialization happen in the Bay Area there and then. At that time, AI required a lot of compute, and the big industrial labs had that compute; universities had some of it, but not at that scale. And scaling laws were happening, so you wanted that scale — that was my reason to be at DeepMind, to see that scale. But really, we need that same infrastructure in India, and we need the same research and people in India, and that is what drove me back here. Now, with the IndiaAI Mission support, we have similar compute in India. And while the scaling laws show you can scale up, there are new innovations that India needs, and there are so many smart people here. Now that we have the compute and we have the people, it just made a lot of sense for me to be back here and build the same thing from India.
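The scaling laws Parth refers to can be illustrated with a toy sketch: scaling-law studies fit power-law curves of the form L(C) = a · C^(−b) to observed (compute, loss) pairs and extrapolate. The data points and coefficients below are invented for illustration, not from any real model:

```python
import math

# Hypothetical observations: training compute (arbitrary units) vs. eval loss.
# These four points were generated from an assumed power law L(C) = 4 * C**-0.1.
observations = [(1.0, 4.00), (10.0, 3.17), (100.0, 2.52), (1000.0, 2.00)]

# Fit log L = log a - b * log C by ordinary least squares on the log-log points.
xs = [math.log(c) for c, _ in observations]
ys = [math.log(l) for _, l in observations]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
b = -sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
a = math.exp(my + b * mx)

# Extrapolate: predicted loss at 10x more compute than the largest observed run.
pred = a * (10000.0 ** -b)
print(f"fitted L(C) = {a:.2f} * C^(-{b:.3f}); predicted loss at C=10000: {pred:.2f}")
```

The point of the exercise is the one Parth makes: once the curve is fitted, more compute buys a predictable improvement, which is why access to scale mattered so much.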

Suvrat Bhoosha

No, thanks for sharing that, Parth. So I'll open the floor for questions. There are mics here if people in the audience want to ask.

Audience

My name is Pradeep Subramaniam. I come from the physical world — R&D, technology, and so on — though I have recently been building an agentic AI. My question is to Ghanshyam-ji and to Parth. If you build any infrastructure, the physical layer — the IoT sensors, the part that is collecting data — is the most important part. What I was finding in this whole discussion was data centers and infrastructure, but nobody talked about the IoT part, the physical collection of data. For example, in the electricity plants you have — whether at power generation, distribution, or transmission — there are hardly any IoT-based systems, only legacy SCADA systems, right?

They are not connected end to end in terms of building a digital twin of this electric system. We built something like this for the Haryana government, but it hasn't scaled to the full extent. So where is India's role in building the ecosystem for the physical layer, which can generate an enormous amount of data and help build this AGI? Infrastructure is good, but how do we create this? China does it — China has use cases which are full of physical layers. In India we tend to build things like UPI, which did not require much of a physical layer, so we could build it easily. I think the catch is building the physical layer.

What are we doing for that — for example, in your area, sir?

Shri Ghanshyam Prasad

Yeah, thank you for raising this particular concern of the industry. This is definitely an issue, and let me be honest about it. We have very good infrastructure on the generation side, through transmission, and up to the load dispatch centers — up to that point we are absolutely at par with the world. But when it comes to the actual connection and link with the customers — that is, distribution and the customer link — we are probably still lagging behind. That is the physical and practical situation we are in at present, and you must all be hearing about the issues of the distribution licensees and their financial viability. Until they are financially viable, they will probably not move into the area of automation.

Audience

My question is why is the government not supporting to help create this data?

Shri Ghanshyam Prasad

I am coming to that. So we realized this — it's not that the government didn't realize that this particular segment of the power sector's value chain requires some kind of support. We had been supporting this segment earlier as well, and most recently the RDSS program has kicked in. This is a reform-linked program: if you are able to achieve certain goals, you will be given the money; otherwise you will not. And it supports two very important things. One is the infrastructure required for ensuring reliability of supply, and the second is the automation systems. That means we need smart meters — unless you have communicable meters, you will not be able to build that kind of smartness into the entire value chain. As I said, we had this missing link: we had come up to the state load dispatch centers, but to go from the state load dispatch centers and connect with the customers, you needed this kind of smartness. That is why we introduced smart meters, and the rollout has happened — so far more than around 3 crore meters have already been installed in the country, with 25 crore more in the pipeline.

So hopefully we will be able to reach that kind of number in the next two years, or maybe three. What does that lead to? It leads to the SCADA system being developed in this particular segment as well. Right now we do have SCADA systems, but they come from the other side of the fence. So we have shortlisted a few companies and we are trying to work with them so that we have our own indigenous SCADA systems supporting the entire value chain. You all know about the cybersecurity concerns, and we are acting in a similar manner there — we want these kinds of things to be developed in India as well.

Now, what does this mean? The moment you have automation in this particular segment, a huge amount of data is going to be generated. How do you use this data? That is why I said we have already had one round of discussion with startups and some AI-driven companies — and let me tell you about their enthusiasm level. They say, 'Sir, give me a one-year time frame and I am going to map all your assets across the country.' That is the kind of enthusiasm in these youngsters, and we really salute this particular group. My distribution team in the ministry is working with these people so that we can quickly take their support, map the assets, and go further.

Further, whenever you have this data, it must not be used only for billing purposes, right? It has to be used for planning — planning of the network, planning of the optimization of resources. You can define any number of use cases the moment you have all this. So this is in the pipeline. I'm really thankful to you for reminding me of this, and we are working on it.

Audience

So my point was that, for example, geo-tagging of all your assets — right from power generation down to the end consumer — is not done end to end today. It is also a security risk for the country if some other server is hosting all that data; it should all be hosted in India, in Indian data centers — every platform at the back end, including the LLM which is managing it, should be completely in India. So I'm saying that the end-to-end deployment of AGI will happen only when we have the real physical layer generating enormous amounts of secure data, hosted not outside India but within sovereign Indian data centers.

That is the kind of thing the government needs to think about — then we can get there. That is why my second question was to Parth: what are we doing to build that kind of data, which will help us set up the AGI part? AGI doesn't come simply from some small use case; you need trillions and trillions of tokens and data for that, and you need domain expertise and knowledge to build it. How do we do that? That's the question.

Shri Ghanshyam Prasad

Before Parth takes this up — these youngsters have better insights than me — but this gap you just mentioned, about the data being hosted elsewhere: in fact, we came to know about it the moment we started rolling out these smart meters in our systems. We found that the suppliers had their resources hosted outside the country, and we immediately took action and said, nothing doing — all the data has to be housed in the country itself. So right now, whatever smart meters we are placing in the country, their data does not go out; it stays in India. Wherever we are able to plug these gaps, I think we are trying to do so, and trying to create that physical layer so that we are cyber-secure. That is very, very important for the power sector.

Parth I think you will take over now.

Parth Sarthi

Thank you, sir. Just to echo some of sir's thoughts: a lot of work is actually being done on this layer. If you look at datasets — we need indigenous datasets — you have AIKosh under the IndiaAI Mission, which is solving exactly this: Indian datasets for Indian companies to build these frontier models. If you look at compute, as you said, we need compute in India; and if you look at budget policy, there are incentives for data centers looking out to 2047, so you will see a lot of these data centers come up — there are already a lot being built. We have Tarun-ji here, who is building E2E, cloud infrastructure for India, so that the frontier models of India can be hosted in India, all on local compute.

So I think the GPU infrastructure being supported by the IndiaAI Mission is solving for exactly the case you are raising. Already, over the last two years, we have scaled up our GPUs enormously for this. So I do think a lot of work has already been done, and this work is going to continue.

Tarun Dua

Sir, I would like to take a stab at answering your question. I think we still have some gaps in terms of being able to harness the impatience of the youth to build physical stuff. Unlike software, physical stuff actually costs money, and the cycle time today is very high: you need to be able to reach the nearest 3D printer to prototype, you need to be able to design the chips, you need to solve all the physics problems. What the LLMs and the frontier models will do for us is reduce the cycle time of the thinking part — the actual physical-world calculations, the digital-twin part, which used to take a lot of time, gets solved faster. What we still need to solve for is the prototyping. But having spoken to a few companies who used to do physical prototyping, they have done away with a lot of it altogether and are now doing it on top of digital twins. So I think somewhere we will converge — that's my hope.
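The idea of replacing physical prototyping with a digital twin can be sketched with a toy example: a virtual first-order thermal model of a transformer, replayed against sensor-style load readings to find overload conditions before any physical test. Every class name, constant, and reading below is invented for illustration, not taken from any real grid system:

```python
from dataclasses import dataclass

@dataclass
class TransformerTwin:
    """A toy 'digital twin': a crude first-order thermal model of a transformer
    that mirrors load readings, so limits can be probed without hardware."""
    ambient_c: float = 30.0      # ambient temperature, deg C (assumed)
    temp_c: float = 30.0         # modelled winding temperature, deg C
    limit_c: float = 98.0        # assumed thermal limit, deg C

    def step(self, load_fraction: float, minutes: float = 5.0) -> None:
        # Steady-state heating rises with load^2 (illustrative coefficient).
        target = self.ambient_c + 60.0 * load_fraction ** 2
        # Exponential approach toward the steady-state temperature.
        alpha = 1.0 - 0.9 ** minutes
        self.temp_c += alpha * (target - self.temp_c)

    def overloaded(self) -> bool:
        return self.temp_c > self.limit_c

# Replay a sequence of hypothetical smart-meter load readings against the twin.
twin = TransformerTwin()
alerts = []
for step_idx, load in enumerate([0.6] * 6 + [1.3] * 6 + [0.5] * 6):
    twin.step(load)
    if twin.overloaded():
        alerts.append(step_idx)

print(f"final temp {twin.temp_c:.1f} C, overload at steps {alerts}")
```

The whole "experiment" — overloading the transformer and watching it recover — runs in milliseconds, which is the cycle-time reduction the speaker describes.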

Audience

So I am saying: imagine the next UPI-scale innovation is, say, agentic AI for vending machines — I am working on that. But the physical layer of sensors which pull the data for a vending machine — the back end of the vending machine — hardly exists; hardly any vending machine is connected to IoT or has any physical layer. It is just used like a dabba — a dumb box. So my point is: why is the government not enabling the instruments which connect this data to the AI and the data centers, so that the intelligence can be built to automate and to create more jobs? And it's quite counter-intuitive.

We say that by building agentic AI we are going to reduce people. No — actually the work is going to increase, because the vending machine infrastructure will grow 10 or 20 times; it will become like Japan, where you have many more vending machines. But I do not think private industry can build that kind of infrastructure — I am from private industry; I cannot build it. For me, day to day, running a vending machine business works because I get cheap labor; I cannot justify sensors. So this is a catch-22 kind of situation: for most of the infrastructure we have in India, we have cheap labor, we still manage with that, and we cannot take the next leap.

How do we take the next leap, by getting platforms like UPI built with physical layers? That was the question.

Shri Ghanshyam Prasad

I think some of the answers will be given by the India Energy Stack — you must have heard about that. I am a member of that committee, and we are deliberating on all these use cases and where the gaps are. So definitely, I think we will take care of that.

Suvrat Bhoosha

Thank you, sir, for asking that question. I would like to circulate the mic in the audience if other people would like to ask questions — people can just raise their hands. I think there is one at the back.

Audience

Hello, good morning to everyone. There are three things needed to develop any industry: the primary sector, the secondary sector, and the tertiary sector. Your AI Impact Summit is always talking about business models — what about the manufacturing sector? If basic units like semiconductors are not developed in our country, we keep buying from Chinese companies. Although the industry is being built now — six semiconductor units are being built — what about the 9 gigawatts of capacity that will be built for data centers by 2032? What about the manufacturing sector for that? Until that is developed, will we keep working only on business models?

Tarun Dua

All these things are interconnected; nothing is to the exclusion of another. Whatever sector you are working in, eventually it will feed into the other sectors. As long as the intent is there — be Indian, buy Indian — then all the problems will automatically be solved together. If there is intent to work together to solve all the problems, we will move forward; if the attitude is 'I will move forward but the rest will stay behind,' then we will not move forward. How are we developing that?

Audience

There is no framework. Nothing comes first or later; everything goes in parallel. But what about microprocessors? For AI, the most basic unit is the microprocessor, and for data centers too, the most basic unit is the microprocessor. Will we keep buying from China? No doubt that in 2025 six units are being made in India, but so far there is no prominent result.

Shri Ghanshyam Prasad

It takes a little time to reach that level.

Tarun Dua

Sir, actually, if you look at microprocessors — whether for a mobile phone, a server, or a desktop — on the whole motherboard there are a couple of hundred pieces of intellectual property. And if you look at all that intellectual property, a lot of it is made in India, by the people of India. But the licensing does not happen through India, because the IP is being developed with foreign money. So here we do the R&D and develop the IP, and someone else commercializes it. I think that gap has to be filled by having volumes which are domestically available. When domestic volume is available, the people working to add the system-on-chip IP that has been developed will see a domestic market, and after that all these things will automatically start happening. So I think the really important thing happening right now is to move the country forward and build a large market which is interconnected with the world. Once you have large markets interconnected with the world, then the youngsters who are very impatient to go and build things, and to say 'okay, this can be commercialized,' will go and achieve that success. So, like I said, nothing can be serialized.

You can't say that we will do this first — that takes us back to the planning era, to the communist states which used to plan: first we build the fundamentals, then we build something else on top. A free market allows you to work on all these things in parallel, and it throws up the opportunities. So if we fix our economics, all these things will be fixed on their own. Thank you.

Shri Ghanshyam Prasad

I will say one thing: we are moving gradually in the manufacturing sector. Speaking of the power sector — I don't have much information about MeitY and other areas — a lot of equipment here is almost 100% indigenous. There is certain equipment ranging from 50% to 80% indigenous, which is being targeted so that its domestic content also goes to 100% in a given time frame. And there is still equipment yet to take off — those 76-odd elements I was telling you about — where we are trying to go from almost 20% to 100%. So there are different stages of indigenization, but we are definitely targeting that all this equipment be manufactured in India.

Audience

It is the same in the primary sector — a lot of silicon will be needed.

Shri Ghanshyam Prasad

Absolutely. In the power sector we use a lot of electronics. I gave you a small example: the IGBT. The IGBT is again an electronic component which right now we are sourcing from outside. We faced that challenge and said, nothing doing — now Indian companies are going to manufacture it. We have given them the task: they need to develop it in two years and commercialize it. We are taking up other items similarly. Thank you.

Suvrat Bhoosha

Professor wanted to make a comment.

Professor Jayadeva

Let me divide that answer into multiple parts. The word 'microprocessor', of course, is no longer really used; most current AI/ML runs on GPUs, whose architecture is very different from that of traditional microprocessors. For at least the space sector and some others, we have a fairly successful operation running at Semiconductor Complex Limited (SCL) at Mohali, near Chandigarh. The plant at SCL produces some earlier-generation microprocessors and chips for a variety of other applications, and there are similar entities around the country. But most of the effort going into the VLSI design space — the chip design space, so to speak — is design.

As for manufacturing: in most countries — I am not saying only India; Europe, the US too — in fact, many of the earlier semiconductor manufacturing plants in the US shut down and are now producing solar panels. Most of the efforts around the world are fabless design houses, and India leads in that. If you look at Bangalore, Pune, Hyderabad, and to some extent Noida, a very significant fraction of the designs for many smartphones is actually done within the country — in some cases genuinely 100%, the complete design. And design is a major component of the cost of developing a new chip: the manufacturing is there, but most of the cost is really in the initial stage — the design cost.

That is happening in the country. Scaling up semiconductor fabrication itself will take time, but you can see it is already happening rapidly. In the case of memories — flash memories and so on — a very large investment has already been made by a fairly prominent multinational in Gujarat and elsewhere, and that has taken off very well. There are similar efforts whose outcomes and outputs you will probably start hearing about in the next two or three years, or even less. So as far as that space is concerned, Indian engineers have it almost entirely covered, and I don't think that's a cause for worry. The interlinking of these parts will, if you ask me, happen organically, because everything already exists in one place.

Thank you.

Suvrat Bhoosha

With that, I would like to thank all of our panelists for spending so much time and answering everyone's questions, and to thank the organizers for letting us go 30 minutes over time. I'd like to invite the IndiaAI Mission delegates to felicitate the panelists. Thank you so much, everybody, and thank you to Suvrat for the moderation as well.

Related ResourcesKnowledge base sources related to the discussion topics (37)
Factual NotesClaims verified against the Diplo knowledge base (5)
Confirmedhigh

“Suvrat Bhoosha opened the session and stated the NDIA mission to build “frontier models for India” under a sovereign AI mandate.”

The knowledge base explicitly mentions that companies are mandated to build frontier models for India under the NDIA mission to build sovereign frontier models, confirming the opening remarks and the mission statement. [S1]

Additional Contexthigh

“Reliability & provisioning: the “N + 1 + 1” reliability requirement and the 1.7‑times rule for power provisioning for large data‑centres.”

While the exact N + 1 + 1 scheme and 1.7-times rule are not cited, the knowledge base discusses data-centre reliability challenges, including voltage swings that have tripped centres and the need for backup diesel generators, highlighting the broader concern about robust power provisioning. [S15] and [S98] and [S99] provide relevant context.

Additional Contextmedium

“Projected AI load: AI‑driven data‑centres are expanding to gigawatt‑scale sites with roughly 16 GW of load projected for India.”

The knowledge base notes a massive surge in data-centre power demand driven by AI, with expectations of a 165 % increase in electricity consumption by 2030 and a doubling of data-centre energy use by 2025, underscoring the rapid growth of AI-related load even though the exact 16 GW figure is not cited. [S96] and [S97] provide supporting context.

Additional Contextmedium

“Renewable growth: India’s renewable capacity has risen from ~2 GW in 2010 to >250 GW today, with >40 GW added in the last ten months.”

The knowledge base emphasizes the strategic importance of locating data-centres near surplus renewable energy sources, especially in northern regions, which aligns with the report’s emphasis on renewable-rich states, though specific capacity numbers are not provided. [S51] adds contextual support.

Additional Contextlow

“Location diversification: locating data‑centres near renewable‑rich states (Rajasthan, Gujarat, Maharashtra, Karnataka, Telangana, Andhra Pradesh) and creating multiple “landing points”.”

The knowledge base recommends situating data-centres adjacent to surplus renewable energy and highlights benefits such as reduced power consumption and better grid stability, reinforcing the report’s recommendation for geographic diversification. [S51]

External Sources (102)
S1
Indias Roadmap to an AGI-Enabled Future — My name is Pradeep Subramaniam. I come from the physical world. So, AI, I have been recently building an agentic AI, but…
S2
https://dig.watch/event/india-ai-impact-summit-2026/indias-roadmap-to-an-agi-enabled-future — Thank you so much, Tarun, for sharing how E2E got started and the vision behind starting E2E networks. But raw compute a…
S3
https://dig.watch/event/india-ai-impact-summit-2026/indias-roadmap-to-an-agi-enabled-future — And finally my co -founder at Chariot, Mr. Parth Sarthi. To build this ecosystem from ground up starting with the very p…
S4
Indias Roadmap to an AGI-Enabled Future — -Suvrat Bhoosha: Co-founder at Chariot, moderator of the session on “India’s Path to an AGI-Enabling Ecosystem”
S5
https://dig.watch/event/india-ai-impact-summit-2026/how-the-global-south-is-accelerating-ai-adoption_-finance-sector-insights — Arshatosh? Thank you, Tara. And I think now we move to the more important issue of putting money into this particular i…
S6
Indias Roadmap to an AGI-Enabled Future — And finally my co -founder at Chariot, Mr. Parth Sarthi. To build this ecosystem from ground up starting with the very p…
S7
https://dig.watch/event/india-ai-impact-summit-2026/indias-roadmap-to-an-agi-enabled-future — And the key observation is that these environments, you know, it can scale with humans and CPUs and not necessarily GPUs…
S8
https://dig.watch/event/india-ai-impact-summit-2026/indias-roadmap-to-an-agi-enabled-future — All these things are interconnected. So nothing is to the exclusion of another. Whatever sector you are working in, even…
S9
Indias Roadmap to an AGI-Enabled Future — -Tarun Dua: Founder and Managing Director of E2E Networks, focused on building enterprise-grade cloud infrastructure and…
S10
https://dig.watch/event/india-ai-impact-summit-2026/indias-roadmap-to-an-agi-enabled-future — Absolutely. In power sector, we use a lot of electronics. For example, I gave you a small example of IGBT. IGBT is again…
S11
Indias Roadmap to an AGI-Enabled Future — – Professor Jayadeva- Tarun Dua- Shri Ghanshyam Prasad – Shri Ghanshyam Prasad- Audience – Suvrat Bhoosha- Shri Ghansh…
S12
https://dig.watch/event/india-ai-impact-summit-2026/scaling-trusted-ai_-how-france-and-india-are-building-industrial-innovation-bridges — So my question is for Dr. Pino and Dr. Kashi. You know, I work at the intersection of AI and synthetic biology. Google d…
S13
https://dig.watch/event/india-ai-impact-summit-2026/elevenlabs-voice-ai-session-ncrb-npmfireside-chat — And one more thing which I wanted to know. So like you were showing for the websites, it was by default we can choose th…
S14
https://dig.watch/event/india-ai-impact-summit-2026/press-briefing-by-hmit-ashwani-vaishnav-on-ai-impact-summit-2026-l-day-5 — A lot of these applications for AI for social purposes are the ROI may not be immediately available for the developer. I…
S15
Powering AI _ Global Leaders Session _ AI Impact Summit India Part 2 — . in five years in certain areas, and the households are feeling that pinch. There is an issue of reliability. Grids wer…
S16
Growing data centre demand sparks renewable energy investments — US Energy Secretary Jennifer Granholm has assured that the country will be able to meet the growingelectricity demandsdr…
S17
Cybersecurity of Civilian Nuclear Infrastructure | IGF 2023 WS #220 — Nuclear power plants produce energy that is a crucial part of a country’s power supply. Small modular reactors (SMRs) a…
S18
Driving Indias AI Future Growth Innovation and Impact — People entering the workforce are employed today. And you think about delivery of this through online, in -person, incub…
S19
Digital twins gain momentum through AI — AI is accelerating thecreation of digital twinsby reducing the time and labour required to build complex models. Consult…
S20
Sovereign AI for India – Building Indigenous Capabilities for National and Global Impact — India possesses many essential ingredients for AI success: a robust software services industry, thriving startup ecosyst…
S21
AI Infrastructure and Future Development: A Panel Discussion — Physical infrastructure constraints create bottlenecks – need for skilled trades workers, power, concrete, copper in mas…
S22
From KW to GW Scaling the Infrastructure of the Global AI Economy — The infrastructure demands represent a fundamental shift from traditional data centre design. The speakers noted that wh…
S23
AI energy demand accelerates while clean power lags — Data centres are driving a sharp rise in electricity consumption, putting mounting pressure on power infrastructure that …
S24
Discussion Report: Sovereign AI in Defence and National Security — Examples include the lack of transparency in ChatGPT’s training data and alignment process, with multibillion dollar law…
S25
WS #208 Democratising Access to AI with Open Source LLMs — Daniele Turra: Yeah, I’ll try to be very brief. So one key difference that we can see in open LLMs when it comes to t…
S26
AI-Powered Chips and Skills Shaping India's Next-Gen Workforce — How do we ensure that we have the right talent, the research infrastructure, the technology expertise, the supply chain,…
S27
Science as a Growth Engine: Navigating the Funding and Translation Challenge — The path forward demands collaboration across traditional boundaries, sophisticated understanding of the research-to-imp…
S28
Next-Gen Industrial Infrastructure / Davos 2025 — Christophe De Vusser: Yeah, and I will build on some of the comments that have been made on your last comment on the …
S29
DC-IoT Progressing Global Good Practice for the Internet of Things | IGF 2023 — Additionally, national policy practices for IoT security often differ significantly from those of other countries, indic…
S30
Open Forum #29 Advancing Digital Inclusion Through Segmented Monitoring — Fabio Senne: No, yes, I agree with this discussion of the cycle. It’s interesting because if you take, there’s a very st…
S31
DC-IoT & IS3C: Global Best Practices for a Resilient and Secure IoT by Design — Jonathan Cave from the Alan Turing Institute and Warwick University highlighted the complexity of IoT data governance. H…
S32
Building India's Digital and Industrial Future with AI — Deepak Maheshwari from the Centre for Social and Economic Progress provided historical context, tracing India's digital …
S33
Need and Impact of Full Stack Sovereign AI by CoRover BharatGPT — “What raw material is needed for AI?”[9]. “sovereign AI comes to India, we’ll have the control”[56]. “Indian government …
S34
Welcome Address — India positions itself as a central hub of technology talent, leveraging a strong IT background and dynamic startup ecos…
S35
Keynote-Jeet Adani — The speech culminated with the announcement of Adani Group’s $100 billion investment commitment to build a sovereign, gr…
S36
The Global Power Shift India's Rise in AI & Semiconductors — Sovereignty involves ensuring that data and applications remain resident within the country and relevant to national con…
S37
Building India's Digital and Industrial Future with AI — Another thing I mean in February 2019, 7 years back we had something called draft e-commerce policy. Now the tagline of…
S38
Digital Embassies for Sovereign AI — Li acknowledged this as “the number one question” governments must answer, suggesting a potential solution where “traini…
S39
India's Roadmap to an AGI-Enabled Future — The government has implemented policies requiring all smart meter data to be hosted within India, reflecting recognition…
S40
DC-IoT Progressing Global Good Practice for the Internet of Things | IGF 2023 — Summary: The analysis of IoT security policies across different countries revealed some significant findings. Firstly, t…
S41
NSW government releases IoT policy — The Government of New South Wales (NSW) has published an Internet of things (IoT) policy, launched at IoT Alliance Austra…
S42
Powering AI _ Global Leaders Session _ AI Impact Summit India Part 2 — “Indian user content has to be located in India by certain time frame and so that developers can plan for the grid they …
S43
AI as critical infrastructure for continuity in public services — The discussion revealed that data sovereignty encompasses more than simple data localization. As Pramod noted, true sove…
S44
Panel Discussion Data Sovereignty India AI Impact Summit — Low to moderate disagreement level with high strategic alignment. The disagreements are primarily tactical and reflect d…
S45
Sovereign AI for India – Building Indigenous Capabilities for National and Global Impact — Government’s shared compute framework with 38,000+ GPUs has proven successful and demonstrates a viable model for scalin…
S46
AI-Powered Chips and Skills Shaping India's Next-Gen Workforce — The discussion reveals strong consensus on key strategic directions: comprehensive ecosystem development beyond chip man…
S47
Open Forum #64 Local AI Policy Pathways for Sustainable Digital Economies — Anita Gurumurthy: Thank you, thank you, Valeria, and it’s an honor to be part of this panel. So I think the starting poi…
S48
Is AI the key to nuclear renaissance? — AI is projected to contribute USD 15-20 trillion to the global economy by 2030, driven by rapid adoption and efficiency g…
S49
White House eyes clean energy for AI expansion — A new task force has been launched by the White House to address the growing demands of AI infrastructure. Led by the Nati…
S50
Growing data centre demand sparks renewable energy investments — US Energy Secretary Jennifer Granholm has assured that the country will be able to meet the growing electricity demands dr…
S51
Day 0 Event #249 Sustainable Digital Growth Net Negative Net Zero or Net Positive — – Anton Aschwanden- Karianne Tung Data centers should be strategically located next to surplus renewable energy sources…
S52
The digital economy and environmental sustainability — It is also highlighted that changes in consumption behaviors are of significant importance. Seeking technological soluti…
S53
From KW to GW Scaling the Infrastructure of the Global AI Economy — The infrastructure demands represent a fundamental shift from traditional data centre design. The speakers noted that wh…
S54
Powering AI _ Global Leaders Session _ AI Impact Summit India Part 2 — Backup generators activated but ran out of fuel after about an hour due to faulty automated refueling systems exacerbati…
S55
India's Roadmap to an AGI-Enabled Future — Shri Ghanshyam Prasad: This comment quantifies the massive scale of energy transformation required for AI infrastructure…
S56
Discussion Report: Sovereign AI in Defence and National Security — Examples include the lack of transparency in ChatGPT’s training data and alignment process, with multibillion dollar law…
S57
Sovereign AI for India – Building Indigenous Capabilities for National and Global Impact — But the real thing I would say is start now. Many of these models are great, you must have heard Sarvam Modeller beating…
S58
WS #208 Democratising Access to AI with Open Source LLMs — Daniele Turra: Yeah, I’ll try to be very brief. So one key difference that we can see in open LLMs when it comes to t…
S59
Science as a Growth Engine: Navigating the Funding and Translation Challenge — And so I think that ecosystem and to bring those things even for everyday items is incredibly important. The path forwa…
S60
AI-Powered Chips and Skills Shaping India's Next-Gen Workforce — How do we ensure that we have the right talent, the research infrastructure, the technology expertise, the supply chain,…
S61
Next-Gen Industrial Infrastructure / Davos 2025 — Christophe De Vusser: Yeah, and I will build on some of the comments that have been made on your last comment on the …
S62
29, filed Jan. 22, 2010, at 9-10. — A robust research and development pipeline is essential to ensuring that spectrum access technologies continue to evolve…
S63
DC-IoT & IS3C: Global Best Practices for a Resilient and Secure IoT by Design — Cave also raised concerns about the difficulties of obtaining meaningful consent for data collection in IoT environments…
S64
THE INTERNET OF THINGS: AN OVERVIEW — Further, the potential for discriminatory pricing practices or unfair services practices may be amplified by the quality…
S65
Table of Contents — As a consequence a large number of proprietary or semi-closed solutions to address specific problems have…
S66
DC-IoT Progressing Global Good Practice for the Internet of Things | IGF 2023 — Jonathan Cave:It’s very quick on the issue of the ethical reflection, ethical consideration and control of these IoT dev…
S67
Keynote-Jeet Adani — The speech culminated with the announcement of Adani Group’s $100 billion investment commitment to build a sovereign, gr…
S68
Keynote ‘I’ to the Power of AI An 8-Year-Old on Aspiring India Impacting the World — This discussion features an 8-year-old prodigy presenting their perspective on global AI development and India’s strateg…
S69
AI Innovation in India — Thank you. the AIM ecosystem in trying to ensure that India tells better stories, tells them legally, ethically and resp…
S70
Need and Impact of Full Stack Sovereign AI by CoRover BharatGPT — “Do you think AI Summit has been successful?”[68]. “But, in the next 3 -5 years, what are the main targets for India to …
S71
Keynote Address_Revanth Reddy_Chief Minister Telangana — The tone was consistently ambitious, urgent, and nationalistic throughout. The speaker maintained an inspirational and f…
S72
Opening Ceremony — The tone is consistently formal, diplomatic, and optimistic yet cautionary. Speakers maintain a celebratory atmosphere a…
S73
Opening Remarks (50th IFDT) — The overall tone was formal yet warm and celebratory. Speakers expressed pride in the IFDT’s accomplishments and gratitu…
S74
Opening address of the co-chairs of the AI Governance Dialogue — The tone is consistently formal, diplomatic, and optimistic throughout. It maintains a ceremonial quality appropriate fo…
S75
Main Session on Cybersecurity, Trust & Safety Online | IGF 2023 — There are resource limitations on technical, financial, and human fronts
S76
Shaping the Future AI Strategies for Jobs and Economic Development — Several speakers addressed the unique needs of emerging economies, particularly the 70 million MSMEs in India that emplo…
S77
Comprehensive Report: UN General Assembly High-Level Meeting on the 20-Year Review of the World Summit on the Information Society (WSIS) Outcomes — Kenya emphasized that digital inequality is not just about internet access but encompasses multiple interconnected chall…
S78
Digital Trade for Africa’s Prosperity — Major challenges include infrastructure deficits, data inaccuracies, and regulatory gaps
S79
AI Infrastructure and Future Development: A Panel Discussion — Physical infrastructure constraints create bottlenecks – need for skilled trades workers, power, concrete, copper in mas…
S80
Global Digital Governance & Multistakeholder Cooperation for WSIS+20 — ## Audience Engagement and Practical Concerns
S82
Impact & the Role of AI How Artificial Intelligence Is Changing Everything — The discussion maintained a cautiously optimistic tone throughout, balancing enthusiasm for AI's potential with realisti…
S83
AI as critical infrastructure for continuity in public services — The discussion maintained a collaborative and constructive tone throughout, with participants building on each other’s p…
S84
AI and Human Connection: Navigating Trust and Reality in a Fragmented World — The tone began optimistically with audience engagement but became increasingly concerned and urgent as panelists reveale…
S85
Leaders TalkX: Accelerating global access to information and knowledge in the digital era — The discussion maintained a consistently collaborative, optimistic, and solution-oriented tone throughout. Speakers were…
S86
Ensuring Safe AI_ Monitoring Agents to Bridge the Global Assurance Gap — The tone was collaborative and solution-oriented throughout, with participants acknowledging both the urgency and comple…
S87
Regional Leaders Discuss AI-Ready Digital Infrastructure — The discussion maintained a consistently optimistic yet pragmatic tone throughout. Panelists were enthusiastic about AI’…
S88
WS #279 AI: Guardian for Critical Infrastructure in Developing World — The tone of the discussion was largely informative and collaborative. Speakers shared insights from their various backgr…
S89
Open Forum #15 Digital cooperation: the road ahead — The tone was generally constructive and solution-oriented. Participants shared examples of successful partnerships and i…
S90
Opening plenary session and adoption of the agenda — Consequently, an international collaborative effort is advocated, where pooling knowledge and strategy leads to a robust…
S91
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Takahito Tokita Fujitsu — Throughout the presentation, Tokita emphasizes the critical importance of establishing trusted AI infrastructure to inte…
S92
The Power of the Commons: Digital Public Goods for a More Secure, Inclusive and Resilient World — Costanza Sciubba Caniglia opened the event, outlining its structure: opening remarks, success stories, and a panel discu…
S93
High Level Session 3: AI & the Future of Work — ### Opening Remarks: Setting the Stage The discussion featured opening remarks from key stakeholders followed by a mode…
S94
Open Forum #8 Modern Warfare Timeless Emblems — The session followed a structured format with a 20-minute keynote, presentations from both speakers, a 35-minute panel d…
S95
Connecting open code with policymakers to development | IGF 2023 WS #500 — Internet platforms and service providers have data invaluable for informing public policy. This panel session will be co…
S96
AI boom drives massive surge in data centre power demand — According to Goldman Sachs, the surge in AI is set to transform global energy markets, with data centres expected to consu…
S97
Revisiting 10 AI and digital forecasts for 2025: Predictions and Reality — AI has significantly increased energy consumption, with data centres now consuming approximately 2% of global electricity…
S98
WS #139 Internet Resilience Securing a Stronger Supply Chain — Olaf Kolkman from the Internet Society illustrated these complexities with concrete examples. His most memorable anecdot…
S99
Introduction — If electrical power is not available, then diesel generators are required to run the equipment, which increases capital …
S100
Acknowledgements — Backup and restore in hybrid cloud computing has the same high-level requirements as it does inside a traditional data c…
S101
https://dig.watch/event/india-ai-impact-summit-2026/powering-ai-_-global-leaders-session-_-ai-impact-summit-india-part-2 — . in five years in certain areas, and the households are feeling that pinch. There is an issue of reliability. Grids wer…
S102
The Glasgow environment summit: A new paradigm? — In world totals of carbon emissions, India lies 4th, with China, the US, the EU, and Russia among the top 5. (China is re…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Shri Ghanshyam Prasad
6 arguments · 157 words per minute · 4025 words · 1530 seconds
Argument 1
Variable load and ultra‑high reliability (N+1+1) pose major challenges for large AI‑driven data centers (Shri Ghanshyam Prasad)
EXPLANATION
Shri Ghanshyam Prasad explains that AI‑driven data centers will have highly variable power demand and must meet an ultra‑high reliability standard (N+1+1), meaning each centre needs multiple independent power sources and backup generators. This level of redundancy is difficult for India’s grid, especially as data‑center capacities grow to the gigawatt scale.
EVIDENCE
He describes how data-center loads are no longer constant but variable, requiring two separate power sources and multiple layers of diesel-generator backup (four layers in total) to satisfy N+1+1 reliability [34-35]. He cites the 50 MW Adani-Google data centre in Noida (10 MW commissioned, 40 MW pending) and upcoming 1,000 MW centres in Mumbai, noting the need for 1.7-2× oversizing of supply [36-49][50-51].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
External sources discuss N+1+1 reliability criteria and grid challenges for data centres, confirming the need for dual power sources and high reliability [S1][S15].
MAJOR DISCUSSION POINT
Energy reliability for AI data centres
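The redundancy and oversizing figures above can be sketched numerically. This is a minimal illustration, not from the session: the 1.7-2x oversizing range is the figure quoted in the evidence, while the 10 MW unit size is an assumed value for demonstration.

```python
import math

# N+1+1 provisioning sketch: serve the load with N units, then add two
# extra independent units (one redundant source plus one backup layer).
def units_required(load_mw: float, unit_mw: float = 10.0) -> int:
    n = math.ceil(load_mw / unit_mw)
    return n + 2  # N + 1 + 1

def provisioned_range(load_mw: float) -> tuple[float, float]:
    """Supply capacity implied by the quoted 1.7-2x oversizing."""
    return load_mw * 1.7, load_mw * 2.0

# Applied to the 50 MW Noida example cited above:
units = units_required(50)          # 7 independent 10 MW units
low, high = provisioned_range(50)   # 85-100 MW of provisioned supply
```

The oversizing factor, not the IT load itself, is what drives the transmission-planning burden described in the following arguments.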
Argument 2
Need for green‑power supply, diversified landing points and extensive transmission planning to meet data‑center demand (Shri Ghanshyam Prasad)
EXPLANATION
He argues that truly green data centres must be powered exclusively by renewable sources, which requires dedicated transmission lines from renewable generation sites and a diversification of landing points across the country. This reduces dependence on a single grid node and eases transmission constraints.
EVIDENCE
He outlines that green data centres need a mix of solar, wind, hydro, battery and pump-storage, plus dedicated transmission lines to avoid interruptions [54-56]. He stresses diversification of landing points beyond Mumbai and Chennai, proposing additional hubs such as Vizag, Paradeep and Gopalpur to spread load and transmission requirements [66-68][69-74].
MAJOR DISCUSSION POINT
Green power and transmission diversification
AGREED WITH
Tarun Dua
Argument 3
Rapid expansion of renewable capacity (250+ GW) and upcoming 100 GW hydro‑pump storage to support future AI workloads (Shri Ghanshyam Prasad)
EXPLANATION
Shri Prasad highlights India’s fast‑growing renewable portfolio, now over 250 GW, and a planned 100 GW of hydro‑pump storage that will provide 24‑hour balancing for AI‑intensive data centres. This renewable surge is presented as the backbone for future AI energy needs.
EVIDENCE
He notes that renewable capacity grew from a modest 2 GW in 2010-11 to over 250 GW today, with 30 GW added in a single year and 40,000 MW added in ten months [57-60]. He also references a recent report projecting 100 GW of hydro-pump storage within the next ten years to support continuous supply [77-78].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Renewable capacity growth from 2 GW to over 250 GW and plans for 100 GW pump-storage are documented in the roadmap [S1].
MAJOR DISCUSSION POINT
Renewable and storage scaling for AI
AGREED WITH
Tarun Dua
Argument 4
Massive rollout of smart meters and indigenous SCADA systems is required to generate reliable grid data; all data must be hosted within India (Audience & Shri Ghanshyam Prasad)
EXPLANATION
The audience raised concerns about the lack of IoT and physical data collection, and Shri Prasad responded that India is deploying tens of millions of smart meters and developing indigenous SCADA platforms to create a domestic data layer, ensuring that all grid data stays within Indian jurisdiction.
EVIDENCE
The audience asked about the missing physical data layer and data sovereignty [351-366]. Shri Prasad answered that over 3 crore (30 million) smart meters have been installed with 2.5 crore (25 million) more in the pipeline, and that indigenous SCADA systems are being shortlisted and built in India, with strict data-localisation policies [373-383][400-401].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The rollout of tens of millions of smart meters and development of indigenous SCADA systems for data localisation are described in the roadmap [S1].
MAJOR DISCUSSION POINT
Grid digitisation and data sovereignty
AGREED WITH
Audience, Parth Sarthi
Argument 5
Government initiatives such as the ANRF fund aim to bridge research to commercialization and support PhD‑level work (Shri Ghanshyam Prasad)
EXPLANATION
Shri Prasad describes the ANRF (Anusandhan) fund, a massive ₹1 lakh crore allocation intended to translate research into commercial outcomes, including support for PhD‑level projects and industry‑university collaborations.
EVIDENCE
He mentions the ANRF fund created with an outlay of ₹1 lakh crore, alongside a ₹20,000 crore CCUS programme, and cites a pilot cluster in Gandhinagar that is linking research to industry for commercialization [271-276].
MAJOR DISCUSSION POINT
Government funding for research‑to‑market
AGREED WITH
Professor Jayadeva, Tarun Dua
Argument 6
Efforts are underway to increase domestic content of power‑electronics equipment (e.g., IGBT) to 100% within a defined timeframe (Shri Ghanshyam Prasad)
EXPLANATION
He reports that India is tasking domestic firms to develop and commercialise IGBT power‑electronics within two years, aiming for full indigenous content across power‑electronics equipment.
EVIDENCE
He gives the example of IGBT devices currently imported, now assigned to Indian companies with a two-year development target, emphasizing the push for 100% domestic content [469-475].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Domestic development of IGBT power-electronics with a two-year target is highlighted in the power-electronics discussion [S2].
MAJOR DISCUSSION POINT
Indigenisation of power‑electronics
Tarun Dua
5 arguments · 169 words per minute · 2019 words · 714 seconds
Argument 1
Advocacy for Small Modular Reactor (SMR) nuclear plants co‑located with data centers to provide reliable, low‑cost energy (Tarun Dua)
EXPLANATION
Tarun proposes that SMR nuclear reactors be sited alongside data‑centre campuses, offering a stable, low‑cost power source that eliminates transmission fees and reduces reliance on diesel generators.
EVIDENCE
He outlines the benefits of SMR: modular 220 MW units, land requirements for containment zones, cost savings from avoiding grid transit fees, and a timeline of three to five years for deployment near data-centres [238-246].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
SMR benefits, modular size, and security aspects are outlined, supporting co-location with data centres [S17].
MAJOR DISCUSSION POINT
Nuclear power for data‑centre reliability
AGREED WITH
Shri Ghanshyam Prasad
Argument 2
India will require roughly 128,000 GPUs for the top 1,000 organisations; a leapfrog is needed to meet this demand (Tarun Dua)
EXPLANATION
Tarun estimates that each of the top 1,000 Indian organisations will need about 128 GPUs, totaling 128,000 GPUs, and argues that India must accelerate its GPU infrastructure to close the current 18‑month lag and eventually become a global compute hub.
EVIDENCE
He calculates the requirement (128 GPUs per organisation × 1,000 organisations = 128,000 GPUs) and notes India is currently behind by about 18 months but can leapfrog as it did with 5G/4G [203-206].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Estimates of 128,000 GPUs needed for the top 1,000 organisations are provided in the AI infrastructure analysis [S18].
MAJOR DISCUSSION POINT
GPU demand scaling
AGREED WITH
Parth Sarthi, Suvrat Bhoosha
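The estimate above reduces to simple arithmetic; a sketch of the quoted figures:

```python
# Back-of-envelope GPU demand as stated in the argument above:
# 128 GPUs per organisation across the top 1,000 organisations.
gpus_per_org = 128
top_orgs = 1_000
total_gpus = gpus_per_org * top_orgs  # 128,000 GPUs
```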
Argument 3
Industry should define goal‑directed research roadmaps and co‑fund projects to ensure timely, market‑relevant outcomes (Tarun Dua)
EXPLANATION
Tarun stresses that industry must set clear, time‑bound research roadmaps for technologies such as optoelectronic networks and co‑packaging, and provide funding, otherwise other countries will outpace India.
EVIDENCE
He cites examples like optoelectronic networks, the need for 100 Gbps to 1.6 Tbps upgrades, and the lack of sufficient industry funding for goal-directed research, urging industry to create roadmaps and co-fund projects [287-300].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need for goal-directed research and industry-defined outcomes is noted in the roadmap discussion [S1].
MAJOR DISCUSSION POINT
Goal‑directed research funding
AGREED WITH
Professor Jayadeva, Shri Ghanshyam Prasad
Argument 4
Physical prototyping remains a bottleneck; digital twins can reduce cycle time, but hardware resources and rapid prototyping infrastructure are still needed (Tarun Dua)
EXPLANATION
Tarun points out that while digital twins and LLM‑driven simulations can speed up design, actual physical prototyping still requires costly equipment, 3D printers, and chip‑design facilities, which are scarce in India.
EVIDENCE
He mentions the need for nearby 3D printers, chip design tools, and physics simulations, and suggests that LLMs can accelerate the thinking part but physical prototyping remains a challenge [406-412].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Digital twins can accelerate design cycles, though physical prototyping remains a challenge, as discussed in the digital twins momentum report [S19].
MAJOR DISCUSSION POINT
Prototyping and digital twins
Argument 5
Building domestic volume for semiconductor components will enable commercialization of Indian IP and reduce reliance on imports (Tarun Dua)
EXPLANATION
Tarun argues that creating a sizable domestic market for semiconductor IP will allow Indian designs to be manufactured locally, reducing dependence on foreign suppliers and fostering a self‑sustaining ecosystem.
EVIDENCE
He explains that most semiconductor IP is already developed in India, but without domestic volume the market cannot sustain production; once volume exists, licensing and commercialization will follow, accelerating the ecosystem [447-452].
MAJOR DISCUSSION POINT
Domestic semiconductor market development
Parth Sarthi
3 arguments · 188 words per minute · 1879 words · 597 seconds
Argument 1
Scaling laws show that AI progress is fundamentally a compute problem; more GPUs and data drive capability gains (Parth Sarthi)
EXPLANATION
Parth explains that empirical scaling laws demonstrate a predictable power‑law relationship between compute (or model size) and performance loss, meaning that each ten‑fold increase in compute yields consistent improvements, making AI advancement an engineering challenge of acquiring more GPUs, data, and electricity.
EVIDENCE
He references the 2020 OpenAI Scaling Laws paper, noting a straight-line log-log relationship with exponent ~0.07, meaning a 5% loss reduction per parameter doubling, and cites the Chinchilla correction that data and compute must be scaled together [132-140][141-148].
MAJOR DISCUSSION POINT
Compute scaling laws in AI
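The power law described above can be sketched directly. This is a minimal one-variable illustration of the assumed form L(N) ∝ N^(−α) with α ≈ 0.07; the constant c is arbitrary and not from the session.

```python
# Sketch of the log-log scaling relation: loss falls as a power law in
# parameter count N, so each doubling multiplies loss by 2**(-alpha).
def loss(n_params: float, alpha: float = 0.07, c: float = 1.0) -> float:
    """Loss as a power law in parameter count (straight line in log-log)."""
    return c * n_params ** (-alpha)

# Fractional loss reduction per parameter doubling: roughly 5%,
# matching the figure quoted in the evidence above.
reduction = 1 - loss(2e9) / loss(1e9)
```

Note that this single-variable sketch ignores the Chinchilla correction also cited above, under which parameter count and training tokens must be scaled together.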
Argument 2
The India mission has already provisioned 38 000–50 000 GPUs, creating a domestic compute base for sovereign frontier models (Parth Sarthi)
EXPLANATION
Parth notes that under the India mission, the country has already allocated between 38,000 and 50,000 GPUs, establishing a substantial domestic compute infrastructure for building sovereign AI models.
EVIDENCE
He states that India now has 38,000 GPUs scaling to over 50,000, thanks to the mission, and expects further scaling in the near future [131].
MAJOR DISCUSSION POINT
Domestic GPU provisioning
Argument 3
The AI Kosh initiative and domestic GPU clusters will keep training data and models on Indian soil, ensuring sovereignty (Parth Sarthi)
EXPLANATION
Parth highlights that the AI Kosh program provides Indian datasets, and combined with locally hosted GPU clusters, it ensures that both data and model training remain within India, preserving data sovereignty.
EVIDENCE
He mentions AI Kosh as a source of indigenous datasets and notes that domestic GPU clusters, supported by tax incentives for data-centres, keep compute and data inside the country [402-405].
MAJOR DISCUSSION POINT
Data sovereignty via AI Kosh
Professor Jayadeva
3 arguments · 150 words per minute · 2406 words · 958 seconds
Argument 1
Lack of clear career pathways and incentives discourages Indian students from pursuing PhDs; industry‑university joint projects are essential (Professor Jayadeva)
EXPLANATION
Professor Jayadeva argues that Indian students see limited career prospects after a PhD, face social and parental pressures, and lack attractive incentives, making industry‑university collaborations crucial to retain talent and provide meaningful research opportunities.
EVIDENCE
He discusses the missing career connect for PhD graduates, parental pressure, gender disparity, and the need for industry-university joint labs, citing examples of employees working full-time on PhD projects and the difficulty of scaling research without clear incentives [108-118][267-270].
MAJOR DISCUSSION POINT
Talent pipeline and incentives
AGREED WITH
Shri Ghanshyam Prasad, Tarun Dua
Argument 2
A balanced mix of basic, applied, and goal‑directed research is needed, with stronger translation mechanisms between labs and industry (Professor Jayadeva)
EXPLANATION
He stresses that an ecosystem should support fundamental, applied, and goal‑directed research simultaneously, and that effective translation from university labs to industry is essential for innovation and commercialization.
EVIDENCE
He emphasizes the need for all three research types, cites the VLSI Design Tools and Technology Program (running since 1996) that sponsors students, and gives the Berkeley-Cadence joint lab example to illustrate successful translation [307-317][318-324][329-336].
MAJOR DISCUSSION POINT
Research mix and translation
Argument 3
India excels in chip‑design IP; the major cost lies in design rather than fabrication, and domestic design capability is already strong (Professor Jayadeva)
EXPLANATION
Professor Jayadeva points out that India’s strength lies in VLSI and chip‑design IP, with most cost incurred during design rather than manufacturing, and that Indian engineers already handle a large share of global chip design work.
EVIDENCE
He describes India’s VLSI ecosystem, the SCL plant’s limited role, the dominance of design work in Bangalore, Pune, Hyderabad, and Noida, and notes that design cost dominates semiconductor development, with many gold-medal students staying for PhDs and chip projects [478-487].
MAJOR DISCUSSION POINT
Chip‑design expertise
Suvrat Bhoosha
5 arguments · 60 words per minute · 1654 words · 1631 seconds
Argument 1
Enterprise‑grade cloud infrastructure at significantly lower cost democratizes AI and empowers thousands of innovators
EXPLANATION
Suvrat states that by offering high‑performance GPU clusters at reduced prices, the company is making advanced AI capabilities accessible to a broad base of developers and startups, thereby supporting India’s sovereign AI ambitions.
EVIDENCE
He notes that the infrastructure delivers H100, H200 and B200 GPUs, empowering over 10,000 innovators, and that this work builds the foundational infrastructure for India’s AI goals [81-82].
MAJOR DISCUSSION POINT
AI democratization through affordable compute
Argument 2
Accurate forecasting of AI‑driven data‑center energy consumption and short‑term grid preparation is essential
EXPLANATION
Suvrat asks the panel to estimate how much power modern AI data centres will require relative to the nation’s total electricity demand and how India can ready its grid over the next 12, 24 and 36 months.
EVIDENCE
He directly poses the question to Shri Ghanshyam Prasad, requesting forecasts of data-center energy use and preparation timelines of 12-36 months [218-220].
MAJOR DISCUSSION POINT
Energy planning for AI infrastructure
Argument 3
Co‑locating data centres with renewable‑energy generation hubs can reduce transmission constraints and improve reliability
EXPLANATION
Suvrat suggests that placing large AI compute facilities near regions rich in renewable generation, such as Rajasthan, would minimise transmission losses and help meet the reliability requirements of AI workloads.
EVIDENCE
He raises the issue of moving data centres close to energy hubs, asking panelists for their perspective on locating centres near renewable-rich states versus current hubs like Mumbai [237-239].
MAJOR DISCUSSION POINT
Data‑centre siting and energy integration
Argument 4
India must strengthen undergraduate readiness and retain PhD talent to sustain an AI ecosystem
EXPLANATION
Suvrat queries the panel on how prepared Indian undergraduates are for AI workloads and why many students seek PhDs abroad, emphasizing the need for stronger domestic graduate programmes and incentives.
EVIDENCE
He asks Professor Jayadeva about undergraduate readiness, motivations for studying abroad, and ways to empower more Indian students to pursue PhDs within the country [247-250].
MAJOR DISCUSSION POINT
Talent pipeline for AI
Argument 5
Current students view PhD programmes as a less attractive career path, requiring clearer incentives
EXPLANATION
Suvrat probes how students perceive PhDs immediately after college, highlighting social and financial deterrents that may discourage them from pursuing research careers.
EVIDENCE
He asks, “How do students today look at PhD as a career path right out of college?” prompting discussion on perceptions and barriers [266-267].
MAJOR DISCUSSION POINT
Perception of PhD careers
Audience
5 arguments · 168 words per minute · 972 words · 346 seconds
Argument 1
A comprehensive IoT and digital‑twin layer across generation, transmission and distribution is needed to generate the massive data required for AGI training
EXPLANATION
The audience points out that while data‑centre capacity is discussed, the physical sensor network that feeds real‑time power‑system data is missing, limiting the creation of end‑to‑end digital twins essential for AI model development.
EVIDENCE
They note the absence of IoT sensors in generation, transmission and distribution, cite a pilot digital-twin for Haryana that is not yet scaled, and ask where the ecosystem for the physical layer is being built [351-366].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The importance of digital twins and extensive sensor networks for AI training is emphasized in the digital twins report [S19] and the smart-meter rollout discussion [S1].
MAJOR DISCUSSION POINT
Physical data collection for AI
Argument 2
All grid and AI‑related data must be stored within India to ensure data sovereignty and security
EXPLANATION
The audience stresses that for AGI development, data generated by smart meters and other sensors should never leave Indian jurisdiction, warning that foreign hosting poses security risks.
EVIDENCE
They argue that data from smart meters and future AI systems must be housed in Indian data centres, emphasizing cyber-security and sovereignty concerns [393-401].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Data sovereignty concerns and domestic hosting of smart-meter data are highlighted in the roadmap [S1].
MAJOR DISCUSSION POINT
Data localisation and security
Argument 3
India needs a domestic semiconductor and microprocessor manufacturing ecosystem to support AI infrastructure and reduce reliance on imports
EXPLANATION
Audience members call for the development of indigenous microprocessor production capacity, noting the current dependence on foreign chips and the strategic importance of a local supply chain for AI data‑centres.
EVIDENCE
They highlight the lack of a clear framework, the expectation of a 2025 domestic microprocessor capability, and the necessity of a large domestic market for semiconductor components to enable local production [428-444].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Domestic manufacturing of power-electronics like IGBT is being pursued, indicating steps toward a local semiconductor ecosystem [S2].
MAJOR DISCUSSION POINT
Indigenous semiconductor manufacturing
Argument 4
Government should enable IoT sensor deployment for physical assets (e.g., vending machines) to create data streams for AI‑driven automation
EXPLANATION
The audience argues that without government‑supported instrumentation, physical systems such as vending machines cannot generate the sensor data needed for AI applications, limiting automation and job creation.
EVIDENCE
They describe the absence of sensors in vending machines, the need for connectivity, and ask why the government does not facilitate the necessary IoT infrastructure [413-424].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Smart-meter and SCADA deployment illustrates government support for IoT sensor infrastructure, reinforcing the need for broader sensor rollout [S1].
MAJOR DISCUSSION POINT
Government facilitation of IoT for AI
Argument 5
A coordinated policy framework is missing, leading to parallel, unsequenced development across sectors
EXPLANATION
The audience observes that there is no overarching framework dictating the order of sectoral development, resulting in simultaneous but disconnected initiatives that hinder efficient progress.
EVIDENCE
They state, “There is no framework. Nothing comes first or later. Everything goes in parallel,” reflecting concerns about fragmented planning [436-440].
MAJOR DISCUSSION POINT
Need for coordinated policy framework
Agreements
Agreement Points
India must scale GPU compute dramatically to support sovereign AI models
Speakers: Parth Sarthi, Tarun Dua, Suvrat Bhoosha
The India mission has already provisioned 38,000-50,000 GPUs, creating a domestic compute base for sovereign frontier models (Parth Sarthi). India will require roughly 128,000 GPUs for the top 1,000 organisations; a leapfrog is needed to meet this demand (Tarun Dua). Enterprise-grade cloud infrastructure at significantly lower cost democratizes AI and empowers thousands of innovators (Suvrat Bhoosha).
All three speakers stress that India must dramatically increase its GPU compute capacity, with the India mission already providing 38,000-50,000 GPUs (Parth) and an estimated need of about 128,000 GPUs for the top 1,000 organisations (Tarun), while Suvrat highlights affordable enterprise-grade GPU clusters as a democratising force [131][194-206][81-82].
POLICY CONTEXT (KNOWLEDGE BASE)
The government’s shared compute framework already provides over 38,000 GPUs and policy roadmaps call for scaling to millions of GPUs to meet national AI demand [S45]; this aligns with broader sovereign AI strategies emphasizing domestic compute capacity [S36].
Expanding renewable and low‑carbon power (including hydro‑pump storage and nuclear SMRs) is essential to meet AI data‑centre energy needs
Speakers: Shri Ghanshyam Prasad, Tarun Dua
Rapid expansion of renewable capacity (250+ GW) and upcoming 100 GW hydro-pump storage to support future AI workloads (Shri Ghanshyam Prasad). Advocacy for Small Modular Reactor (SMR) nuclear plants co-located with data centers to provide reliable, low-cost energy (Tarun Dua).
Shri Prasad points to the rapid growth of renewable capacity to over 250 GW and a planned 100 GW of hydro-pump storage to meet AI workloads, and Tarun proposes co-locating SMR nuclear reactors with data centres to provide reliable, low-cost power [57-60][77-78][238-246].
POLICY CONTEXT (KNOWLEDGE BASE)
Strategic placement of data centres next to surplus renewable sources is advocated to reduce transmission constraints and improve reliability, as highlighted in sustainability forums [S51]; similar considerations arise from discussions on AI’s high electricity consumption [S48].
Data generated from the power grid and AI workloads must remain within India to ensure sovereignty and security
Speakers: Audience, Shri Ghanshyam Prasad, Parth Sarthi
A comprehensive IoT and digital-twin layer across generation, transmission and distribution is needed to generate the massive data required for AGI training (Audience). Massive rollout of smart meters and indigenous SCADA systems is required to generate reliable grid data; all data must be hosted within India (Audience & Shri Ghanshyam Prasad). The AI-Kosh initiative and domestic GPU clusters will keep training data and models on Indian soil, ensuring sovereignty (Parth Sarthi).
The audience raises the need for an IoT and digital-twin layer and for all data to stay within India; Shri Prasad confirms the rollout of tens of millions of smart meters and indigenous SCADA systems to keep data domestic, and Parth notes AI-Kosh and domestic GPU clusters ensure data and model training remain on Indian soil [351-366][373-383][400-401][402-405].
POLICY CONTEXT (KNOWLEDGE BASE)
India’s roadmap mandates that smart-meter and grid data be hosted domestically, reflecting a policy focus on data sovereignty for critical infrastructure [S39]; the AI Impact Summit also called for Indian user content to be localized [S42] and emphasized legal control over data access [S43].
Strengthening the research talent pipeline requires better incentives, funding mechanisms and industry‑university collaboration
Speakers: Professor Jayadeva, Shri Ghanshyam Prasad, Tarun Dua
Lack of clear career pathways and incentives discourages Indian students from pursuing PhDs; industry-university joint projects are essential (Professor Jayadeva). Government initiatives such as the ANRF fund aim to bridge research to commercialization and support PhD-level work (Shri Ghanshyam Prasad). Industry should define goal-directed research roadmaps and co-fund projects to ensure timely, market-relevant outcomes (Tarun Dua).
Professor Jayadeva highlights the lack of clear career pathways and incentives for PhDs, calling for industry-university joint projects; Shri Prasad mentions the ANRF fund to bridge research to commercialization; Tarun stresses the need for industry-defined, goal-directed research roadmaps and co-funding [108-118][267-270][271-276][287-300].
POLICY CONTEXT (KNOWLEDGE BASE)
Consensus among stakeholders highlights the need for a broad ecosystem, multi-stakeholder collaboration, and expanded talent development beyond narrow skill training [S46]; this aligns with national objectives to build a next-gen AI workforce.
Locating data centres near renewable generation hubs and diversifying landing points reduces transmission constraints and improves reliability
Speakers: Shri Ghanshyam Prasad, Tarun Dua
Need for green-power supply, diversified landing points and extensive transmission planning to meet data-center demand (Shri Ghanshyam Prasad). Advocacy for Small Modular Reactor (SMR) nuclear plants co-located with data centers to provide reliable, low-cost energy (Tarun Dua).
Both Shri Prasad and Tarun advocate diversifying data-centre locations, with Shri Prasad urging multiple landing points beyond Mumbai and Chennai and Tarun suggesting SMR-powered campuses near renewable-rich regions [66-68][69-74][238-246].
POLICY CONTEXT (KNOWLEDGE BASE)
Best-practice recommendations stress co-locating data centres with renewable generation to leverage surplus power, lower latency, and cooler climates, as discussed in sustainable digital growth sessions [S51].
Similar Viewpoints
Both speakers argue that industry must play an active role in shaping research agendas—Jayadeva through joint labs and Tarun via goal‑directed roadmaps and co‑funding [108-118][287-300].
Speakers: Professor Jayadeva, Tarun Dua
Lack of clear career pathways and incentives discourages Indian students from pursuing PhDs; industry-university joint projects are essential (Professor Jayadeva). Industry should define goal-directed research roadmaps and co-fund projects to ensure timely, market-relevant outcomes (Tarun Dua).
The audience’s call for extensive IoT sensor networks and digital twins aligns with Tarun’s observation that physical prototyping is a bottleneck and that digital twins can accelerate design cycles, indicating a shared concern about bridging the physical‑digital gap [351-366][406-412].
Speakers: Audience, Tarun Dua
A comprehensive IoT and digital-twin layer across generation, transmission and distribution is needed to generate the massive data required for AGI training (Audience). Physical prototyping remains a bottleneck; digital twins can reduce cycle time, but hardware resources and rapid prototyping infrastructure are still needed (Tarun Dua).
Unexpected Consensus
Government acknowledgement of a missing IoT and physical data‑collection layer
Speakers: Audience, Shri Ghanshyam Prasad
A comprehensive IoT and digital-twin layer across generation, transmission and distribution is needed to generate the massive data required for AGI training (Audience). Massive rollout of smart meters and indigenous SCADA systems is required to generate reliable grid data; all data must be hosted within India (Audience & Shri Ghanshyam Prasad).
While the audience highlighted the absence of IoT infrastructure, Shri Prasad unexpectedly confirmed the gap and described ongoing smart-meter deployment and indigenous SCADA development to address it [351-366][367-370][373-383].
POLICY CONTEXT (KNOWLEDGE BASE)
Analyses of global IoT policy gaps note India’s limited IoT security framework, underscoring the need for a physical data-collection layer [S40]; domestically, the push for indigenous SCADA and smart-meter hosting reflects this acknowledgement [S39].
Data sovereignty for grid and AI data
Speakers: Audience, Shri Ghanshyam Prasad
All grid and AI-related data must be stored within India to ensure data sovereignty and security (Audience). Massive rollout of smart meters … all data has to be housed in the country itself (Shri Ghanshyam Prasad).
Both the audience and Shri Prasad stress that all data generated by smart meters and future AI systems must remain in India, with Shri Prasad confirming policies to keep smart-meter data domestic, reflecting a consensus on data localisation [393-401][400-401].
POLICY CONTEXT (KNOWLEDGE BASE)
Policy discussions repeatedly stress that sovereignty extends beyond localization to control over legal frameworks, encryption keys, and infrastructure management for both grid and AI data [S43]; Indian panels have shown high strategic alignment on these principles [S44].
Overall Assessment

The panel shows strong convergence on several strategic pillars: massive GPU compute scaling, expansion of renewable and low‑carbon power (including hydro‑pump storage and SMRs), domestic data sovereignty through smart‑meter and SCADA rollouts, and the need for robust research funding and industry‑university collaboration to nurture talent. There is also agreement on diversifying data‑centre locations and linking them to renewable hubs.

High consensus across government, industry and academia on the core enablers (energy, compute, data, talent). This alignment suggests a coordinated policy and investment agenda is feasible, though implementation details (e.g., timelines for SMRs, scaling of smart‑meter networks) will require continued multi‑stakeholder effort.

Differences
Different Viewpoints
Unexpected Differences
Takeaways
Key takeaways
Building sovereign AI in India requires an integrated ecosystem spanning energy, compute, talent, and data sovereignty.
AI-driven data centers will impose variable, high-reliability loads; meeting N+1+1 reliability and green-power supply is a major challenge.
Rapid expansion of renewable capacity (250+ GW) and upcoming hydro-pump storage (≈100 GW) are critical to support future AI workloads.
Small Modular Reactor (SMR) nuclear power is seen as a promising, low-cost, reliable source for data-center clusters, though timelines are uncertain.
India's compute demand could reach ~128,000 GPUs for the top 1,000 organisations; the current mission has provisioned 38,000-50,000 GPUs and aims to scale further.
Scaling laws demonstrate that AI progress is fundamentally a compute-and-data problem; more GPUs and better algorithms drive capability gains.
A robust talent pipeline is essential: clear career pathways, industry-university joint projects, and stronger incentives are needed to retain PhDs and researchers.
Government initiatives such as the ANRF fund, RDSS program, and smart-meter rollout aim to bridge research, commercialization, and data generation.
Physical data collection (IoT, smart meters, SCADA) is currently a bottleneck; indigenous solutions and keeping data within India are required for security and sovereignty.
India already has strong chip-design expertise; the main gap lies in volume manufacturing and domestic supply of power-electronics components.
Collaboration models that blend basic, applied, and goal-directed research, with industry defining roadmaps, are necessary for rapid innovation.
Resolutions and action items
Continue dynamic, six-month transmission planning and annual resource-adequacy updates (CEA).
Accelerate rollout of smart meters: >3 crore installed, ~25 crore in pipeline, to enable grid-level data and indigenous SCADA development.
Leverage the RDSS program to fund reliability and automation projects, including indigenous SCADA systems.
Utilize the ANRF fund (≈₹1 lakh crore) to support university-industry research clusters and translate PhD work into commercial products.
E2E Networks to explore co-location of SMR nuclear plants with future data-center campuses and assess feasibility within 3-5 years.
Chariot/India mission to expand GPU capacity beyond the current 38,000-50,000, targeting the projected 128,000-GPU requirement.
Industry (e.g., Tarun's team) to define and share goal-directed research roadmaps with academia for timely, market-relevant outcomes.
Promote joint industry-university programs (e.g., VLSI Design Tools and Technology Program) to sponsor student research and patents.
Encourage startups to map grid assets and develop domain-specific RL environments for Indian languages and sectors.
Commit to increasing domestic content of power-electronics (e.g., IGBT) to 100% within defined timeframes.
Unresolved issues
Exact timeline and regulatory pathway for large-scale SMR deployment near data centers remain unclear.
Financing mechanisms and scaling of physical prototyping infrastructure (3-D printing, chip fab access) are not yet resolved.
How to achieve full 100% domestic manufacturing of advanced semiconductor components and power-electronics in the near term.
Specific strategies to make PhD careers financially attractive and to retain talent within India need further definition.
Comprehensive plan for end-to-end IoT data collection across generation, transmission, distribution, and consumer layers is still pending.
Details of data-sovereignty enforcement for all AI-related datasets and models beyond smart-meter data were not fully addressed.
Coordinated funding and governance framework for multi-sector research (energy, compute, semiconductor) remains to be formalized.
Suggested compromises
Adopt a mixed renewable-plus-nuclear supply strategy for data centers, acknowledging SMR may take longer than initially hoped.
Combine goal-directed industry research with basic academic inquiry to ensure both rapid innovation and long-term breakthroughs.
Encourage data-center placement near renewable hubs while also diversifying landing points to balance grid load and transmission costs.
Provide partial financial incentives (e.g., industry-sponsored fellowships) to make PhD pathways more attractive without full government funding.
Use digital twins and RL environments to reduce physical prototyping cycles, while still investing in limited high-value hardware labs.
Thought Provoking Comments
Building true frontier intelligence from India is a monumental ecosystem play. We cannot simply import models and talents, run them on borrowed infrastructure, and call them our own. If we want to solve India‑scale problems at population scale, we must own the power, the hardware, and the talent and the research that drives them.
Sets a bold, holistic thesis that AI sovereignty requires an end‑to‑end domestic ecosystem—not just software—framing the entire discussion.
Established the overarching problem statement, prompting each panelist to address their pillar (energy, compute, research) as part of a unified ecosystem. It guided the subsequent questions and kept the conversation anchored to the theme of self‑reliance.
Speaker: Suvrat Bhoosha
The challenge is first how do I maintain a variable load? How do I meet the N plus 1 plus 1 criteria of supply? … If you want to classify yourself exactly green data centers, then that means I need to ensure only green power flows into your data center, which means a combination of solar, wind, battery, hydro, hydro pump storage, or any such type of a combination.
Highlights the technical and reliability complexities of powering large AI data centers with renewable energy, introducing the concept of ‘N+1+1’ redundancy and green‑only supply.
Shifted the conversation from abstract energy availability to concrete grid reliability and renewable integration challenges, leading to follow‑up questions about locating data centers near generation and the role of nuclear/renewable mix.
Speaker: Shri Ghanshyam Prasad
We used to do everything for the world, not for ourselves. Now we are saying that not only are we going to do things for ourselves, we are going to do things for the world. We become the innovation hub and the innovation capital of building cloud infrastructure for the world.
Frames India’s evolution from an outsourcing economy to a global infrastructure provider, linking national ambition with economic opportunity.
Prompted the moderator to ask about compute demand and led Tarun to quantify the GPU needs (128 000 GPUs), which in turn sparked discussion on scaling, leap‑frogging, and India’s potential to become a compute hub.
Speaker: Tarun Dua
Scaling laws show that every time you 10× your compute, your model gets measurably, predictably better. This turned intelligence into an engineering problem, not a science problem. The new ‘reasoning’ models let us spend more RL training compute, which can be done on ordinary CPUs and distributed hardware, not just on the biggest GPU clusters.
Introduces a paradigm shift: instead of chasing ever larger GPU clusters, focus on algorithmic advances (reasoning, RL environments) and leveraging India’s massive human expertise.
Redirected the dialogue from pure hardware scaling to the importance of data, environments, and domain‑specific RL, influencing later audience questions about data collection, IoT, and indigenous datasets.
Speaker: Parth Sarthi
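The scaling-law claim above has a compact standard form in the published literature (e.g., Kaplan et al.'s language-model scaling studies): loss falls as a power law in training compute, so each 10× increase in compute yields a predictable improvement. The sketch below states that general form; it was not a formula presented in the session, and the constants C_0 and α are empirically fitted, with α reported on the order of 0.05 in early language-model studies.

```latex
% Power-law scaling of model loss L with training compute C
% (standard empirical form; C_0 and \alpha are fitted constants)
L(C) \approx \left(\frac{C_0}{C}\right)^{\alpha},
\qquad
\frac{L(10\,C)}{L(C)} = 10^{-\alpha}
```

Because the improvement per 10× of compute is a fixed factor, capability gains are predictable in advance — which is what makes intelligence "an engineering problem" of provisioning compute and data rather than a scientific unknown.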
The biggest bottleneck is the career path for researchers. Students see PhDs as a five‑year detour with little pay, and industry‑university collaborations are often arm‑length. We need models where students can work on industry‑relevant problems while being paid, and we need to bridge the IPR gap.
Diagnoses the talent pipeline problem, linking it to structural incentives, funding models, and industry‑academia partnership dynamics.
Steered the conversation toward human capital, leading to suggestions about joint labs, MS‑Research programs, and the need for policy changes (e.g., ANRF fund) to translate research into commercial outcomes.
Speaker: Professor Jayadeva
We have rolled out more than 3 crore smart meters and have 25 crore in the pipeline. Once the smart meters are in place, we will have indigenous SCADA systems and massive domestic data streams that can be used for planning, optimization, and AI use‑cases.
Directly addresses the audience’s concern about the physical data‑collection layer, showing concrete steps toward building a secure, domestic data ecosystem.
Validated the audience’s point about IoT and data sovereignty, expanded the discussion to include smart‑meter rollout, SCADA development, and the role of startups in mapping assets, linking back to the earlier theme of data availability for AGI.
Speaker: Shri Ghanshyam Prasad (response to audience)
We need a mix of short‑term, medium‑term, and long‑term research. Goal‑directed industry projects give immediate value, but we also need exploratory, high‑risk work. The best model is co‑located labs where industry and academia solve problems together, like the Berkeley‑Cadence example.
Provides a nuanced view of research strategy, arguing against a binary choice between applied and fundamental work and proposing a collaborative lab model.
Deepened the analysis of how to structure the research ecosystem, influencing later remarks about funding versus dialogue, and reinforcing the need for industry‑academia integration.
Speaker: Professor Jayadeva (later comment)
Physical prototyping cycles are long and expensive, but LLMs can accelerate the design phase. Eventually, digital twins will replace many physical prototypes, allowing us to converge physical and virtual development.
Connects AI capabilities (LLMs, digital twins) to the bottleneck of hardware prototyping, suggesting a path to faster innovation.
Bridged the earlier gap between AI compute and the physical layer, reinforcing the earlier point about leveraging AI to accelerate hardware development and answering the audience’s concern about IoT infrastructure.
Speaker: Tarun Dua (audience follow‑up)
Overall Assessment

The discussion was driven by a series of pivotal insights that moved it from a high‑level vision of AI sovereignty to concrete challenges and solutions across energy, compute, talent, and data collection. Suvrat’s opening framed the ecosystem narrative, which Ghanshyam’s reliability and renewable‑grid analysis grounded in energy realities. Tarun’s historical perspective and GPU‑demand forecast quantified the compute pillar, while Parth’s scaling‑law and reasoning‑model argument reframed the compute race as an algorithmic and data problem rather than pure hardware. Professor Jayadeva’s diagnosis of the talent pipeline and research‑industry collaboration added the human capital dimension, prompting concrete policy ideas (ANRF fund, MS‑Research). Audience questions about IoT and data sovereignty forced the panel to acknowledge the physical data‑collection layer, leading Ghanshyam to cite smart‑meter rollouts and indigenous SCADA development. Together, these comments shifted the tone from aspirational to actionable, introduced new sub‑topics (green‑only data centers, nuclear SMRs, digital twins), and deepened the conversation by linking each pillar to the others, ultimately shaping a holistic roadmap for building a sovereign AI ecosystem in India.

Follow-up Questions
How will India’s energy consumption from large AI data centers compare to overall national demand, and how can the grid be planned to meet this load over 12‑36 month horizons?
Understanding the scale and timing of power needs is critical to ensure grid reliability and to avoid bottlenecks as AI workloads grow.
Speaker: Suvrat Bhoosha, Shri Ghanshyam Prasad
Should new data centers be co‑located with renewable generation hubs (e.g., Rajasthan) or SMR nuclear plants, and what are the land, regulatory, and timeline implications?
Proximity to clean power can reduce transmission losses and costs, but requires assessment of land availability, safety zones, and deployment schedules for emerging technologies like SMRs.
Speaker: Suvrat Bhoosha, Tarun Dua, Shri Ghanshyam Prasad
What strategies are needed to build a robust IoT and physical data‑collection layer (smart meters, sensors, SCADA) across the power value chain to feed AI models?
High‑resolution, real‑time data from the distribution network is essential for digital twins, predictive analytics, and sovereign AI development.
Speaker: Audience (Pradeep Subramaniam), Shri Ghanshyam Prasad
How can the government ensure that the massive data generated by smart meters and other IoT devices remains within India’s data‑sovereign infrastructure?
Data sovereignty is vital for national security and for training large language models on indigenous datasets without external exposure.
Speaker: Audience, Shri Ghanshyam Prasad
What policies or platforms can enable physical‑layer integration (e.g., sensors in vending machines) to support agentic AI applications, similar to how UPI enabled digital payments?
Connecting everyday hardware to AI pipelines can unlock new use‑cases and economic growth, but requires standards, incentives, and affordable sensor deployment.
Speaker: Audience
How can India accelerate domestic manufacturing of microprocessors and semiconductor components to reduce reliance on imports and support AI compute needs?
Indigenous chip production lowers supply‑chain risk, creates jobs, and aligns with the broader goal of a sovereign AI ecosystem.
Speaker: Audience, Tarun Dua, Professor Jayadeva
What incentives and career pathways are needed to attract more students to PhD and research careers in AI within India, and how can industry‑academia collaborations be structured?
A critical mass of skilled researchers is required for frontier model development; clear incentives and joint projects can bridge the current talent gap.
Speaker: Professor Jayadeva
How should research funding be balanced between goal‑directed, short‑term industry problems and longer‑term exploratory academic work to foster innovation?
Both types of research are necessary: goal‑directed work drives immediate commercial impact, while exploratory work seeds breakthrough technologies.
Speaker: Tarun Dua, Professor Jayadeva
What are the requirements for building domain‑specific reinforcement‑learning environments (e.g., agriculture, legal, medical) in India to train frontier models?
Tailored RL environments leverage India’s vast domain expertise and can create unique AI capabilities not addressed by Western labs.
Speaker: Parth Sarthi
Can large language models be used to shorten physical prototyping cycles and improve digital‑twin simulations for hardware development?
If LLMs can accelerate design iteration, the time and cost of bringing new hardware to market could be dramatically reduced.
Speaker: Tarun Dua
What is the roadmap for scaling smart‑meter deployment and developing indigenous SCADA systems while ensuring cybersecurity?
Smart meters generate the data needed for AI‑driven grid optimization; secure, locally‑built SCADA platforms are essential for national resilience.
Speaker: Shri Ghanshyam Prasad
How will new funding mechanisms such as the ANRF and CCUS initiatives be leveraged to close technology gaps in the power and AI sectors?
Targeted large‑scale funds can accelerate research, commercialization, and infrastructure projects that are currently under‑resourced.
Speaker: Shri Ghanshyam Prasad
What is the realistic timeline for deploying Small Modular Reactor (SMR) technology to power data centers, and what regulatory or land‑use challenges must be addressed?
SMRs could provide reliable, low‑carbon power for high‑density compute, but their rollout depends on safety zones, licensing, and supply‑chain development.
Speaker: Tarun Dua, Shri Ghanshyam Prasad
How can a large, interconnected domestic market be cultivated to drive volume production of microprocessor IP and encourage domestic chip fabrication?
Market size and demand are key to attracting investment in design and manufacturing ecosystems, reducing dependence on foreign suppliers.
Speaker: Tarun Dua
What steps are needed to increase the domestic content of power‑equipment (e.g., IGBTs) from current 50‑80% to near‑100% indigenization?
Higher domestic content improves supply security and aligns with the goal of a self‑reliant AI and energy infrastructure.
Speaker: Shri Ghanshyam Prasad

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Ethical AI: Keeping Humanity in the Loop While Innovating


Session at a glance: Summary, key points, and speakers overview

Summary

The UNESCO-sponsored panel “Humanity in the Loop” examined how to balance AI innovation with ethical safeguards, emphasizing a human-centred approach to technology deployment [1][2]. UNESCO’s 2021 Recommendation on the Ethics of Artificial Intelligence, adopted by 193 member states, frames ethical AI around human rights, dignity and fundamental freedoms and calls for these principles to be operationalised in practice [21][23][40-41]. Dr. Tawfik Jelassi argued that ethics and innovation are not contradictory but mutually reinforcing, insisting that ethical reflection must be built into AI design from the outset (“ethical by design”) and that the UNESCO recommendation provides a global framework for this [38-40][41]. Debjani Ghosh stressed that the real choice is whether technology serves humanity’s basic needs or fuels conflict, and that accountability ultimately rests with humans; she advocated embedding oversight throughout the development lifecycle and using “sandbox” testing to make ethics an integral part rather than an afterthought [49-56][65-68].


Brando Benifei described the EU AI Act’s risk-based approach, noting prohibited uses such as predictive policing and emotion recognition, and argued that regulation must protect human rights without stifling innovation, while also highlighting the need for global cooperation on issues like military AI [78-84][80-86][191-198]. Virginia Dignam critiqued the narrow “hammer-and-nail” view of innovation, calling for broader, culturally diverse conceptions of AI (e.g., Ubuntu) and for education that equips engineers with social-science perspectives to avoid treating AI as a magical, neutral tool [100-112][124-138]. Paula Goldman shared Salesforce’s practical experience, explaining that embedding ethical controls, real-time accessibility features, and human-in-the-loop escalation mechanisms not only improves inclusivity but also yields superior, more marketable products [140-158][155-159].


The discussion repeatedly highlighted the importance of awareness, capacity-building and multilateral dialogue, with Dr. Jelassi recalling UNESCO’s grassroots projects that used communication tools to empower remote communities, illustrating how AI can be a force for good when coupled with education and advocacy [204-214]. Participants agreed that translating high-level principles into concrete, context-specific mechanisms, through regulation, industry practice, and education, is essential for trustworthy AI deployment [34][65-68][155-159][124-138]. Maria Grazia emphasized that the “human-centered” approach requires not only technical solutions but also deliberate policy instruments and stakeholder participation to define unacceptable uses of AI [88-91][186-188]. The audience raised concerns about involving developers from under-served regions, prompting Debjani to note India’s initiatives such as Startup India that aim to democratise AI design beyond major urban centres [290-298]. Overall, the panel concluded that a coordinated global framework, inclusive design, and continuous human oversight are necessary to ensure AI advances societal welfare while mitigating risks [191-198][226-230][236-242].


Keypoints


Major discussion points


UNESCO’s core position that ethics and innovation are complementary, not opposing forces.


The moderator stresses that “the position of UNESCO… is this is not true” that regulation hinders innovation, and outlines the three pillars (human rights, dignity, and freedoms) that must guide AI ([20-24]). Dr. Tawfik reinforces this by stating that “ethics and innovation… reinforce each other” and that AI must be “ethical by design, ex-ante” ([38-41]).


The need for a risk-based regulatory framework to balance innovation with safeguards.


Brando Benifei explains the EU AI Act’s risk-based approach, naming specific high-risk sectors and prohibited uses (e.g., predictive policing, emotion recognition) and argues that regulation must be proactive rather than purely ex-post ([74-84]). Maria’s follow-up highlights the importance of defining “what we do not want the technology to do” as a regulatory baseline ([88-90]).


Embedding ethical oversight throughout the AI development lifecycle.


Debjani Ghosh argues that oversight must be built “into the entire development process from design to commercialization” with “flag-offs at every part” and sandbox testing, turning ethics into a design principle rather than an afterthought ([65-69]).


Broadening the conceptual and cultural foundations of AI through education and collective intelligence.


Virginia Dignam critiques the “hammer” metaphor and Western-centric, individualistic AI traditions, calling for diverse epistemologies (e.g., African Ubuntu) and a “toolbox” of skills and perspectives ([106-112][124-138]). She later expands this to collective intelligence as the true “AGI,” emphasizing non-neutrality of technology and the need for interdisciplinary skill-building ([235-254]).


Practical industry steps toward inclusive, trustworthy AI.


Paula Goldman describes concrete practices at Salesforce: real-time monitoring, escalation protocols, and designing for accessibility (e.g., handling different accents and disabilities), arguing that inclusive design yields superior, more marketable products ([140-159][220-227]).


Overall purpose / goal of the discussion


The UNESCO-sponsored panel “Humanity in the Loop: Balancing Innovation and Ethics in the Age of AI” was convened to explore how AI can be advanced responsibly. Participants from UNESCO, government, academia, and industry shared perspectives on translating UNESCO’s global AI ethics recommendation into actionable policies, regulatory models, educational curricula, and corporate practices that keep humans at the centre of AI development and deployment.


Overall tone and its evolution


Opening tone: Formal and optimistic, with the moderator framing the debate as a constructive challenge to the “innovation vs. ethics” narrative ([13-18]).


Mid-session tone: Becomes more critical and reflective; Virginia delivers a “controversial” critique of current innovation paradigms ([96-112]), and Debjani stresses the difficulty of universal ethical alignment ([51-56]).


Later tone: Shifts toward collaborative problem-solving, highlighting concrete regulatory proposals (EU AI Act) and practical industry measures (Salesforce’s inclusive design) ([74-84][140-159]).


Closing tone: Hopeful and inclusive, emphasizing collective intelligence, global cooperation, and the need to democratize both access and design of AI ([191-199][235-254]).


Overall, the discussion moves from high-level framing, through critical analysis of gaps, to concrete solutions and a unifying call for global, multidisciplinary cooperation.


Speakers

Tim Curtis


– Role/Title: Regional Director for UNESCO South Asia


– Area of Expertise: UNESCO regional leadership, AI policy and innovation


Debjani Ghosh


– Role/Title: Distinguished Fellow, NITI Aayog; member of the ETIO think-tank for the Government of India


– Area of Expertise: AI ecosystem development, policy formulation, economic and social development initiatives [S4][S5]


Dr. Tawfik Jelassi


– Role/Title: Assistant Director General for Communication and Information, UNESCO


– Area of Expertise: Communication, information & knowledge societies; AI ethics and governance [S6][S7][S8]


Brando Benifei


– Role/Title: Member of the European Parliament


– Area of Expertise: EU AI Act, risk-based AI regulation, international AI policy coordination [S9][S10]


Paula Goldman


– Role/Title: Chief Ethical and Humane Use Officer, Salesforce


– Area of Expertise: Ethical AI implementation in industry, responsible AI product design


Virginia Dignam


– Role/Title: Professor and Director of the AI Policy Lab, Umeå University; member of UNESCO’s AI Ethics Experts Without Borders


– Area of Expertise: AI policy, AI ethics, interdisciplinary education and research [S15]


Rita Soni


– Role/Title: Audience participant (no formal title provided)


– Area of Expertise: (not specified)


Maria Grazia


– Role/Title: Chief of the Executive Office of UNESCO’s Social and Human Sciences sector; Moderator of the panel


– Area of Expertise: Microeconomics, innovation dynamics, AI governance and ethics [S20]


Audience


– Role/Title: General audience members (including individuals such as “Rajan”)


– Area of Expertise: (not specified)


Additional speakers:


(None – all speakers appearing in the transcript are covered in the list above.)


Full session report: Comprehensive analysis and detailed insights

The session opened with Tim Curtis, UNESCO’s Regional Director for South Asia, welcoming participants to the UNESCO-sponsored panel “Humanity in the Loop: Balancing Innovation and Ethics in the Age of AI” and thanking the Government of India for its collaboration. He outlined UNESCO’s aim to promote ethical, human-centred AI while supporting innovation, especially in the Global South [1-3].


Curtis introduced the panellists: Dr Tawfik Jelassi, Assistant Director-General for Communication and Information and a lead of UNESCO’s AI-ethics work; Professor Virginia Dignam, director of the AI Policy Lab at Umeå University and member of UNESCO’s AI Ethics Experts Without Borders; Paula Goldman, Chief Ethical and Humane Use Officer at Salesforce and member of UNESCO’s Business Council; Debjani Ghosh, distinguished fellow at NITI Aayog and architect of India’s AI ecosystem; and Brando Benifei, Member of the European Parliament who would discuss the EU AI Act. The moderator was Dr Maria Grazia from UNESCO’s Social and Human Sciences sector [4-10].


Maria Grazia opened by questioning the premise of the title, arguing that innovation and ethics need not be opposed. Drawing on her microeconomics background, she linked innovation to productivity, welfare and well-being and noted that regulation does not necessarily hinder these dynamics. She reminded the audience of the 2021 UNESCO Recommendation on the Ethics of Artificial Intelligence, adopted by 193 member states and, to date, the only global recommendation of its kind, built on three non-negotiable pillars: human rights, human dignity and fundamental freedoms [12-25][26-33][34-38].


Dr Jelassi responded that there is no contradiction between ethics and innovation; the real tension lies between innovation and over-regulation. He argued that embedding ethical reflection from the design stage makes AI systems more trustworthy and therefore more innovative, emphasizing UNESCO’s “ethical-by-design” principle and the Recommendation’s calls for human oversight, non-discrimination, cultural respect and environmental sustainability [38-41][42-45].


Debjani Ghosh then explored how the high-level principles can be operationalised. She reframed the debate as a choice between using AI to eradicate suffering (e.g., disease, food insecurity, loss of dignity) or to amplify conflict. Acknowledging that universal ethical alignment is impossible, she insisted that accountability must remain with people, not algorithms, and advocated lifecycle “flag-off” checkpoints and sandbox testing to embed ethics as a design principle rather than an afterthought [48-56][57-69].
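The lifecycle “flag-off” idea lends itself to a simple illustration: development proceeds stage by stage, and work stops at the first checkpoint that fails its ethical review. The sketch below is purely illustrative; the stage names and pass/fail checks are assumptions made for the example, not anything specified in the session.

```python
# Illustrative "ethics by design" gating pipeline: each development stage
# must clear its checkpoint (a "flag-off") before the next stage begins.
# Stage names and checks are hypothetical, not an actual standard.

LIFECYCLE_STAGES = [
    "design", "data_collection", "training",
    "evaluation", "sandbox", "commercialization",
]

def run_lifecycle(checks: dict) -> list:
    """Walk the stages in order, stopping at the first failed checkpoint.

    `checks` maps a stage name to a callable returning True when that
    stage's ethical review passed. Returns the list of cleared stages.
    """
    cleared = []
    for stage in LIFECYCLE_STAGES:
        review = checks.get(stage, lambda: False)  # missing review = fail
        if not review():
            break  # flag-off failed: do not advance to later stages
        cleared.append(stage)
    return cleared

# Example: every review passes except the sandbox impact test.
checks = {stage: (lambda: True) for stage in LIFECYCLE_STAGES}
checks["sandbox"] = lambda: False
print(run_lifecycle(checks))
# → ['design', 'data_collection', 'training', 'evaluation']
```

The point of the pattern, in her framing, is that a failed review halts progress at that stage rather than being noted after commercialization.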


Brando Benifei described the EU AI Act’s risk-based approach. He identified high-risk sectors such as workforce, health and justice, and outlined strict requirements on data quality, cybersecurity, governance and human control, while prohibiting applications like predictive policing, workplace emotion recognition and manipulative subliminal techniques. He argued that proactive, risk-based regulation protects human rights without stifling innovation and called for global cooperation on trans-national challenges such as military AI [70-84][85-87][191-198].
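The risk-based logic can be pictured as a tiered lookup: a use case is either on a prohibited list, in a high-risk sector carrying extra obligations, or in a default minimal tier. This toy sketch uses only the example categories named in the session and is not a rendering of the actual legal text.

```python
# Toy risk-tier lookup in the spirit of a risk-based approach, using only
# the example categories mentioned in the session (not the real AI Act).

PROHIBITED_USES = {
    "predictive_policing",
    "workplace_emotion_recognition",
    "manipulative_subliminal_techniques",
}
HIGH_RISK_SECTORS = {"workforce", "healthcare", "justice"}

def risk_tier(use_case: str, sector: str = "") -> str:
    """Classify a use case into one of three illustrative tiers."""
    if use_case in PROHIBITED_USES:
        return "prohibited"   # kept out of the market entirely
    if sector in HIGH_RISK_SECTORS:
        return "high_risk"    # data-quality, governance, human-control duties
    return "minimal"          # no special obligations in this sketch

print(risk_tier("predictive_policing"))        # → prohibited
print(risk_tier("cv_screening", "workforce"))  # → high_risk
print(risk_tier("chess_tutor"))                # → minimal
```

The design choice worth noting is that prohibition is checked first: a banned use case stays banned regardless of which sector deploys it.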


Professor Virginia Dignam critiqued the prevailing “hammer-and-nail” metaphor, warning that treating any new AI tool as a universal “hammer” that can nail every problem limits true innovation. She advocated a broader toolbox that incorporates diverse epistemologies, citing the African Ubuntu philosophy (“we are, therefore I am”) as an alternative to the Western Cartesian view (“I think, therefore I am”). She stressed that AI is an “empty signifier” and that engineers need interdisciplinary training to ask why a problem matters, who benefits and who loses, urging a focus on collective intelligence rather than a monolithic AGI [96-112][124-138][235-254].


Paula Goldman explained how Salesforce translates these ideas into practice. Her team continuously monitors AI agents, defines escalation points where control shifts between AI and humans, and builds inclusive, real-time accessibility features, e.g., accent-aware voice agents and on-the-fly UI corrections for users with disabilities. She argued that inclusive design is a commercial advantage, yielding products that perform better and achieve greater market uptake [140-159][220-227].
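Human-in-the-loop escalation of the kind described is commonly implemented as a gate in front of the agent’s output: the reply is delivered autonomously only while it stays inside agreed bounds, otherwise the case is handed to a person. A hypothetical sketch follows; the threshold, topic list and field names are invented for illustration, not taken from Salesforce’s systems.

```python
from dataclasses import dataclass

# Hypothetical human-in-the-loop gate: an agent reply is delivered only when
# its confidence is high enough and no sensitive topic is involved; otherwise
# the case escalates to a human reviewer. All values here are invented.

SENSITIVE_TOPICS = {"medical", "legal", "self_harm"}
CONFIDENCE_THRESHOLD = 0.85

@dataclass
class AgentReply:
    text: str
    confidence: float  # model's self-reported confidence, 0.0 to 1.0
    topic: str         # topic label from an upstream classifier

def route(reply: AgentReply) -> str:
    """Decide whether a reply is delivered or escalated to a human."""
    if reply.topic in SENSITIVE_TOPICS:
        return "escalate_to_human"  # policy gate, regardless of confidence
    if reply.confidence < CONFIDENCE_THRESHOLD:
        return "escalate_to_human"  # model is unsure: hand off control
    return "deliver"

print(route(AgentReply("Your invoice is attached.", 0.97, "billing")))  # → deliver
print(route(AgentReply("You could try taking…", 0.99, "medical")))      # → escalate_to_human
```

The escalation points she describes correspond to the two gate conditions: a policy boundary that always hands off, and a confidence floor below which the human takes over.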


Audience Q&A


Rajan asked “What is AI policy?” – Prof. Dignam answered that AI policy concerns the tools, skills and knowledge needed to assess AI’s impact throughout its lifecycle, not the technical design itself [263-266].


Rita Soni raised concerns about developers in low-resource settings. Debjani Ghosh replied that democratizing AI design is essential, citing India’s “Startup India” programme and the AI Impact Commons platform (aiimpactcommons.global), which aggregates impact stories from more than 30 countries on issues such as malnutrition, pharma-related suicides and climate resilience [74-80][276-286][290-298].


After Benifei’s remarks, Maria Grazia redirected the discussion to Dr Jelassi, who reiterated UNESCO’s mission to build peace through education, culture and information. He recounted a recent visit to a remote southern African village that lacked radio or Internet; UNESCO’s provision of community radios, telecom infrastructure and early-warning systems transformed lives, illustrating how AI can serve humanity when people are truly at the centre [81-88].


Finally, Maria Grazia thanked the panelists, invited a group selfie and formally closed the session, emphasizing the need for continued multilateral dialogue and collective intelligence [89-90].


Consensus & actions: The panel agreed that (i) innovation and ethics are complementary; (ii) UNESCO’s 2021 Recommendation provides a universal set of principles that must be operationalised through lifecycle-wide ethical checkpoints; (iii) ultimate accountability resides with humans; (iv) capacity-building and interdisciplinary education are vital; and (v) global, inclusive cooperation, especially with the Global South, is essential for coherent AI governance. Proposed actions include urging member states to translate the Recommendation into national policies, expanding the AI Impact Commons, adopting an ethics-by-design lifecycle model with mandatory checkpoints and sandbox testing, creating risk-based regulatory sandboxes, investing in interdisciplinary up-skilling programmes, and fostering multilateral forums to align standards on prohibited uses and address cross-border risks such as military AI [2][38-41][55-60][124-138][191-199][40-41][65-69][141-149][191-198][220-227][290-298].


Unresolved issues highlighted were the challenge of achieving global consensus on ethical values amid cultural diversity, mechanisms for turning UNESCO’s high-level principles into enforceable regulations, systematic inclusion of developers from underserved regions, a precise definition of “AI policy” distinct from technical standards, and robust monitoring frameworks for accountability when harms occur. These gaps point to the need for further research, pilot projects and sustained dialogue [48-56][100-112][263-266][276-286].


Session transcript: Complete transcript of the session
Tim Curtis

Welcome this afternoon to this UNESCO-sponsored event. My name is Tim Curtis, I’m the Regional Director for UNESCO for South Asia, and very happy to have you all for the event today, Humanity in the Loop, Balancing Innovation and Ethics in the Age of AI. Of course we’re grateful to the Government of India for its collaboration on this session, which we at UNESCO believe goes to the heart of our engagement with the ethics of artificial intelligence, namely how to ensure an ethical and human-centred AI deployment whilst also encouraging the development of artificial intelligence and innovation in a technology that can offer so many benefits to humanity, including and in particular to the Global South.

So it gives me great pleasure to just present today’s panellists and moderator. We have Dr. Tawfik Jelassi, who’s Assistant Director General for Communication and Information and who’s really been a pivotal figure in UNESCO’s work on AI ethics. Professor Virginia Dignam, who is Director of the AI Policy Lab at Umeå University; she’s also a member of UNESCO’s AI Ethics Experts Without Borders and has been supporting UNESCO’s readiness assessment methodology in multiple countries. We’re also privileged to have Paula Goldman, Chief Ethical and Humane Use Officer at Salesforce, which is a member of UNESCO’s Business Council, and she has really been leading by example on responsible AI ethics in the private sector. Debjani Ghosh, a distinguished fellow at NITI Aayog, who needs no introduction here in India, a household name in India for her role in building and leading India’s AI ecosystem.

Thank you for coming. And finally, a great pleasure to welcome Brando Benifei, a member of the European Parliament, who will share his insights on the EU AI Act and how they have been able to navigate balancing innovation and ethics. And finally, of course, our moderator, Dr Maria Grazia, Chief of the Executive Office of UNESCO’s Social and Human Sciences sector. Please, Maria Grazia, over to you.

Maria Grazia

Hello, good afternoon. So we’ll try to have this session very dynamic because it’s after lunch, it’s Friday, over five days, very interesting, a long week. So let me start by challenging the very title of this meeting, that is, Balancing Innovation and Ethics in the Age of AI. I’m a microeconomist, which is a very complicated word, which looks like a rude word, but it’s not. It’s mathematics applied to economics, and especially applied to understanding the dynamics of innovation and new technologies. Why am I saying that? Because of course the question of innovation, what drives innovation, how can we get more innovation, is something that we always ask when we study what drives productivity growth, what drives welfare and well-being.

And then at times we also hear this, that having constraints or having frameworks will actually hinder these dynamics. And the position of UNESCO has been very clear. The position is: this is not true. The member states adopted the UNESCO Recommendation on the Ethics of Artificial Intelligence already in 2021, which means that all countries, including India, were discussing these issues since 2019 to get to an agreement on what it actually means and how we can put technologies at the service of humanity, and not let anything that is technologically feasible go ahead if that technological feasibility actually hurts people, hurts humanity. And so for us at UNESCO, ethics of AI means something very concrete.

It means AI technologies, and here I would like to invite you to think that it’s technologies, it’s not one single element, it’s a lot of things, that actually abide by three simple things that too often we take for granted, whereas perhaps we want to think about them more, and these are human rights, human dignity, and fundamental freedoms. And if we are able to develop, deploy, and use technologies in a way that we abide by these three components, then for sure we do have technologies that serve humanity. And why am I challenging the very topic? Because too often the narrative that is used out there puts innovation and ethics, or ethical AI, which actually means an AI that is ethical throughout the life cycle, as trade-offs.

So if we innovate, it cannot be ethical, because by the time it’s gone out, we don’t have the time to check on these things. Well, think of a parallel, and then we take it from there on the concrete dynamics of AI. If you were to think about one sector that is very much regulated, perhaps what comes to mind is pharma, pharmaceutical. Now, to my knowledge, but that can be my ignorance, I have never seen one single study being able to prove that the regulation in that sector has actually hindered the innovativeness, or actually the productivity, or even the remuneration of the sector. So by the same token, the pervasiveness of AI to some extent leads us to think of the pervasiveness of the paracetamol, for instance, that we use whenever we have a headache, as I think some of you this afternoon might have, and after listening to me, perhaps even more.

But, you know, it’s really the pervasiveness of technology that touches our life, each and every day, in many ways. And this is what I think is important to discuss from different perspectives. And allow me to start with my ADG, ADG Jelassi. As I mentioned, from UNESCO we give this global perspective, because the recommendation was adopted by 193 member states. Now, very often, what is very challenging is to go from principles to practice. That is, sometimes we know what we need to do, but then the question becomes, how do we translate it into practice? So, ADG Jelassi, where do you see the biggest gaps between the principles and what instead is happening on the ground?

Dr. Tawfik Jelassi

Thank you, Maria Grazia. Maybe before I briefly answer your question, let me say that you used the words innovation and ethics. I don’t see personally an issue, a contradiction, between the two; I see it more between innovation and regulation, because to be creative, innovative, you should free up the mind of the people, you should not constrain them, you should not tie their hands. I used to be chair of a telecom operator board, and there, of course, with telecom and mobile phones and access to private data of consumers, the issue of regulation is paramount. But we don’t want regulation that hinders innovation. So I don’t see ethics and innovation being in contradiction; to the contrary, I think they reinforce each other. How is that?

Because clearly, if you integrate ethical reflection in the design of AI systems, those systems will be more respected, more trustworthy, more used, and therefore more broadly deployed across society. So I see ethics and innovation really reinforcing each other. Quite often at UNESCO we say AI systems have to be ethical by design; it should be done ex ante, not ex post, not when we see mistakes and hazards and risks and harmful impact of AI and say, wait a minute, let’s go back to see what went wrong in those models, in the data sets, are there some biases, etc. So I think it has to be done from the very early stage, and therefore innovation has to be human-centric and has to be contextualized. There is no one-size-fits-all, we know that. What you can provide is an overarching framework, a broad set of guidelines and principles, as you said, Maria Grazia, and this is what the UNESCO Recommendation on the Ethics of AI is about. You know that this recommendation has been so far the only global recommendation of its kind.

It was adopted back in 2021 by 193 member states of UNESCO, and it calls for human oversight, non-discrimination, respect for cultural diversity, respect for environmental sustainability. These are the principles that need to be translated into action and that need to be operationalized within a certain context.

Maria Grazia

Thank you very much, ADG Jelassi. Let’s actually go to Debjani, because I would like to go further into this operationalization question. So, from your work at NITI Aayog, and also your experience with NASSCOM, what are the mechanisms that can really help embed the ethical reflection into the everyday life of both companies and sectors?

Debjani Ghosh

Thank you. Thank you for having me here. So, first of all, I’ll just go back to the topic, if I may, for a second, right? Because I don’t think the choice is between innovation and ethics. I really don’t. I think the choice is between: do we use technology to ensure that everyone in the world is cancer-free, everyone in the world lives with dignity, everyone in the world has enough to eat, or do we use the technology to make the world a much bigger conflict zone, develop the next atom bomb, and worse. So I think the choice is that. And therefore, the biggest challenge we have, and I hate applying the label of ethics to technology, because I think the biggest challenge we have is: can we, all the wisdom in this room, can we say that we will be successful in aligning every single human on this planet to the same ethical values?

The answer is no. No, we’re not going to be able to do that. And we know we’re not going to be able to do that. So as long as we humans don’t align to the same ethical values, you will always have good actors and you will always have bad actors, and you know that technology is going to be used in ways that are non-ethical. So the accountability, you’ve talked about humanity in the loop, the accountability comes back to us. I think it’s very important to understand that, because in all our dialogues on technology, we somehow delegate the accountability to technology. I don’t think we can as yet. Maybe in another 10 years, when cognitive reasoning becomes a thing, maybe then, but not as yet, because as somebody who actually builds code and builds agents, I know they’re not that intelligent as yet.

So I think the accountability on humans is what we have to focus on. And going back to your question, if you’re talking about how industry ensures this: one of the things I’m very clear about is that regulation is usually an afterthought. You develop the technology and then you say, okay, how do we now regulate it to ensure that it’s used right? And I think that has to fundamentally change. Oversight has to be built into the entire development process, from design to commercialization. And it has to be built with the right flag-offs at every part of the design and development process. If you do that, and you’re able to, you know, red-tape the product that you are developing at every single stage to certain standards that have been developed, you are going to develop something that, and then hopefully after the entire development phase, there’s also a sandbox where you test out the impact.

You will get to a stage where ethics becomes by design versus an afterthought. And I think that’s what we have to move towards.

Maria Grazia

Thank you. I’d like to change the order of the speakers a bit, because you brought in the argument of the regulators, and you have one next to you that I’m going to refer to. How do you see this relationship? Because we know, fundamentally, the regulation that has been pushed in Europe is risk-based. So what was the logic, and how does this relate to what she was discussing, such as human oversight or even the redress mechanisms that we might want to put in place in order to have AI that is ethical?

Brando Benifei

Well, first of all, excuse me for the voice, but that’s it. Exactly, but thanks to technology, you can hear me anyway. So I think that I can also adhere to the point that innovation and ethics are not one against the other. In fact, this summit, which is concentrating on impact, on action, on diffusion, is not separate from keeping track of reflection, of safety, of how to protect human rights, of how to make AI human-centric; the things are intertwined. The point is how we regulate effectively and how we find a good balance. But I want to bring maybe a controversial point to the table, because I have a strong conviction on this. We have chosen globally, including in Europe, which has often been at the forefront of regulating. In one of those rooms now, I was with her in another panel; there was Anu Bradford, professor at Columbia University, who has written the book The Brussels Effect. So in fact the EU has often opened the way for many regulatory pathways. But even Europe has chosen, when looking at social media, to actually not regulate. We have let the social media diffuse without regulation, and today we are discussing limits for minors, we heard about that also in the inaugural session, we are discussing misinformation and labelling of deepfakes, even Prime Minister Modi talked about that in the inaugural session. But we are doing it all now, after a lot of things have happened, and, that’s my opinion, we have already unmodifiable consequences. So I think that when we talk about when we should regulate, the question is whether we should let the innovation flow and act only ex post.

Sometimes we might be wrong and risk unchangeable effects. So we need to build a balance that doesn’t hinder innovation, but also identifies human rights challenges. The AI Act tried to build a risk-based approach, identifying areas where we need AI to be overseen: workforce use of AI, healthcare use of AI, administration of justice use of AI. We want to be sure, when we deal with that, that data used for training is quality data, cybersecurity is sufficient, the governance of the data is solid, and there is human control. These are examples of what we have identified. And in fact, we even chose to prohibit a few use cases, for example, predictive policing, emotional recognition in workplaces and in study places, manipulative subliminal techniques.

I don’t think it’s a taboo to choose that some use cases of AI, we don’t want them in our society, and we just keep them out. So I think this approach based on the risk, you can look if you like it this way, if you want to modify, but it’s an interesting perspective, because you can choose what you think is in need of a certain regulation, and you can also promote transparency, which I think is crucial to build trust. Without trust, especially in democratic contexts, it’s impossible to accelerate adoption of AI, which is still a big challenge from both the global north and the global south. The numbers tell us that a lot of companies, or public administrations that could benefit from an ethical and correct use of AI, they are not using it because they don’t know what could

Maria Grazia

You put forward a very important point, Brando, that is, perhaps we might not be able, or we might not want, to decide what the technology should do for us. But for sure we might want to discuss and agree on what we do not want the technology to do for us, because these are unacceptable uses of deployment. And in this case, this also highlights the importance of awareness, of the centrality of people, of having this human-centered approach. And here I would like to invite Virginia into the conversation because of course you, as an educator, as part of this beautiful world of educators, as a professor, you have this constant contact and the ability to interact with and nurture humankind.

So what do we have to do to avoid that people are just consumers or, you know, possibly exposed to it, instead of steering the technology toward where we want to go?

Virginia Dignam

Sure. Thank you very much. Thank you for inviting me to be here. Again, like all my previous colleagues, I want to go back to the title. And I’m not going to talk about the balancing part. I’m just going to claim, and to be controversial and to wake you all up: we are doing both the innovation and the ethics-and-regulation side all wrong. We are doing it not in the way that it needs to be done. On the innovation side, we are doing it wrong because we are somehow understanding innovation as the capacity of using this hammer that we found a couple of years ago, of Gen AI or whatever. And now we want to use the hammer to nail any nail that we find.

Innovation is much more than that. innovation is really challenging ourselves to go further. And I want to go back to a sentence that has come with me and is the main thing I’m taking from this summit today. In a couple of sessions ago where I spoke, someone was saying, most people developing AI never experienced power cuts, never experienced broken roads. I would like to go further. AI, and I have been working in AI for 40 years, all the different types of AI that existed before, has been developed extremely on the Western tradition, the Cartesian tradition. We think, therefore we are. I think, therefore I am. First it is individualistic, and then equates intelligence with cognition. Human intelligence is much more than cognition.

If you think about AI developed, for instance, in the African Ubuntu tradition, which says "we are, therefore I am", it would be a completely different type of AI. So we do need to challenge ourselves not to go around with this hammer that is already there, trying to find nails and calling that innovation. It is not innovation. It's just running around like headless chickens to see if one of those hammers works. So that's one. On the side of ethics and regulation, there are two assumptions that usually come with the idea, especially in this kind of combination: that ethics is this finger that points, thou shalt behave, thou shalt be good, and that regulation is about prohibiting you from doing things.

Neither is ethics that finger, nor is regulation necessarily only about prohibitions. Moreover, regulation, like AI, like the hammer, like the telephone, is an artifact that we built. We built regulation, and we can apply to regulation, and to the application of ethics, exactly the same principles that we apply to technology: let's experiment, let's try, let's verify, let's evaluate, let's see what's there, and not hold on to this idea of the pointing finger, or of laws written in stone that stay there once and forever. So that's going back to the title. And now, very quickly, to your question, because I don't want to take much time: I think that education needs to start exactly at this point. Technology alone is not enough. So we really need to improve the education of engineers, computer scientists and data scientists on the humanities side. As engineers, we know very well how to solve a problem; we never ask ourselves why this is a problem, who has this problem, what the alternatives to my solution are, who gains, who loses, what is gained, what is lost. That is the humanities. We need to somehow bring that together, in the engineering case and in the humanities and social science case.

We need them, because I'm an engineer, to help us understand that we need to be much more precise in what we are talking about. AI at this moment is actually an empty signifier. It means nothing. Everything is AI, nothing is AI, all kinds of things are AI: the applications are AI, the sectors are AI, the technology is AI, the research, everything is AI. And we cannot just go around with this word, which actually means magic. In most politicians' speeches, it means magic. And we want to regulate magic? Okay, good luck. So we need the humanities and the social sciences to really help us be precise about what we are doing. This is the education we need.

Maria Grazia

Fantastic. You couldn't have made it easier for me to then ask Paula: how are we doing that in companies? Because all this is very easy to say. We need to translate the principles, the values, into concrete models that actually work: work for a company, work to deliver results, and work for people.

Paula Goldman

Yes, indeed. Well, first of all, thank you for that. We were just talking about how this is our last speaking panel of the week, and that was a fiery way of drawing things together. I really appreciate it, kind of an energy boost. So yes, I think the answer is actually much more practical and much less abstract than one might imagine, and I'll tell you a little bit about my experience. I spend my days at Salesforce both testing our products and making sure that our AI has features baked into it so that our customers can observe what's going on, know how to tweak the controls, and understand, for example, when they should set an AI agent to escalate to a human, or a human to escalate back to AI, and so on.

And when we do this, it's not that we think we at Salesforce have all the answers, because clearly we don't, and we serve a variety of industries all over the world. But all of our customers are basically asking the same questions, right? They're asking: how do I know what kind of results I'm getting? How can I tell if something goes wrong? What are my options if something goes wrong? What part of AI ethics is your responsibility and what part is mine? And these questions don't necessarily have the most mature answers, because we're in the early innings of AI agents and there's a lot more work to do. But these are actually the right questions to be asking, and they also allow for some flexibility and some cultural or industry specificity for people to find the right answers.

So that would be part one of my answer: it's actually very, very practical. To adopt AI, companies and organizations need to be able to trust that it's going to work. They don't want to be embarrassed by it, right? And they're not going to be able to scale it if it doesn't work. So that's number one. The second thing, which we increasingly find when we work with companies on this, is that the companies most successful at scaling AI put people at the center of the transformation. They don't work just top-down, as in "you shall use this application". They give people a chance to have a voice about what is actually working.

What is actually most useful to them in their day-to-day work? Where is AI actually going to help them, and where is it kind of useless? It's that kind of understanding of how work actually gets done, of which actual processes are going to benefit from that kind of application, that I think is really important, and it allows people to stay at the center of this large-scale transformation that we're part of.

Maria Grazia

…that might happen or should happen in the context of making AI ethical by design?

Debjani Ghosh

Well, in my current role at NITI Aayog, which is the think tank of the Government of India, we're looking at what the unlocks are for technology, including AI, to ensure that we can use technology to solve some of the biggest problems, right? Now, what Professor Virginia said about AI as a hammer, I think that's a luxury of the developed countries, and I do agree with you when it comes to them. But when you come to developing countries, where you don't have a lot of resources, you cannot afford to use technology that takes deep investment to do things where you're not sure of the ROI. And one of the examples I want to give: as part of this summit, there were seven working groups set up, looking at different problems.

I chaired one of the working groups, on economic development and social good, which was all about impact and how you scale impact, right? And we had around 50 countries participating. Now, one of the things that came out of that working group, and it is one of the outcomes of this summit, is the creation of the AI Impact Commons globally. It's online; you can look it up at aiimpactcommons.global. It has impact stories from more than 30 countries, and counting; it's growing every day, with learnings on what kinds of problems can be solved and how you scale them. And the reason I said it's a luxury of developed countries is that when you look at those impact stories, most of them are from developing countries, and you'll be amazed at the kinds of problems they're solving: from malnutrition to farmer suicides, you know, how do you lower farmer suicides by using technology to improve yield.

Or how do you ensure that farmers don't suffer from climate change and shocks. I mean, the problems are so inspiring. So I think it wouldn't be fair to say that we don't know what problems we are solving today, and I will absolutely stand by that. And I'll go back to what Paula said. I'm not sure industry today is really putting the human at the center of the loop, but I think they need to. They absolutely need to. Because as we develop technology, it seems like the end goal of AI right now, the one all the big companies are talking about, is AGI. And when you look at what AGI means, it's about control.

Why do we want to build something to control everyone? Why don't we want to build something that is going to augment lives? If we could change that narrative, then I would say yes, humans are at the center. Right now, I think we still have a lot of work to do to bring humans back to the center of the loop. And it's something we have to realize, and industry has to realize: that is the only way you can build sustainable businesses, and that's how you build your staying power. So it's going to be very important to do.

Maria Grazia

Absolutely. And it's about having these different entities around the table, but also having different governments, in this multilateral setting, talk to each other about regulation, or more generally about policy, because at the end of the day we talk a lot about regulation, but regulations are only part of the policy framework one could put in place. So let's go to Brando, because I could see he was calling me with his eyes while we were talking, and I'm sure he wants to add something on the multilateral setting. Please, over to you, Brando. Perhaps you were not calling me, but you've been called on nevertheless.

Brando Benifei

Well, I think it's very important that we use occasions like this summit to advance a global cooperation framework. And for sure it's also part of the mission of UNESCO to unite different cultures and approaches to what we are talking about; you explained earlier the longstanding work of the organization. But I think we need to face the reality that there are issues where global cooperation will be crucial, and that it is still not sufficient. Think of the military use of AI, or the existential risks of losing control of very powerful AI models. This is part of a controversial debate, we would say, but I wouldn't dismiss the renowned scientists who maintain that we are in a context where the lack of globally adopted rules is putting us in very significant danger.

And this is also part of the idea of balancing innovation and ethics, because for sure we need domestic rules to foster the best opportunities out of the various use cases of AI. In these days I met many companies working on very practical, extremely useful AI use cases to improve our lives, to advance societal good. But this cannot be left just to the judgment of private-sector companies, which have a specific objective: profit for their owners or shareholders. Societal good is not their objective; they might want to add it on top, but that's natural. So we need to have frameworks in place on how AI impacts our daily lives, and we need to build common standards: the more broadly adopted the standards we have globally, the better the results we will reach. But we also need a step further, which is global cooperation on those issues where we cannot do very much domestically, because they are global issues. And I think that, with increased geopolitical tension, the use of AI for peace will soon be quite an important topic on which the international community has to find a way to take quick steps forward. I hope that our leaders will deal with that.

Maria Grazia

I couldn't agree more with the need to coordinate and to have an approach that is global. And allow me the prerogative of the moderator to call on my ADG, Tawfik; I will take the consequences of that. What I would like to ask you is what it means to have people at the center. And let's remember that, in your case, given the work you lead in the Communication and Information sector, there is the role of information; Virginia was hinting at that before, in terms of awareness. Could you please share a bit of those insights?

Dr. Tawfik Jelassi

Thank you, Maria Grazia. Let me pick up where Brando left off: he said AI for peace. Maybe some in the room know why UNESCO was created back in 1945, 80 years ago almost to the day. The mission of UNESCO was, and has been, to build peace in the minds of men and women. How? Through education, culture, the sciences, and communication and information. Everything happens in the mindset of people. Today, of course, we want AI to be a force for good, but it could also be a force for hazard, for harm, for risk. I tend to say technology is neutral; it depends what humans make of it. It could be a force for good, or it could be a force for, as you mentioned, wars or unwanted things. So yes, humanity in the loop, that's fundamental. I always ask myself, and I tell my team at UNESCO: if whatever we do in the field transforms lives, then we are spot on. If you can make the beneficiaries of our educational programs more successful through what you offer them, then that's impact.

Where is the impact? AI can transform lives, yes, and you mentioned some examples: it can help cure cancer, as you said, provide food for people in need, and so on and so forth. We want that type of AI. And AI does not only stand for artificial intelligence; AI stands for all-inclusive. That's AI as well. So if you take that perspective, if you really put humanity in the loop, at the center, not only in the loop but at the center... And allow me one minute to share something with you. I have been at UNESCO for five years, and my most memorable day happened last week, in a tiny village in remote southern Africa. A village in which people had no access to radio, no TV, no mobile telephony, no internet, nothing.

They always felt they were second-class citizens in their country. Imagine that you don't have access to information: you don't know what's happening around you, you cannot call your relatives living in other cities. This was the case for 15 small communities. What UNESCO did was first provide community radios and set up a tower with transmission equipment, so that through the radio people have information and know what's happening. And when we did that, telecom operators came in to plug in their equipment and provide mobile telephony, and then came internet connectivity. Then UNESCO put in place early warning systems, because these areas are very prone to flooding, and whenever that happened it wiped out the cattle, the livelihood of the people, and so on. That is transforming people's lives, and AI can contribute to that in a huge way. I think if we put that at the center, then of course it has to be ethical, human-centered, accountable, transparent, all the principles we talked about. And then comes the issue of...

advocacy and capacity development, because more informed policymakers will go this route. But if we don't raise awareness, if we don't do the advocacy, the capacity building and the training, then of course we will see some companies or some people going for the buck, for the profit from this technology, not for the social benefit, not for transforming lives.

Maria Grazia

Thank you very much. Paula, over to you, because the company gets the last word: how do you see this matter of including the other stakeholders in what you do, and how can that transform and help you deliver on the AI the company provides?

Paula Goldman

Well, thank you for saying that. I actually think it becomes more and more obvious that that's the only way to scale the technology. Just think about it: if you're developing a technology that's meant to serve many different markets and many different populations, you need to know it works for everyone. For example, we have a voice capability in our AI agent. We need to know that that voice capability, even if we're just talking about English, forget about other languages for a second, works across different vernaculars of English, different accents, etc. I work a lot on product accessibility, right?

It needs to understand a deaf accent, for example. And so the most inclusively designed technology is going to be the most successful one; it's going to increase accuracy rates and so on. To that end, it's also a very, very exciting time to be able to use AI for inclusion. I mentioned product accessibility: one of the things that is most hopeful and most exciting to me about this moment is that we're starting to see AI agents that correct, in real time, code that is not accessible. My team is working on this at Salesforce. Or a browser extension that corrects things in real time, so that if you're on your phone and a common problem comes up, say you're trying to zoom in or out and the page breaks, it will fix it on the spot. This kind of technology is the difference between someone being able to use software to actually get their job done and someone being excluded from getting their job done. So again, the point I'm trying to make is that the most inclusively designed technology is going to be the most commercially successful, and that this is an incredibly exciting time to be doing this work.

Maria Grazia

I'm really happy to hear from the voice of industry that those who include are not doing a favor to those who get included; rather, the systems themselves get superior. And that counters a common legend out there that says: no, you know, it's costly, and perhaps the profit is not there. What we are hearing from the companies themselves is: no, because it's a superior product, a better product, it performs better. Last but not least, back to our Virginia. Here I would like to hear from you about the role of a specific component of human capital, that is, skills.

And we have heard throughout this week about the importance of upskilling and reskilling. But is that really the solution?

Virginia Dignam

Thank you very much. Firstly, going back: if I gave the impression that hammers are not useful, that is not the case; there are many useful hammers. My point is more that we need a toolbox, we don't need only hammers, and even outside of the Western world we are too focused on hammers. Now, the skills: yes, we really need to focus on skills. We need to focus on our own capabilities, on our lived experience, and so on. Someone talked about AGI, and indeed at this moment the AGI concept is about power; it is about providing power to those companies that claim they will build it. How are they building it? With what I call the Play-Doh approach: they are putting together all the data of the world, with all the capacities of the world, creating a huge ball of Play-Doh. Anyone who has played with Play-Doh knows that after you play, there is no color, there is no shape, there is nothing anymore.

It's just a thing. And then, of course, that thing might do something, but no one knows what's inside, what came in, what came out, and so on. We need to go much broader in understanding what this AGI is. What AGI fundamentally means is a system that is more intelligent than us, that can solve problems we cannot. We already have AGI; we always had it. It's called collective intelligence. The moment we work together, we can do more than each one of us alone. So let us use the AI technology we are developing to support this collaboration, to develop the different skills, to integrate all our differences, our different experiences, capabilities and abilities, and the different tools that we have developed.

Then we get a much broader bouquet: no longer a colorless ball of Play-Doh, but a huge bouquet of flowers of all those colors. So we cannot let the big companies run away with the concept of AGI, with the idea that they are going to create a god that will solve our problems. AGI is about us. It's about putting all of us together, because our collective intelligence is really what, at the end of the day, is going to solve, or support us in solving, our problems. And just one more thing, which I think is also part of the skills: technology, and there I disagree with you, is not neutral.

All technology embeds and encompasses our choices, our options, our data; all of that is part of it. We have to understand technology as a non-neutral artifact, take those capabilities, and embrace the different perspectives and the different colors of this. But again, all together: the only way forward is not giving up and hoping that AI is going to solve whatever complex problems we have. It is really embracing and reinforcing collective intelligence. That is AGI.

Maria Grazia

Excellent: collective intelligence. Now we are going to have a collective set of questions, just a couple, because time doesn't allow for more. So please, when you intervene, be absolutely brief: say your name, say whom you want to ask the question to, and then the question, without recounting the history of humankind first. I spotted some hands: there was a lady on this side, but now I think she got shy and put her hand down. So let's start with that gentleman. No, it's the gentleman behind you, I'm sorry, there. I can do everything, from moderating to handing you the microphone; we are proactive and problem-solving. Let's go. Your name is?

Audience

Hello, everyone. I am Rajan, from Business Club TV, and I am the CEO and founder of a startup. I have a very basic question for Professor Virginia Dignam. Professor, my question for you is: what is AI policy?

Virginia Dignam

Wow, okay, how many hours do we have? Very briefly: AI policy is about the tools, the capabilities, the skills, the information and the knowledge for understanding and addressing the impact of AI. Not the technology itself, not the designing of the technology, but really addressing the impact of this technology across the whole development loop: from the beginning, asking ourselves why we are using AI and whether this is the right problem to apply it to, to the way we are developing it, to the way we are evaluating it, and addressing its impact.

Maria Grazia

No, I'm sorry, we have to move on; let's be inclusive and allow others to speak as well. Please, that lady, yes, exactly, the one with her hand raised, just down here, three rows ahead. I'm going to be gender-equal, so we alternate one-on-one. I'm not going to let only the men speak; you are typically the fastest to raise your hands, but we women are sharper. Go ahead.

Rita Soni

I love that. Thank you for that. Hi, my name is Rita Soni. I don't know who should answer this question, but at the beginning of this panel I heard someone say that those who are developing and designing AI have probably never experienced a power cut or potholes in the road. I thought there would be more discussion about who is actually involved as the humans in the loop. Debjani, you know me. So I have to ask this question about the people who are actually developing it, and whether we're thinking about responsibly employing them. Right now, we know that there are on the order of half a million people in the world whom we consider impact workers. They've typically been excluded, but now they are included. So how do we support, as a movement, getting those who have experienced power cuts to help design and develop it? This is a development-related question.

Maria Grazia

Who wants to tackle it? Because we are over time. That's the last question, and then we will have to say thank you and continue the conversation in parallel.

Debjani Ghosh

Yeah, fully. I mean, if you're talking about whether developers have suffered power cuts while developing the technology: anyone working out of Bangalore or any Indian city, yes, they have. They've definitely suffered during development. Now, Rita, I think the point you were making is: how do we make it more inclusive? How do we bring people in? And that goes back to the perennial question of how you ensure that you democratize not just access to technology, but also the design and creation of the technology, right? And it's not just gender; it's also how you diffuse it down to smaller cities, to the people who are actually facing the problems there.

And I think at least in India we are doing that, through initiatives like Startup India, which today focus on building capabilities in Tier 2 and Tier 3 cities, not just as users, not just for adoption, but actually for design and development. So there's a lot of focus there, and I'm sure there are founders here who have come from the smallest of cities in India. And the best part is that when we track the numbers, the growth of startups and founders is higher in Tier 2, Tier 3 and Tier 4 cities than in Tier 1 cities. So that tells us we're doing something right.

Maria Grazia

I hope you have enjoyed this at least half as much as I have enjoyed this panel. Please join me in thanking the panelists. And we're going to do a group photo, so please stand up; we're going to take a selfie with all of you in the back. Come here, stand like this, so we're all together. This is our collective intelligence. Thank you. Thank you very much.

Related Resources: Knowledge base sources related to the discussion topics (38)
Factual Notes: Claims verified against the Diplo knowledge base (6)
Confirmed (high)

“Dr Tawfik Jelassi is the Assistant Director‑General for Communication and Information at UNESCO”

The knowledge base lists Tawfik Jelassi as UNESCO’s Assistant Director General for Communication and Information [S119].

Confirmed (high)

“Dr Maria Grazia is from UNESCO’s Social and Human Sciences sector”

UNESCO’s records identify Dr Mariagrazia Squicciarini (also referred to as Dr Maria Grazia) as the CEO of the Social and Human Sciences sector [S22].

Confirmed (high)

“The 2021 UNESCO Recommendation on the Ethics of Artificial Intelligence was adopted by 193 member states”

The recommendation was adopted two years ago by 193 UNESCO member states, demonstrating its worldwide acceptance [S26] and is described as a global normative foundation [S126].

Additional Context (medium)

“UNESCO aims to promote ethical, human‑centred AI while supporting innovation, especially in the Global South”

UNESCO’s three-pronged approach – fostering AI opportunities, mitigating risks, and addressing harms – reflects this dual focus on ethical, human-centred AI and innovation, with particular attention to the Global South [S84].

Additional Context (medium)

“Regulation does not necessarily hinder innovation; efficient ethical regulation can guide innovation toward benefiting humanity”

UNESCO emphasizes that innovation and regulation are not contradictory and that well-designed ethical regulations should steer innovation positively [S46].

Additional Context (low)

“The UNESCO Recommendation is built on three non‑negotiable pillars: human rights, human dignity and fundamental freedoms”

The recommendation’s principles are rooted in human rights and also highlight inclusivity, sustainability, transparency and explainability, providing a broader set of values beyond the three pillars mentioned [S127].

External Sources (127)
S1
https://dig.watch/event/india-ai-impact-summit-2026/ethical-ai_-keeping-humanity-in-the-loop-while-innovating — So it gives me great pleasure to just present today’s panellists and moderator. We’ve had Dr. Tawfiq Jilasi, who’s Assis…
S2
Ethical AI_ Keeping Humanity in the Loop While Innovating — -Dr. Tawfiq Jilasi- Assistant Director General for Communication and Information (mentioned by Tim Curtis in introductio…
S3
AI That Empowers Safety Growth and Social Inclusion in Action — – Ankit Bose- Tim Curtis- Rein Tammsaar
S4
Ethical AI_ Keeping Humanity in the Loop While Innovating — -Debjani Ghosh- Distinguished Fellow at NITI Aayog, former role with NASCOM
S5
Panel Discussion: 01 — -Debjani Ghosh- Distinguished Fellow, Niti Aayog (role: moderating the ministerial conversation)
S6
WSIS+20 High-Level Event 2025 Inaugural Session: Celebrating Two Decades and Achieving Future Milestones Together — ### UNESCO Assistant Director-General Tawfik Jelassi – **Tawfik Jelassi** – Role/Title: Assistant Director General for …
S7
Day 0 Event #119 Roam X Driving WSIS Implementation and Digital Cooperation — – **Tawfik Jelassi** – Assistant Director General of UNESCO for Communication and Information, delivered keynote remarks…
S8
DC-OER The Transformative Role of OER in Digital Inclusion | IGF 2023 — Dr. Tawfik Jelassi, Assistant Director-General for Communication and Information Sector, UNESCO
S10
Ethical AI_ Keeping Humanity in the Loop While Innovating — -Brado Benefai- (Appears to be the same person as Brando Benifei, mentioned in introduction) -Brando Benifei- Member of…
S11
Open Forum #72 European Parliament Delegation to the IGF & the Youth IGF — – Brando Benifei: Member of European Parliament (mentioned but not in speakers list)
S12
Ethical AI_ Keeping Humanity in the Loop While Innovating — -Paula Goldman- Chief Ethical and Humane Use Officer at Salesforce
S13
How the EU’s GPAI Code Shapes Safe and Trustworthy AI Governance India AI Impact Summit 2026 — -Paula Goldman: Area of expertise, role, and title not mentioned in the transcript
S14
https://dig.watch/event/india-ai-impact-summit-2026/from-technical-safety-to-societal-impact-rethinking-ai-governanc — I think we can just continue the discussion and I hope we’ll do. This is today just a start. I also hope that we will be…
S15
Ethical AI_ Keeping Humanity in the Loop While Innovating — -Professor Virginia Dignam- (Same as Virginia Dignam, referenced with title)
S16
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — Tatjana Titareva: Thank you so much. Today’s session’s focus is to discuss the roadmap for AI Policy Lab that we have de…
S17
https://dig.watch/event/india-ai-impact-summit-2026/welfare-for-all-ensuring-equitable-ai-in-the-worlds-democracies — Thank you so much. My name is Rita Soni. I work with a company that’s operating in small -town India, delivering all the…
S19
Ethical AI_ Keeping Humanity in the Loop While Innovating — I love that. Thank you for that. Hi, my name is Rita Soni. I don’t know who should answer this question, but at the begi…
S20
Ethical AI_ Keeping Humanity in the Loop While Innovating — -Maria Grazia- Chief of the Executive Office of UNESCO’s Social and Human Sciences sector, moderator, microeconomist spe…
S21
https://dig.watch/event/india-ai-impact-summit-2026/ethical-ai_-keeping-humanity-in-the-loop-while-innovating — Thank you for coming. And finally, a great pleasure to welcome Brado Benefai, a member of the European Parliament who wi…
S22
Open Forum #82 Catalyzing Equitable AI Impact the Role of International Cooperation — – **Dr. Maria Grazia Grani** – CEO from the Social and Human Sciences Sector UNESCO (mentioned in introduction but appea…
S23
WS #280 the DNS Trust Horizon Safeguarding Digital Identity — – **Audience** – Individual from Senegal named Yuv (role/title not specified)
S24
Building the Workforce_ AI for Viksit Bharat 2047 — -Audience- Role/Title: Professor Charu from Indian Institute of Public Administration (one identified audience member), …
S25
Nri Collaborative Session Navigating Global Cyber Threats Via Local Practices — – **Audience** – Dr. Nazar (specific role/title not clearly mentioned)
S26
Searching for Standards: The Global Competition to Govern AI | IGF 2023 — The UNESCO recommendation on AI ethics has become a critical guide for global AI governance. It was adopted two years ag…
S27
IGF 2023 WS #313 Generative AI systems facing UNESCO AI Ethics Recommendation — Additionally, IFAP focuses on building capacities to address the ethical concerns arising from the use of frontier techn…
S28
Shaping an inclusive global action to anticipate quantum technologies — Such international cooperation is crucial to bridge the digital divide, enabling holistic participation in developing gl…
S29
Diplomacy amid Disorder / DAVOS 2025 — Need for collaboration between global north and south
S30
“Re” Generative AI: Using Artificial and Human Intelligence in tandem for innovation — Audience: I am dealing. I’m a professor of ethics. And I’m dealing with AI and ethics in some years. And I’m struggling a…
S31
DC-Inclusion &amp; DC-PAL: Transformative digital inclusion: Building a gender-responsive and inclusive framework for the underserved — – Tawfik Jelassi: Assistant Director General of Communication and Information Sector of UNESCO Tawfik Jelassi: Good mo…
S32
Day 0 Event #252 Editorial Media and Big Tech Dependency the Material Conditions for a Free and Resilient NeWS Media — Chris Disspain warns against using the term ‘regulation’ because it can be misinterpreted by authoritarian governments, …
S33
Ministerial Roundtable — – **Tawfik Jelassi** – ADGE of UNESCO (Assistant Director-General for Education) Ms. Doreen Bogdan-Martin, Mr. Tomas La…
S34
Ethics and AI | Part 5 — The principles stipulated by the Convention do not come with anything that would deal with issues which we have identifi…
S35
Ethics and AI | Part 6 — Even if the Act itself does not make direct reference to “ethics”, it is closely tied to the broader context of ethical …
S36
Transcript from the hearing — Now, regulation is often said to stifle innovation. But there is no real trade off between safety and innovation. An AI …
S37
WSIS Action Line C10: Ethics in AI: Shaping a Human-Centred Future in the Digital Age — – **Ethics as foundational rather than an afterthought**: The panelists emphasized that ethics should be embedded from t…
S38
WS #294 AI Sandboxes Responsible Innovation in Developing Countries — When sandboxing AI solutions, it’s important to consider that individuals will be affected regardless of whether their p…
S39
The digital economy in the age of AI: Implications for developing countries (UNCTAD) — Artificial Intelligence (AI) has the potential to revolutionise industries, enhance efficiency, and support innovation a…
S40
Impact of the Rise of Generative AI on Developing Countries | IGF 2023 Town Hall #29 — Another aspect discussed is the need to redefine the term ‘developing countries.’ This argument emphasises the existence…
S41
AI for Good Technology That Empowers People — The discussion revealed that edge AI is not merely a fallback solution for areas with poor connectivity, but rather enab…
S42
Scaling Innovation Building a Robust AI Startup Ecosystem — This comment is insightful because it explicitly addresses the geographic democratization of innovation in India, acknow…
S43
GermanAsian AI Partnerships Driving Talent Innovation the Future — The focus on tier-2 and tier-3 cities in India exemplifies this inclusive approach, supported by Dr. Azariah’s evidence …
S44
The Innovation Beneath AI: The US-India Partnership powering the AI Era — A significant announcement was Google’s Climate Technology Center, developed in partnership with the Office of Principal…
S45
Human Rights-Centered Global Governance of Quantum Technologies: Implications for AI, Digital Rights, and the Digital Divide — UNESCO Director Guilherme Canela emphasized that innovation and human rights protection are not opposing forces but comp…
S46
La découvrabilité des contenus numérique: un facteur de diversité culturelle et de développement (Délégation Wallonie-Bruxelles, Belgian Mission to the UN in Geneva) — The analysis also highlighted the importance of implementing ethical principles and existing consensuses on a global sca…
S47
Tokenisation and the Future of Global Finance: A World Economic Forum 2026 Panel Discussion — Legal and regulatory | Economic References to financial crises being born from misleading or dangerous financial innovat…
S48
Large Language Models on the Web: Anticipating the challenge | IGF 2023 WS #217 — In conclusion, generative AI technology has the potential for positive impacts in multiple industries. It enhances commu…
S49
WS #162 Overregulation: Balance Policy and Innovation in Technology — Balancing regulation and innovation Paola Galvez argues that regulation is needed, but the focus should be on how to re…
S50
The Future of AI in the Judiciary: Launch of the UNESCO Guidelines for the use of AI Systems in the Judiciary — Ms. Amanda Leal: And I think to contextualize, I wanted to bring two points. One about the governance throughout the AI s…
S52
The Foundation of AI Democratizing Compute Data Infrastructure — The emphasis on community participation, data sovereignty, and alternative technical architectures suggests AI developme…
S53
Driving Indias AI Future Growth Innovation and Impact — These key comments fundamentally shaped the discussion by expanding it beyond technical infrastructure to encompass trus…
S54
Shaping AI’s Story Trust Responsibility &amp; Real-World Outcomes — High level of consensus with strong alignment on fundamental principles and practical approaches. This suggests the AI g…
S55
Towards a Safer South Launching the Global South AI Safety Research Network — The tone was collaborative and urgent throughout, with speakers expressing both excitement about the network’s potential…
S56
WS #438 Digital Dilemmaai Ethical Foresight Vs Regulatory Roulette — – **Implementation focus**: Early-stage development influence versus enforcement cooperation – **Regulatory mechanisms*…
S57
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — I often use cars as an example. I know it’s a bit silly, but I also like to use the child seat, you know, because I …
S58
Ethical AI_ Keeping Humanity in the Loop While Innovating — Treating regulation and ethics as experimental artifacts that can be tested, evaluated, and refined rather than fixed pr…
S59
Why science matters in global AI governance — The panel discussion explored practical challenges in the science-policy interface, with experts from India, France, WHO…
S60
WS #100 Integrating the Global South in Global AI Governance — Key issues highlighted included the technology gap between developed and developing nations, regulatory uncertainty in m…
S61
How to make AI governance fit for purpose? — Given that AI technologies are inherently global, effective governance requires international engagement and cooperation…
S62
Laying the foundations for AI governance — International Cooperation and Standards Need for international cooperation despite geopolitical challenges
S63
Trade regulations in the digital environment: Is there a gender component? (UNCTAD) — In conclusion, the analysis reinforces the potential of digitalisation and emerging technologies, such as artificial int…
S64
Robotics and the Medical Internet of Things /MIoT — In summary, the analysis highlights the importance of inclusive technology design and ensuring that technological advanc…
S65
WS #45 Fostering EthicsByDesign w DataGovernance &amp; Multistakeholder — Rosanna Fanni: Thank you. Thank you very much. And also thanks for all my fellow panelists. I think a lot of things …
S66
WSIS Action Line C10: Ethics in AI: Shaping a Human-Centred Future in the Digital Age — – **Challenges of implementing ethics in commercial environments**: Multiple speakers highlighted the tension between go…
S67
Ethics and AI | Part 3 — In November 2021, UNESCO adopted theRecommendation on the Ethics of Artificial Intelligence, marking its first global st…
S68
The fading of human agency in automated systems — In many settings, humans retain formal accountability while losing meaningful influence over outcomes. When a decision i…
S69
ICT vulnerabilities: Who is responsible for minimising risks? | Introduction — Human intervention is necessary; the problem can’t be completely solved by technology alone Responsibility also lies wi…
S70
AI in Action: When technology serves humanity — Across these domains (conservation, disaster response, language preservation, small business, and agriculture), technolo…
S71
Driving Indias AI Future Growth Innovation and Impact — Less regulation preferred to avoid curtailing innovation Rajgopal advocates for minimal regulation to avoid stifling in…
S72
Tackling disinformation in electoral context — While some regulation is necessary, over-regulation should be avoided as it could stifle innovation and growth in the di…
S73
New Technologies and the Impact on Human Rights — Balanced regulatory approach Regulation should be proportionate and risk-based, focused on actual likely harms rather t…
S74
Responsible AI in India Leadership Ethics &amp; Global Impact part1_2 — The discussion highlighted the importance of collaborative regulation development, where industry expertise informs regu…
S75
The open-source gambit: How America plans to outpace AI rivals by democratising tech — A “worker-first AI agenda” is the key social pillar of the Plan. The focus is on helping workers reskill and build capac…
S76
WS #283 AI Agents: Ensuring Responsible Deployment — Capacity development | Online education Government Perspectives and Regulatory Approaches Need for enhanced education …
S77
Open Forum #17 AI Regulation Insights From Parliaments — Capacity building and education are essential for all stakeholders Development | Capacity development
S78
Ethics and AI | Part 5 — The principles stipulated by the Convention do not come with anything that would deal with issues which we have identifi…
S79
Ethics and AI | Part 2 — 4.An ethic is framework, or guiding principle, and it’s often moral. […]  A social ethic might include “treating people …
S80
Main Session | Policy Network on Artificial Intelligence — Yves Iradukunda: Thank you, and good afternoon. It’s great to be here in this critical conversation, and thanks to t…
S81
Ethical AI_ Keeping Humanity in the Loop While Innovating — And then at times we also hear this, that having constraints or having frameworks will actually hinder these dynamics. A…
S82
Human Rights-Centered Global Governance of Quantum Technologies: Implications for AI, Digital Rights, and the Digital Divide — UNESCO Director Guilherme Canela emphasized that innovation and human rights protection are not opposing forces but comp…
S83
La découvrabilité des contenus numérique: un facteur de diversité culturelle et de développement (Délégation Wallonie-Bruxelles, Belgian Mission to the UN in Geneva) — In conclusion, the analysis provided insight into various arguments and concerns surrounding AI, internet languages, and…
S84
WS #110 AI Innovation Responsible Development Ethical Imperatives — UNESCO’s representative, Guilherme Canela de Souza Godoy, stressed that innovation and human rights protection should no…
S85
Large Language Models on the Web: Anticipating the challenge | IGF 2023 WS #217 — In conclusion, generative AI technology has the potential for positive impacts in multiple industries. It enhances commu…
S86
Secure Finance Risk-Based AI Policy for the Banking Sector — The discussion revealed several unresolved tensions, particularly the fundamental disagreement between risk-based and em…
S87
Tokenisation and the Future of Global Finance: A World Economic Forum 2026 Panel Discussion — Legal and regulatory | Economic References to financial crises being born from misleading or dangerous financial innovat…
S88
WS #162 Overregulation: Balance Policy and Innovation in Technology — Balancing regulation and innovation Paola Galvez argues that regulation is needed, but the focus should be on how to re…
S89
WSIS Action Line C10: Ethics in AI: Shaping a Human-Centred Future in the Digital Age — – **Ethics as foundational rather than an afterthought**: The panelists emphasized that ethics should be embedded from t…
S90
WS #438 Digital Dilemmaai Ethical Foresight Vs Regulatory Roulette — Alexandra Krastins Lopes: Great, thanks. It’s an honor to contribute to this important discussion. And while I have a pr…
S91
“Re” Generative AI: Using Artificial and Human Intelligence in tandem for innovation — Moira de Roche: Yes, that’s why I said, Don, we’ve always looked at everything through an ethical lens and we believe tha…
S92
Comprehensive Report: China’s AI Plus Economy Initiative – A Strategic Discussion on Artificial Intelligence Development and Implementation — This comment expanded the education discussion beyond formal systems to include organic, curiosity-driven learning. It r…
S93
Building Population-Scale Digital Public Infrastructure for AI — “Thank you so much, Mr. Nandan.”[4]. “We’ll start by taking a quick group photograph together and then begin the discuss…
S94
Harnessing Collective AI for India’s Social and Economic Development — Moderator: sci-fi movies that we grew up watching and what it primarily also reminds me of is in speci…
S96
Shaping AI’s Story Trust Responsibility &amp; Real-World Outcomes — High level of consensus with strong alignment on fundamental principles and practical approaches. This suggests the AI g…
S97
AI That Empowers Safety Growth and Social Inclusion in Action — Second, they want to close capacity gaps. Many developing countries need infrastructure, skills, and compute to particip…
S98
GPAI: A Multistakeholder Initiative on Trustworthy AI | IGF 2023 Open Forum #111 — Abhishek Singh: So the details of side events will be up on the website very soon, hopefully by next week or so. And we w…
S99
Building Inclusive Societies with AI — And in fact, the platform that the committee recommended in some sense was to also help to Uberize, to create demand, to…
S101
Opening — The overall tone was formal yet optimistic. Speakers acknowledged the serious challenges posed by rapid technological ch…
S102
WAIGF Opening Ceremony &amp; Keynote — The overall tone was formal yet optimistic. Speakers expressed enthusiasm about the potential of digital technologies wh…
S103
Collaborative Innovation Ecosystem and Digital Transformation: Accelerating the Achievement of Global Sustainable Development Goals (SDGs) — The discussion maintained a consistently professional, collaborative, and optimistic tone throughout. Speakers demonstra…
S104
Opening address of the co-chairs of the AI Governance Dialogue — The tone is consistently formal, diplomatic, and optimistic throughout. It maintains a ceremonial quality appropriate fo…
S105
Strengthening Corporate Accountability on Inclusive, Trustworthy, and Rights-based Approach to Ethical Digital Transformation — The discussion maintained a professional, collaborative tone throughout, with speakers demonstrating expertise while ack…
S106
WS #31 Cybersecurity in AI: balancing innovation and risks — Melodena Stephens: So this is a tough one, right? Because when I look at ethics, I think ethics are great. The line b…
S107
(Day 1) General Debate – General Assembly, 79th session: afternoon session — The level of disagreement among speakers is moderate. While there is general consensus on the need to address global cha…
S108
https://dig.watch/event/india-ai-impact-summit-2026/ethical-ai_-keeping-humanity-in-the-loop-while-innovating — Thank you. Thank you, Deb. Okay. Thank you. Thank you for having me here. So, first of all, I’ll just go back to the top…
S109
Smart Regulation Rightsizing Governance for the AI Revolution — The discussion began with a notably realistic and somewhat pessimistic assessment of global cooperation challenges, but …
S110
Shaping the Future AI Strategies for Jobs and Economic Development — The discussion maintained an optimistic yet pragmatic tone throughout. While acknowledging significant challenges around…
S111
WS #462 Bridging the Compute Divide a Global Alliance for AI — The discussion maintained a constructive and collaborative tone throughout, with participants building on each other’s i…
S112
Panel 5 – Ensuring Digital Resilience: Linking Submarine Cables to Broader Resilience Goals — This comment emphasizes the critical importance of collaboration while also pushing for concrete actions rather than jus…
S113
AI Meets Agriculture Building Food Security and Climate Resilien — The discussion maintained an optimistic and collaborative tone throughout, characterized by visionary leadership and pra…
S114
Bridging the AI innovation gap — The tone is consistently inspirational and collaborative throughout. The speaker maintains an optimistic, forward-lookin…
S115
Ensuring Safe AI_ Monitoring Agents to Bridge the Global Assurance Gap — The tone was collaborative and solution-oriented throughout, with participants acknowledging both the urgency and comple…
S116
Welfare for All Ensuring Equitable AI in the Worlds Democracies — The conversation maintained an optimistic and collaborative tone throughout, with participants sharing practical solutio…
S117
Democratizing AI: Open foundations and shared resources for global impact — The tone was consistently collaborative, optimistic, and forward-looking throughout the discussion. Speakers maintained …
S118
How Multilingual AI Bridges the Gap to Inclusive Access — The tone was consistently collaborative, optimistic, and mission-driven throughout the conversation. Speakers demonstrat…
S119
WSIS Action Line C7 E-learning — – **Tawfik Jelassi** – Assistant Director General for Communication and Information at UNESCO Tawfik Jelassi, UNESCO’s …
S120
Leaders TalkX: Gateway to Knowledge: Empowering Global Access Through Digital — Lori Schulman warmly initiated the Leader Talks panel with a welcome, thanking the audience for their patience and pledg…
S121
WSIS Action Line C7: e-Learning: Empowering Educators and learners: Enhancing Teacher Training and e-Learning for Digital Inclusion — At the WSIS Plus 20 event, a session chaired by Zeynep Varoglu focused on Action Line 7, which addresses the empowerment…
S122
WSIS Action Line C10: The Future of the Ethical Dimensions of the Information Society — Dr. Mariagrazia Squicciarini: Absolutely. We are for plural inclusivity. The last word to Ashu. Dr. Mariagrazia Squiccia…
S123
Main Topic 3 – Innovation and ethical implication  — Good morning. Vanya Skoric, serving as the Programme Director at the European Center for Not-for-Profit Law, spotlights …
S124
Technology Rewiring Global Finance: A Panel Discussion Summary — Koffey emphasized that regulation must be a force for economic growth and innovation, breeding adoption and trust throug…
S125
Building fair markets in the algorithmic age (The Dialogue) — The current system struggles with jurisdictional and sovereignty issues, as companies are often not based in the territo…
S126
AI diplomacy — We are, in essence, searching for a common language to discuss AI ethics, safety, and security. We can see the early res…
S127
Evolving AI, evolving governance: from principles to action | IGF 2023 WS #196 — The recommendation includes values of human rights, inclusivity, and sustainability, serving as a guide for developers a…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Tim Curtis
2 arguments, 73 words per minute, 339 words, 276 seconds
Argument 1
UNESCO’s stance: ethical frameworks do not hinder innovation; they guide AI to serve humanity
EXPLANATION
Tim emphasizes that UNESCO believes ethical AI does not impede technological progress but rather ensures AI serves humanity, especially in the Global South. He frames ethics as central to responsible AI deployment while still encouraging innovation.
EVIDENCE
Tim states that UNESCO’s belief is that ethical AI deployment should not hinder innovation and that AI can offer many benefits to humanity, particularly in the Global South [2]. He also references the 2021 UNESCO recommendation on AI ethics, showing that member states have been discussing these issues since 2019 to put technology at the service of humanity [21].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
UNESCO explicitly states that ethical frameworks are not a barrier to innovation but a guide for AI to serve humanity, countering the notion that constraints hinder progress [S2]; this view is reinforced by analyses that argue safety and innovation are not trade-offs [S36].
MAJOR DISCUSSION POINT
Innovation, Ethics, and Regulation are Not Mutually Exclusive
AGREED WITH
Dr. Tawfik Jelassi, Debjani Ghosh, Brando Benifei, Maria Grazia
Argument 2
UNESCO’s partnership with India exemplifies the need for South‑South collaboration and inclusive global standards
EXPLANATION
Tim highlights the collaboration with the Government of India as an example of UNESCO’s commitment to inclusive, South‑South cooperation in AI ethics. This partnership showcases how global standards can be shaped through joint efforts with developing countries.
EVIDENCE
Tim thanks the Government of India for its collaboration on the session and notes UNESCO’s engagement with the Global South in AI ethics and innovation [2].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
UNESCO’s engagement with the Global South, including India, is highlighted as a model for inclusive standards and South-South cooperation [S28]; recent US-India AI collaborations further illustrate the value of such partnerships [S44].
MAJOR DISCUSSION POINT
Global Cooperation and Multilateral Governance
AGREED WITH
Brando Benifei, Virginia Dignam, Debjani Ghosh, Maria Grazia
Dr. Tawfik Jelassi
4 arguments, 156 words per minute, 961 words, 369 seconds
Argument 1
Innovation and ethics reinforce each other; ethical design leads to more trusted, widely adopted AI
EXPLANATION
Tawfik argues that integrating ethical reflection early in AI design makes systems more trustworthy and thus more widely adopted, showing that ethics and innovation are complementary rather than contradictory.
EVIDENCE
He explains that ethical design makes AI systems more respected, trustworthy and widely used, and that AI should be ethical by design, not after-the-fact, citing the UNESCO recommendation as the global framework for this approach [38-40].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Panel discussions stress that ethics should be foundational, not an afterthought, and that trustworthy AI drives adoption [S37]; UNESCO’s stance that ethical AI does not impede innovation supports this view [S2].
MAJOR DISCUSSION POINT
Innovation, Ethics, and Regulation are Not Mutually Exclusive
AGREED WITH
Tim Curtis, Debjani Ghosh, Brando Benifei, Maria Grazia
Argument 2
UNESCO Recommendation provides a global set of principles that need concrete translation into actions
EXPLANATION
Tawfik points out that the UNESCO recommendation outlines high‑level principles, but these must be operationalised into concrete actions on the ground to be effective.
EVIDENCE
He notes that the UNESCO recommendation calls for human oversight, non-discrimination, cultural diversity and environmental sustainability, which need to be translated into practice [40-41].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The UNESCO recommendation, adopted by 193 member states, offers high-level principles that require operationalisation through concrete actions and capacity-building programmes [S26][S27][S37].
MAJOR DISCUSSION POINT
Operationalising AI Ethics: From Principles to Practice
AGREED WITH
Debjani Ghosh, Paula Goldman, Virginia Dignam, Maria Grazia
Argument 3
Human oversight, non‑discrimination, cultural respect, and environmental sustainability are core to ethical AI
EXPLANATION
Tawfik reiterates that ethical AI must be built on three pillars, respect for human rights, human dignity, and fundamental freedoms, which translate into human oversight, non-discrimination, cultural respect, and environmental sustainability.
EVIDENCE
He lists the core components of ethical AI as human oversight, non-discrimination, respect for cultural diversity and environmental sustainability as defined in the UNESCO recommendation [40-41].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The UNESCO recommendation explicitly lists human oversight, non-discrimination, cultural diversity and environmental sustainability as key pillars of ethical AI [S26]; related analyses discuss how these principles shape responsible AI governance [S34].
MAJOR DISCUSSION POINT
Human‑Centred AI and Accountability (“Humanity in the Loop”)
AGREED WITH
Debjani Ghosh, Tim Curtis
Argument 4
Capacity‑building, advocacy, and awareness‑raising are vital for policymakers to implement ethical AI
EXPLANATION
Tawfik shares a field example where UNESCO’s capacity‑building activities (community radios, early‑warning systems) transformed lives, illustrating the importance of advocacy and training for ethical AI adoption.
EVIDENCE
He describes a remote African village where UNESCO introduced community radios, later enabling mobile and internet connectivity, and early-warning systems for floods, showing how capacity-building leads to tangible impact [214-218] and underscores the need for advocacy and training [204-209].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
UNESCO’s IFAP programme and other capacity-building initiatives are cited as essential for translating ethical guidelines into practice and raising awareness among policymakers [S27][S31].
MAJOR DISCUSSION POINT
Global Cooperation and Multilateral Governance
AGREED WITH
Virginia Dignam, Paula Goldman, Debjani Ghosh, Maria Grazia
Debjani Ghosh
5 arguments, 164 words per minute, 1281 words, 466 seconds
Argument 1
No trade‑off between innovation and ethics; accountability must remain with humans, not delegated to technology
EXPLANATION
Debjani stresses that the real choice is how technology is used, not whether it is ethical, and that ultimate accountability lies with humans rather than being outsourced to algorithms.
EVIDENCE
She argues that the choice is between using technology for good (e.g., a cancer-free world) and using it for harmful purposes, and that accountability must stay with humans, not be delegated to technology, noting that we cannot align everyone on the same ethical values [49-56] and that accountability rests with people [57-60].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
UNESCO emphasizes that accountability rests with humans, not algorithms, and that safety and innovation are not mutually exclusive [S2][S36].
MAJOR DISCUSSION POINT
Innovation, Ethics, and Regulation are Not Mutually Exclusive
AGREED WITH
Tim Curtis, Dr. Tawfik Jelassi, Brando Benifei, Maria Grazia
Argument 2
Embed ethical oversight throughout the AI lifecycle – design, development, sandbox testing – rather than as an afterthought
EXPLANATION
Debjani calls for ethics to be built into every stage of AI development, from design to commercialization, with checkpoints and sandbox testing to ensure compliance before deployment.
EVIDENCE
She describes the need for oversight at each stage of design and development, with flag-offs and sandbox testing to make ethics by design rather than an afterthought [65-69].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
AI sandbox frameworks call for continuous human oversight at every stage of development, aligning with calls for ethics-by-design throughout the lifecycle [S38][S37][S2].
MAJOR DISCUSSION POINT
Operationalising AI Ethics: From Principles to Practice
AGREED WITH
Dr. Tawfik Jelassi, Paula Goldman, Virginia Dignam, Maria Grazia
Argument 3
AI Impact Commons showcases how developing countries solve local problems with AI, highlighting diverse use‑cases
EXPLANATION
Debjani presents the AI Impact Commons as a platform that collects impact stories from over 30 countries, demonstrating how AI is applied to address issues like malnutrition, farmer suicides, and climate resilience in developing contexts.
EVIDENCE
She mentions chairing a working group that produced the AI Impact Commons (aiimpactcommons.global) with stories from more than 30 countries solving problems from malnutrition to farmer suicides and climate shocks [166-171].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Studies on AI’s role in developing economies illustrate diverse, locally-driven use cases, mirroring the AI Impact Commons portfolio [S39][S40].
MAJOR DISCUSSION POINT
Inclusive and Diverse Perspectives in AI Development
AGREED WITH
Tim Curtis, Brando Benifei, Virginia Dignam, Maria Grazia
Argument 4
Building capabilities in Tier‑2/3 cities democratizes AI design and fuels inclusive innovation
EXPLANATION
Debjani explains that initiatives like Startup India are focusing on Tier‑2 and Tier‑3 cities to develop AI talent, leading to higher startup growth outside major metros and promoting inclusive design.
EVIDENCE
She notes that programs such as Startup India are building capabilities in Tier-2/3 cities, resulting in higher startup growth in those areas compared to Tier-1 cities [292-300].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Research on India’s AI ecosystem shows that tier-2 and tier-3 cities are emerging hubs of innovation, expanding talent pools beyond traditional metros [S42][S43].
MAJOR DISCUSSION POINT
Education, Skills, and Upskilling for Ethical AI
AGREED WITH
Dr. Tawfik Jelassi, Virginia Dignam, Paula Goldman, Maria Grazia
Argument 5
Accountability ultimately rests with people; ethics cannot be outsourced to algorithms
EXPLANATION
Debjani reiterates that accountability for AI outcomes must remain with human actors, as technology itself cannot be held responsible, emphasizing the need for human governance.
EVIDENCE
She states that accountability comes back to humans and that delegating accountability to technology is not feasible at present [55-60].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
UNESCO’s position stresses human accountability for AI outcomes, reinforcing that ethical responsibility cannot be delegated to technology [S2][S36].
MAJOR DISCUSSION POINT
Human‑Centred AI and Accountability (“Humanity in the Loop”)
AGREED WITH
Dr. Tawfik Jelassi, Tim Curtis
Brando Benifei
4 arguments, 119 words per minute, 947 words, 476 seconds
Argument 1
Innovation and ethics can coexist; risk‑based regulation (EU AI Act) balances both and prevents harmful uses
EXPLANATION
Brando argues that innovation and ethics are not opposed; the EU AI Act’s risk‑based approach exemplifies how regulation can protect human rights while still allowing innovation to flourish.
EVIDENCE
He describes the EU AI Act’s risk-based framework, identifying high-risk sectors, ensuring data quality, cybersecurity, and human control, and prohibiting certain uses such as predictive policing and emotion recognition [78-84].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The EU AI Act’s risk-based framework exemplifies how regulation can protect rights while fostering innovation, echoing EU Ethics Guidelines for Trustworthy AI [S35][S36][S1].
MAJOR DISCUSSION POINT
Innovation, Ethics, and Regulation are Not Mutually Exclusive
AGREED WITH
Tim Curtis, Dr. Tawfik Jelassi, Debjani Ghosh, Maria Grazia
Argument 2
Risk‑based regulatory approach identifies high‑risk sectors, mandates transparency, and even bans certain applications
EXPLANATION
Brando details how the EU AI Act uses a risk‑based methodology to target specific sectors, enforce transparency, and outright ban applications deemed unacceptable, illustrating practical regulation of AI.
EVIDENCE
He lists examples of regulated sectors (workforce, healthcare, justice), requirements for quality data and governance, and bans on predictive policing, emotion recognition, and manipulative techniques [80-84].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The EU AI Act specifies high-risk sectors, transparency obligations and bans on applications such as predictive policing and emotion recognition [S35][S36].
MAJOR DISCUSSION POINT
Operationalising AI Ethics: From Principles to Practice
Argument 3
A worldwide cooperation framework is required for issues that transcend borders, such as military AI and existential risks
EXPLANATION
Brando stresses that certain AI challenges, like military applications and existential threats, cannot be addressed by national rules alone and need coordinated global governance.
EVIDENCE
He cites the need for global cooperation on military AI and existential risks, warning that lack of globally adopted rules puts humanity in danger [195-199].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Calls for global north-south collaboration on AI governance, especially for military and existential risks, are highlighted in recent diplomatic discussions [S29][S28].
MAJOR DISCUSSION POINT
Global Cooperation and Multilateral Governance
AGREED WITH
Tim Curtis, Virginia Dignam, Debjani Ghosh, Maria Grazia
Argument 4
Regulation must consider human‑rights challenges across contexts, ensuring no community is left behind
EXPLANATION
Brando argues that regulation should protect human rights and exclude harmful AI uses, ensuring equitable outcomes for all societies.
EVIDENCE
He emphasizes that regulation must identify human-rights challenges, balance innovation, and mentions prohibitions on specific high-risk uses as examples of protecting rights [79-84].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Both UNESCO’s human-rights-focused recommendation and the EU’s AI Act stress the need for regulations that safeguard rights and avoid exclusionary outcomes [S26][S35][S36].
MAJOR DISCUSSION POINT
Inclusive and Diverse Perspectives in AI Development
Paula Goldman
4 arguments · 159 words per minute · 846 words · 318 seconds
Argument 1
Companies must turn ethical principles into practical controls, ensuring transparency, trust, and the ability to intervene when needed
EXPLANATION
Paula explains that Salesforce embeds ethical controls into its AI products, providing mechanisms for monitoring, escalation, and transparency so that customers can trust and manage AI outcomes.
EVIDENCE
She describes testing products, setting escalation points to humans, answering customer questions about results, failures and responsibility, and emphasizes practical, flexible solutions for different industries [141-149].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
UNESCO’s guidance advocates embedding ethics into concrete controls and monitoring mechanisms throughout product development [S37]; sandbox experiences reinforce the need for such operational safeguards [S38].
MAJOR DISCUSSION POINT
Operationalising AI Ethics: From Principles to Practice
AGREED WITH
Dr. Tawfik Jelassi, Debjani Ghosh, Virginia Dignam, Maria Grazia
Argument 2
Placing people at the centre of transformation drives adoption and ensures AI serves real work needs
EXPLANATION
Paula notes that successful AI scaling puts people at the centre, gathering user feedback on usefulness and integrating it into daily workflows, which leads to higher adoption and relevance.
EVIDENCE
She highlights that the most successful companies give employees a voice about AI usefulness, focus on real work processes, and keep people central to large-scale transformation [155-159].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Human-centred AI approaches argue that placing users at the core of design increases relevance and uptake, a view echoed in UNESCO’s ethical AI framework [S37][S36].
MAJOR DISCUSSION POINT
Human‑Centred AI and Accountability (“Humanity in the Loop”)
Argument 3
Product accessibility—supporting different languages, accents, and abilities—creates superior, inclusive AI solutions
EXPLANATION
Paula stresses that designing AI to handle diverse linguistic and accessibility needs (e.g., different English accents, deaf accents) not only promotes inclusion but also improves overall product performance.
EVIDENCE
She cites the need for AI voice capabilities to work across vernaculars, accents and for deaf users, arguing that inclusive design leads to higher accuracy and commercial success [221-227].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
AI for Good initiatives stress the importance of designing for linguistic and accessibility diversity to improve performance and inclusion [S41].
MAJOR DISCUSSION POINT
Inclusive and Diverse Perspectives in AI Development
Argument 4
Companies invest in upskilling staff to manage AI agents, escalation protocols, and ethical decision‑making
EXPLANATION
Paula describes how Salesforce trains its workforce to understand AI behavior, set escalation protocols, and handle ethical dilemmas, highlighting the importance of continuous skill development.
EVIDENCE
She mentions that staff are taught when AI should escalate to humans, how to monitor outcomes, and how to address ethical issues, reflecting an upskilling effort [140-148].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Capacity-building programmes that train staff on AI governance, escalation and ethical decision-making are highlighted as best practices by UNESCO and related capacity-building projects [S27][S31].
MAJOR DISCUSSION POINT
Education, Skills, and Upskilling for Ethical AI
AGREED WITH
Dr. Tawfik Jelassi, Virginia Dignam, Debjani Ghosh, Maria Grazia
Virginia Dignam
4 arguments · 146 words per minute · 1372 words · 562 seconds
Argument 1
Innovation should draw on varied cultural traditions (e.g., Ubuntu) rather than a single Western paradigm
EXPLANATION
Virginia argues that AI innovation rooted in diverse cultural philosophies, such as the African Ubuntu tradition, would produce fundamentally different and potentially more inclusive AI systems compared to the dominant Cartesian, individualistic approach.
EVIDENCE
She contrasts the Western Cartesian tradition (“I think, therefore I am”) with the Ubuntu perspective (“We are, therefore I am”), suggesting that AI built on Ubuntu would differ markedly [106-112].
MAJOR DISCUSSION POINT
Inclusive and Diverse Perspectives in AI Development
AGREED WITH
Tim Curtis, Brando Benifei, Debjani Ghosh, Maria Grazia
Argument 2
Education must equip engineers with tools to ask “why” and assess impacts, moving beyond vague AI buzzwords
EXPLANATION
Virginia calls for engineering curricula that integrate humanities and social sciences, enabling engineers to question the purpose, beneficiaries, and trade‑offs of AI solutions rather than focusing solely on technical aspects.
EVIDENCE
She stresses the need for engineers to ask why a problem exists, who gains or loses, and to combine technical skills with humanities to be precise about AI’s impact [124-130].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
UNESCO’s calls for integrating humanities and social sciences into engineering curricula aim to foster critical questioning of AI’s societal impact [S37][S34].
MAJOR DISCUSSION POINT
Operationalising AI Ethics: From Principles to Practice
AGREED WITH
Dr. Tawfik Jelassi, Paula Goldman, Debjani Ghosh, Maria Grazia
Argument 3
Collective intelligence and inclusive participation are essential for truly human‑centred AI
EXPLANATION
Virginia proposes that AI should amplify collective intelligence, bringing together diverse skills and perspectives, rather than being a monolithic tool, thereby ensuring AI serves humanity as a shared resource.
EVIDENCE
She describes AGI as a collective intelligence that emerges when people collaborate, likening it to a bouquet of diverse flowers rather than a single colour of Play-Doh, and stresses the need for inclusive participation [236-244].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
IFAP’s emphasis on inclusive, equitable societies underscores the role of collective intelligence and broad stakeholder participation in shaping human-centred AI [S27].
MAJOR DISCUSSION POINT
Human‑Centred AI and Accountability (“Humanity in the Loop”)
Argument 4
Engineers need training that integrates humanities and social sciences to ask critical “why” questions
EXPLANATION
Virginia reiterates that engineering education must blend technical expertise with humanities to foster critical thinking about AI’s societal impact, ensuring responsible development.
EVIDENCE
She again emphasizes the importance of asking why a problem matters, who benefits, and integrating humanities into engineering practice [124-130].
MAJOR DISCUSSION POINT
Education, Skills, and Upskilling for Ethical AI
Maria Grazia
4 arguments · 164 words per minute · 1794 words · 655 seconds
Argument 1
The session title is challenged to highlight that constraints need not stifle productivity
EXPLANATION
Maria questions the premise that innovation and ethics are at odds, arguing that appropriate frameworks can actually enhance productivity rather than hinder it.
EVIDENCE
She explicitly states she will challenge the title and argues that constraints or frameworks do not necessarily hinder innovation or productivity [13-20].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Analyses argue that safety and innovation are not mutually exclusive, and that appropriate frameworks can actually boost productivity [S36].
MAJOR DISCUSSION POINT
Innovation, Ethics, and Regulation are Not Mutually Exclusive
AGREED WITH
Tim Curtis, Dr. Tawfik Jelassi, Debjani Ghosh, Brando Benifei
Argument 2
Multilateral dialogue helps align diverse regulatory approaches and share best practices
EXPLANATION
Maria underscores the importance of multilateral platforms, such as UNESCO’s global perspective, for harmonising AI regulations and fostering cooperation among nations.
EVIDENCE
She references the need for a multilateral setting to discuss regulation and align approaches, noting UNESCO’s global perspective and the role of dialogue [186-188].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
North-South diplomatic initiatives and UNESCO’s global cooperation efforts illustrate the value of multilateral dialogue for harmonising AI regulation [S29][S28].
MAJOR DISCUSSION POINT
Global Cooperation and Multilateral Governance
AGREED WITH
Tim Curtis, Brando Benifei, Virginia Dignam, Debjani Ghosh
Argument 3
UNESCO’s mission links education, culture, and communication to embed humanity in AI deployment
EXPLANATION
Maria points out that UNESCO’s historic mission of building peace through education, culture and communication provides a foundation for placing humanity at the centre of AI initiatives.
EVIDENCE
She cites UNESCO’s original mission to build peace via education, culture, science and communication, and connects this to the need for human-centred AI [203-204].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
UNESCO’s 2021 AI ethics recommendation, rooted in its broader mission of education, culture and communication, provides the normative basis for human-centred AI [S26][S28].
MAJOR DISCUSSION POINT
Human‑Centred AI and Accountability (“Humanity in the Loop”)
Argument 4
Ongoing capacity development for policymakers and practitioners is essential for responsible AI deployment
EXPLANATION
Maria stresses that translating ethical principles into practice requires continuous capacity‑building for both policymakers and implementers, ensuring effective and responsible AI use.
EVIDENCE
She mentions the challenge of moving from principles to practice and the need for capacity development, referencing UNESCO’s global perspective and the importance of multilateral dialogue [33-36] and [186-188].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Capacity-building initiatives for policymakers are identified as crucial for moving from principles to practice in AI governance [S27][S31][S16].
MAJOR DISCUSSION POINT
Human‑Centred AI and Accountability (“Humanity in the Loop”)
AGREED WITH
Dr. Tawfik Jelassi, Virginia Dignam, Paula Goldman, Debjani Ghosh
Rita Soni
1 argument · 161 words per minute · 167 words · 62 seconds
Argument 1
Developers from regions facing power cuts, poor infrastructure, etc., must be included in design processes
EXPLANATION
Rita asks how to ensure that AI developers who experience real‑world challenges like power outages are involved in creating solutions, advocating for inclusive design that reflects diverse lived experiences.
EVIDENCE
She references the earlier comment about developers never experiencing power cuts, and asks how to involve those affected in the design of AI systems [278-286].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Edge-AI solutions designed for low-connectivity environments demonstrate the importance of involving developers who understand such constraints, reinforcing inclusive design principles [S41][S28].
MAJOR DISCUSSION POINT
Inclusive and Diverse Perspectives in AI Development
Audience
2 arguments · 101 words per minute · 45 words · 26 seconds
Argument 1
Defining AI policy involves tools, skills, and impact assessment beyond mere technology design
EXPLANATION
The audience member asks for a definition of AI policy, and Virginia responds that it concerns the tools, capabilities, skills and impact assessment needed to address AI’s societal effects, not just the technology itself.
EVIDENCE
The audience member asks the question about AI policy [263]; Virginia answers that AI policy is about tools, capabilities, skills and impact assessment rather than technology design [264-269].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The AI Policy Research Roadmap stresses that effective AI policy must address tools, capabilities, skills and impact assessment, not just technical design [S16]; UNESCO’s broader policy perspective aligns with this view [S37].
MAJOR DISCUSSION POINT
Global Cooperation and Multilateral Governance
Argument 2
Global policy frameworks should support continuous learning and skill development to keep pace with AI advances
EXPLANATION
The audience highlights the need for policy frameworks that facilitate ongoing education and upskilling so societies can adapt to rapid AI developments.
EVIDENCE
Following the earlier discussion on AI policy, the audience emphasizes that policy must enable continuous learning and skill development to stay current with AI advances (derived from the same exchange) [263] and [264-269].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Continuous learning and upskilling are highlighted as essential components of AI policy frameworks to stay abreast of rapid technological change [S16][S27].
MAJOR DISCUSSION POINT
Education, Skills, and Upskilling for Ethical AI
Agreements
Agreement Points
No trade‑off between innovation and ethics; ethical frameworks do not hinder innovation but can support it.
Speakers: Tim Curtis, Dr. Tawfik Jelassi, Debjani Ghosh, Brando Benifei, Maria Grazia
UNESCO’s stance: ethical frameworks do not hinder innovation; they guide AI to serve humanity
Innovation and ethics reinforce each other; ethical design leads to more trusted, widely adopted AI
No trade‑off between innovation and ethics; accountability must remain with humans, not delegated to technology
Innovation and ethics can coexist; risk‑based regulation (EU AI Act) balances both and prevents harmful uses
The session title is challenged to highlight that constraints need not stifle productivity
All speakers affirm that innovation and ethics are compatible and that ethical guidelines or regulation need not impede technological progress; instead they can enhance trust and productivity [2][38-40][49-56][74-77][13-20].
POLICY CONTEXT (KNOWLEDGE BASE)
This view aligns with discussions that regulation and ethics can act as enablers rather than barriers, as highlighted in WS #438 where speakers challenged the narrative that regulation stifles innovation and promoted flexible principle-based approaches [S56]; it is also echoed in perspectives treating ethics and regulation as experimental artifacts that can be refined over time [S58].
Ethics must be embedded throughout the AI lifecycle and translated into concrete practices.
Speakers: Dr. Tawfik Jelassi, Debjani Ghosh, Paula Goldman, Virginia Dignam, Maria Grazia
UNESCO Recommendation provides a global set of principles that need concrete translation into actions
Embed ethical oversight throughout the AI lifecycle – design, development, sandbox testing – rather than as an afterthought
Companies must turn ethical principles into practical controls, ensuring transparency, trust, and the ability to intervene when needed
Education must equip engineers with tools to ask “why” and assess impacts, moving beyond vague AI buzzwords
Multilateral dialogue helps align diverse regulatory approaches and share best practices
Speakers concur that high-level ethical principles must be operationalised at each stage of AI development, with oversight, testing, practical controls, and interdisciplinary education to make them effective [40-41][65-69][141-149][124-130][186-188].
POLICY CONTEXT (KNOWLEDGE BASE)
The need for lifecycle-wide ethics is central to the “Ethics-by-Design” approach presented in WS #45 and reinforced by UNESCO’s Recommendation on the Ethics of Artificial Intelligence, which calls for human-centred principles throughout development [S65][S67]; implementation challenges in commercial settings further underscore this requirement [S66].
Human accountability remains central; technology itself cannot be held responsible.
Speakers: Dr. Tawfik Jelassi, Debjani Ghosh, Tim Curtis
Human oversight, non‑discrimination, cultural respect, and environmental sustainability are core to ethical AI
Accountability ultimately rests with people; ethics cannot be outsourced to algorithms
UNESCO believes ethical AI deployment should be human‑centred
All agree that ultimate responsibility for AI outcomes lies with people, not the algorithms, emphasizing human oversight and accountability [40-41][55-60][2].
POLICY CONTEXT (KNOWLEDGE BASE)
Analyses of automated systems emphasize that formal accountability stays with humans, even when decision-making is delegated to AI, confirming that responsibility remains human-centric and extends to platforms distributing the technology [S68][S69].
Capacity development, education and upskilling are essential for responsible AI.
Speakers: Dr. Tawfik Jelassi, Virginia Dignam, Paula Goldman, Debjani Ghosh, Maria Grazia
Capacity‑building, advocacy, and awareness‑raising are vital for policymakers to implement ethical AI
Education must equip engineers with tools to ask “why” and assess impacts, moving beyond vague AI buzzwords
Companies invest in upskilling staff to manage AI agents, escalation protocols, and ethical decision‑making
Building capabilities in Tier‑2/3 cities democratizes AI design and fuels inclusive innovation
Ongoing capacity development for policymakers and practitioners is essential for responsible AI deployment
There is broad consensus that continuous training, education and capacity-building, both for policymakers and industry practitioners, are crucial to embedding ethics in AI practice [214-218][124-130][140-148][292-300][186-188].
POLICY CONTEXT (KNOWLEDGE BASE)
Multiple forums stress capacity building as foundational for AI governance, citing the need for local data infrastructure and training in the Global South [S60], online education and stakeholder engagement for responsible deployment [S76], and broader calls for capacity development across all actors [S77].
Global/multilateral cooperation and inclusive standards, especially involving the Global South, are needed for AI governance.
Speakers: Tim Curtis, Brando Benifei, Virginia Dignam, Debjani Ghosh, Maria Grazia
UNESCO’s partnership with India exemplifies the need for South‑South collaboration and inclusive global standards
A worldwide cooperation framework is required for issues that transcend borders, such as military AI and existential risks
Innovation should draw on varied cultural traditions (e.g., Ubuntu) rather than a single Western paradigm
AI Impact Commons showcases how developing countries solve local problems with AI, highlighting diverse use‑cases
Multilateral dialogue helps align diverse regulatory approaches and share best practices
Speakers highlight the importance of inclusive, multilateral approaches that bring together diverse cultural perspectives and the Global South to shape AI standards and address cross-border challenges [2][191-199][106-112][166-171][186-188].
POLICY CONTEXT (KNOWLEDGE BASE)
Panels on integrating the Global South highlight technology gaps and the necessity of international cooperation and inclusive standards to enable AI innovation worldwide [S60]; this is reinforced by calls for multilateral frameworks despite geopolitical tensions [S61] and by the broader argument for international standards as a foundation for AI governance [S62].
Similar Viewpoints
Both stress that ethical considerations must be integrated from the outset and throughout the AI development process, turning high‑level principles into concrete, lifecycle‑wide actions [38-40][40-41][65-69].
Speakers: Dr. Tawfik Jelassi, Debjani Ghosh
Innovation and ethics reinforce each other; ethical design leads to more trusted, widely adopted AI
UNESCO Recommendation provides a global set of principles that need concrete translation into actions
Embed ethical oversight throughout the AI lifecycle – design, development, sandbox testing – rather than as an afterthought
Both underline that inclusive, culturally aware design and interdisciplinary education are key to building AI systems that are both effective and socially responsible [221-227][106-112][124-130].
Speakers: Paula Goldman, Virginia Dignam
Product accessibility—supporting different languages, accents, and abilities—creates superior, inclusive AI solutions
Innovation should draw on varied cultural traditions (e.g., Ubuntu) rather than a single Western paradigm
Education must equip engineers with tools to ask “why” and assess impacts, moving beyond vague AI buzzwords
Both argue that regulatory or ethical frameworks can coexist with, and even support, innovation rather than impede it [2][74-77].
Speakers: Tim Curtis, Brando Benifei
UNESCO’s stance: ethical frameworks do not hinder innovation; they guide AI to serve humanity
Innovation and ethics can coexist; risk‑based regulation (EU AI Act) balances both and prevents harmful uses
Both emphasize the necessity of global, multilateral cooperation to address AI challenges that go beyond national jurisdictions [186-188][191-199].
Speakers: Maria Grazia, Brando Benifei
Multilateral dialogue helps align diverse regulatory approaches and share best practices
A worldwide cooperation framework is required for issues that transcend borders, such as military AI and existential risks
Unexpected Consensus
Inclusive design not only serves social goals but also yields superior, more commercially successful AI products.
Speakers: Paula Goldman, Virginia Dignam
Product accessibility—supporting different languages, accents, and abilities—creates superior, inclusive AI solutions
Innovation should draw on varied cultural traditions (e.g., Ubuntu) rather than a single Western paradigm
While industry often focuses on profitability, both speakers converge on the view that designing AI to accommodate diverse linguistic, cultural and accessibility needs improves overall performance and market success, a point not explicitly linked before the discussion [221-227][106-112].
POLICY CONTEXT (KNOWLEDGE BASE)
Research on inclusive technology design demonstrates that such approaches benefit society and improve market performance, supporting the claim that inclusive design drives commercial success [S64]; the Ethics-by-Design discourse further links inclusive practices to better product outcomes [S65].
Overall Assessment

The panel displayed a high degree of consensus across multiple dimensions: (i) innovation and ethics are compatible; (ii) ethical principles must be operationalised throughout the AI lifecycle; (iii) human accountability is paramount; (iv) capacity building and education are essential; (v) global, inclusive cooperation is required, especially involving the Global South.

Strong consensus – most speakers reiterated overlapping arguments, indicating a shared understanding that ethical, human‑centred AI can coexist with innovation when supported by concrete practices, capacity development and multilateral frameworks. This consensus provides a solid foundation for coordinated policy actions and collaborative initiatives in AI governance.

Differences
Different Viewpoints
Timing and method of integrating ethics into AI development – whether ethics should be built in from the design stage (ex‑ante) or addressed later as an afterthought, and whether ethical frameworks act as constraints or enablers.
Speakers: Maria Grazia, Dr. Tawfik Jelassi, Debjani Ghosh, Brando Benifei, Virginia Dignam
Maria challenges the title and suggests that frameworks might hinder innovation [13-20]. Tawfik argues that ethics and innovation reinforce each other and that AI must be ethical by design, not after the fact [38-40]. Debjani stresses that oversight must be built into every stage of the AI lifecycle, turning ethics into a design principle rather than an after-thought [65-69]. Brando promotes a risk-based regulatory approach that should be applied proactively to avoid irreversible harms, implying early governance rather than purely post-deployment checks [78-84]. Virginia criticises the current “hammer” view of innovation and says ethics is often treated as a finger that merely points, not as an integral part of design, calling for a broader toolbox [100-115][117-123].
Speakers diverge on whether ethical considerations are a necessary early design constraint or a later regulatory add‑on, with Maria fearing possible hindrance, Tawfik, Debjani and Brando advocating ex‑ante integration, and Virginia questioning the prevailing simplistic view of both innovation and ethics.
POLICY CONTEXT (KNOWLEDGE BASE)
The debate mirrors WS #438’s contrast between early-stage influence and later enforcement of ethical guidelines [S56]; it is also reflected in discussions that treat ethics and regulation as experimental, adaptable tools rather than fixed constraints [S58], and in the Ethics-by-Design narrative that stresses ex-ante integration [S65].
Scope and nature of regulation – risk‑based bans and proactive rules versus a more flexible, minimal‑intervention stance that avoids stifling innovation.
Speakers: Maria Grazia, Brando Benifei, Virginia Dignam, Debjani Ghosh
Maria notes that regulation might impede productivity and questions whether we should decide what technology should do for us, focusing instead on what not to allow [88-90]. Brando outlines the EU AI Act’s risk-based framework, including bans on predictive policing and emotion-recognition, arguing that such regulation is essential to protect human rights while still fostering innovation [78-84]. Virginia argues that regulation is often reduced to a prohibitive “finger” and should not be seen only as a tool for prohibition, calling for a more nuanced, experimental approach [117-123]. Debjani points out that regulation is usually an after-thought and must be fundamentally changed to be embedded throughout the development process [62-64].
While all participants agree regulation is needed, they disagree on how extensive it should be: Maria worries about over‑regulation, Brando defends a strong risk‑based regime with explicit bans, Virginia warns against viewing regulation merely as prohibition, and Debjani calls for integrating regulation early rather than as a post‑hoc fix.
POLICY CONTEXT (KNOWLEDGE BASE)
This tension is evident in WS #438’s comparison of flexible principle-based regulation versus binding law [S56]; similar viewpoints are expressed in India’s advocacy for minimal regulation to preserve innovation [S71]; and in calls for proportionate, risk-based regulatory frameworks that balance harms with innovation benefits [S73].
Definition of innovation – whether innovation is merely the application of new tools (the “hammer” metaphor) or a deeper, culturally‑informed challenge that goes beyond single‑purpose technologies.
Speakers: Virginia Dignam, Maria Grazia, Brando Benifei, Debjani Ghosh
Virginia critiques the reduction of innovation to using a hammer for any nail, urging a broader toolbox and cultural diversity in AI design [100-115]. Maria, while not directly defining innovation, challenges the notion that constraints necessarily hinder it, implying a more expansive view of innovation’s drivers [13-20]. Brando emphasizes that innovation must coexist with ethical safeguards, suggesting that innovation is not just tool-use but must respect human-rights challenges [78-84]. Debjani stresses that innovation should be directed toward solving real human problems (e.g., cancer-free world) rather than being an abstract pursuit [48-51].
The panelists differ on what counts as genuine innovation: Virginia calls for culturally‑rooted, problem‑oriented creativity, whereas other speakers treat innovation more generally as technological progress that must be balanced with ethics.
POLICY CONTEXT (KNOWLEDGE BASE)
The discussion resonates with analogies like the child-seat example used to argue that safety (ethical) measures do not impede innovation, illustrating a broader conception of innovation beyond mere tool deployment [S57]; similar themes appear in broader innovation-vs-regulation debates [S71].
Unexpected Differences
Regulation as prohibition versus regulation as an enabling, experimental tool
Speakers: Brando Benifei, Virginia Dignam
Brando presents regulation (EU AI Act) as a necessary, risk-based framework that includes explicit bans on high-risk uses to protect rights [78-84]. Virginia repeatedly describes regulation as a “finger” that merely points and a series of prohibitions, arguing that this view is too narrow and should be replaced by a more experimental, toolbox-oriented approach [117-123].
Both speakers support regulation but clash on its character: Brando sees bans as essential safeguards, whereas Virginia warns that viewing regulation solely as prohibition limits innovation and fails to capture its potential as a flexible, experimental instrument.
POLICY CONTEXT (KNOWLEDGE BASE)
Panels such as WS #438 and Ethical AI sessions frame regulation not as a prohibitive barrier but as an experimental, enabling artifact that can be iteratively refined to support responsible AI development [S56][S58].
Perception of AI as an ‘empty signifier’ versus AI as a concrete, problem‑solving tool
Speakers: Virginia Dignam, Debjani Ghosh
Virginia claims AI is currently an empty signifier, a vague term that needs precise definition and grounding in collective intelligence [124-133]. Debjani points to concrete impact stories from the AI Impact Commons, showing AI already solving specific problems in developing countries (malnutrition, pharma suicides, climate resilience) [166-171].
Virginia’s abstract critique of AI’s conceptual fuzziness contrasts with Debjani’s presentation of tangible AI applications, revealing an unexpected tension between viewing AI as a nebulous concept versus a set of real‑world solutions.
Overall Assessment

The panel exhibits broad consensus that ethical, human‑centred AI is essential and that innovation should not be sacrificed. However, substantial disagreement persists on how to operationalise ethics – whether through early design integration, risk‑based regulation, or broader cultural re‑thinking – and on the appropriate scope of regulation, with some advocating strong, pre‑emptive bans and others warning against over‑regulation. These divergences reflect differing institutional lenses (UNESCO policy, EU law, corporate practice, academic critique) and suggest that achieving coordinated global governance will require reconciling ex‑ante design mandates with flexible, context‑sensitive regulatory models.

The level of disagreement is moderate to high. While the overarching goal of trustworthy, human‑centred AI is shared, the lack of alignment on timing, mechanisms, and the philosophical framing of innovation and regulation could impede the formulation of cohesive policies and slow the translation of ethical principles into practice.

Partial Agreements
The speakers share the goal of ethical, trustworthy AI, yet propose different pathways – international normative guidance, internal governance processes, corporate product‑level controls, and statutory risk‑based regulation – to achieve it [38-40][65-69][141-149][78-84].
Speakers: Dr. Tawfik Jelassi, Debjani Ghosh, Paula Goldman, Brando Benifei
All agree that AI must be trustworthy and human-centred, but differ on implementation: Tawfik stresses UNESCO’s global recommendation as the guiding framework [38-40]; Debjani calls for lifecycle oversight with flag-offs and sandbox testing [65-69]; Paula describes concrete product controls, escalation points and user-feedback loops within Salesforce [141-149]; Brando outlines a risk-based legal regime with sector-specific requirements and bans [78-84].
While agreeing on the importance of inclusive capacity development, they differ on the primary mechanism – multilateral policy forums, national impact‑story platforms, or curriculum reform – to embed humanity in AI.
Speakers: Maria Grazia, Debjani Ghosh, Virginia Dignam
All emphasize the need for inclusive capacity building: Maria calls for multilateral dialogue and capacity development for policymakers [33-37][186-188]; Debjani highlights AI Impact Commons and tier-2/3 city initiatives to democratise AI design [166-171][292-300]; Virginia stresses education that blends engineering with humanities to ask ‘why’ and incorporate diverse cultural perspectives [124-130][106-112].
Takeaways
Key takeaways
Innovation, ethics and regulation are not mutually exclusive; ethical design can enhance trust and adoption of AI.
UNESCO's 2021 Recommendation provides global principles (human rights, dignity, freedoms) that must be translated into concrete actions across the AI lifecycle.
Human-centred AI requires accountability to remain with people, not delegated to algorithms; oversight should be built in from design through deployment.
Risk-based regulatory approaches (e.g., the EU AI Act) can balance innovation with protection by identifying high-risk sectors and prohibiting harmful uses.
Global cooperation and multilateral governance are essential for cross-border challenges such as military AI and existential risks.
Inclusive and culturally diverse perspectives (e.g., the Ubuntu tradition, developers from low-resource settings) enrich AI innovation and avoid a single Western paradigm.
Education and up-skilling must integrate technical, humanities, and social-science knowledge so engineers can ask "why" and assess impact.
Practical industry examples (Salesforce) show that embedding transparency, escalation mechanisms, and accessibility leads to superior, market-ready AI.
Resolutions and action items
Encourage member states to continue operationalising UNESCO's AI ethics recommendation through national frameworks and capacity-building programmes.
Develop and promote the AI Impact Commons platform to share impact stories and best practices, especially from developing countries.
Adopt an 'ethics-by-design' lifecycle model that includes ethical checkpoints, sandbox testing, and transparent documentation before commercial release.
Support the creation of risk-based regulatory sandboxes that allow innovation while ensuring high-risk applications are monitored or prohibited.
Invest in up-skilling programmes for engineers, data scientists and policymakers that blend technical training with humanities and social-science perspectives.
Facilitate multilateral dialogue (UNESCO-EU-India) to align standards on prohibited AI uses (e.g., predictive policing, emotion recognition in workplaces).
Unresolved issues
How to achieve global consensus on a common set of ethical values when cultural, political and economic contexts differ markedly.
Specific mechanisms for translating high-level UNESCO principles into enforceable national or sectoral regulations remain unclear.
Ways to systematically include developers from under-served regions (e.g., those experiencing power cuts) in AI design processes were raised but not detailed.
The definition and scope of "AI policy", and how it should be differentiated from technical standards, need further clarification.
Methods for monitoring compliance with ethical checkpoints and for assigning accountability when harms occur were not fully resolved.
Suggested compromises
Adopt a risk-based regulatory framework that bans clearly harmful applications while allowing lower-risk innovation to proceed under oversight.
Implement flexible, context-specific ethical guidelines rather than a rigid one-size-fits-all approach, enabling adaptation to local realities.
Combine regulatory requirements with industry self-governance (e.g., internal ethics boards, sandbox testing) to reduce compliance burden while maintaining safeguards.
Promote inclusive design toolkits that provide a variety of "tools" (not just a single "hammer") to accommodate diverse cultural and technical needs.
Thought Provoking Comments
I’m challenging the very title of this meeting, that is *Balancing Innovation and Ethics in the Age of AI*. Innovation and ethics are not a trade‑off; they can reinforce each other, just as regulation in pharma has not hindered innovation.
She reframes the central premise of the panel, turning a presumed dichotomy into a question about how the two can be synergistic. By invoking the pharma analogy she introduces a concrete counter‑example that many participants had not considered.
Her challenge prompted the first round of responses that explicitly addressed the relationship between innovation, ethics and regulation. It set the tone for a constructive rather than adversarial debate and opened space for speakers to discuss operationalising ethics rather than viewing it as a barrier.
Speaker: Maria Grazia
I don’t see a contradiction between ethics and innovation; I see it between innovation and regulation. Ethical design makes AI more trustworthy and therefore more widely adopted – ethics and innovation reinforce each other.
He shifts the focus from a perceived conflict to a complementary relationship, emphasizing ‘ethics‑by‑design’ and the need for early‑stage human‑centred safeguards.
His point directly answered Maria’s challenge and provided a framework (ethical design ex‑ante) that other panelists referenced. It steered the conversation toward practical integration of ethics in the development lifecycle.
Speaker: Dr. Tawfik Jelassi
The real choice is not between innovation and ethics, but between using technology to make everyone cancer‑free, fed and dignified, or using it to create conflict and weapons. We cannot align every human on the same ethical values, so accountability must stay with people, not the technology.
She reframes the debate from abstract principle to concrete societal outcomes and highlights the limits of universal ethical consensus, stressing human accountability.
Her framing broadened the discussion from technical guidelines to societal purpose, prompting Brando Benifei to discuss risk‑based regulation and Virginia Dignam to question the very notion of ‘innovation’ as a hammer‑and‑nail metaphor.
Speaker: Debjani Ghosh
Innovation is more than just using the latest hammer (e.g., generative AI) to nail any problem. We need diverse epistemologies – imagine AI built on the African Ubuntu tradition ‘we are, therefore I am’ rather than the Western Cartesian ‘I think, therefore I am’. Ethics and regulation should be experimental tools, not immutable commandments.
She introduces cultural pluralism into AI design, challenges the dominant Western epistemology, and reconceptualises ethics and regulation as iterative, experimental processes rather than static rules.
Her cultural critique sparked a shift toward discussing inclusivity and education. It led Paula Goldman to stress practical, inclusive product design, and Debjani to highlight the ‘luxury’ of developed‑country perspectives versus the needs of developing nations.
Speaker: Virginia Dignam
Regulation should not be an afterthought. Oversight must be built into every stage of development, with ‘red‑tape’ checkpoints and sandbox testing, so ethics becomes by‑design rather than a post‑hoc fix.
She provides a concrete procedural roadmap for embedding ethics, moving the conversation from abstract principles to actionable governance mechanisms.
This concrete suggestion influenced Brando Benifei’s description of the EU AI Act’s risk‑based approach and reinforced Paula Goldman’s emphasis on iterative testing and human escalation points in AI systems.
Speaker: Debjani Ghosh
The EU’s risk‑based AI Act shows that we can prohibit certain high‑risk uses (e.g., predictive policing, emotion recognition at work) while still fostering innovation elsewhere. Transparency is crucial for trust, especially in democratic societies.
He brings a concrete policy example that balances prohibition with innovation, illustrating how regulation can be selective rather than blanket, and underscores the role of trust.
His example gave the panel a real‑world reference point, prompting Maria Grazia to ask about the role of multilateral cooperation and leading Dr. Jelassi to discuss global peace‑building dimensions of AI.
Speaker: Brando Benifei
Education must bridge the gap between engineers and humanities. Engineers need to ask ‘why is this a problem, who benefits, who loses?’ and the humanities must help make AI a precise, non‑magical term.
She identifies the root cause of ethical lapses as disciplinary silos and proposes interdisciplinary education as the remedy, moving the debate from policy to capacity‑building.
This comment deepened the conversation about skill development, influencing later remarks by Paula Goldman on practical training and by Debjani on up‑skilling in Tier‑2/3 Indian cities.
Speaker: Virginia Dignam
At UNESCO we built community radios in a remote African village, then telecoms followed, then internet and early‑warning systems. That shows how technology, when placed at the centre of people’s lives, can transform societies.
He provides a vivid, ground‑level case study that illustrates the ‘human‑in‑the‑loop’ principle in action, moving the discussion from theory to tangible impact.
The anecdote reinforced the panel’s emphasis on human‑centred deployment and inspired other speakers (e.g., Paula Goldman) to talk about inclusive product design and real‑world testing.
Speaker: Dr. Tawfik Jelassi
AI agents that correct accessibility issues in real time (e.g., fixing broken UI for a deaf user) demonstrate that inclusive design is not a cost but a commercial advantage – the more inclusive the product, the more successful it is.
She links ethical design directly to business value, countering the myth that inclusion is a financial burden and providing a concrete example of ethical AI in practice.
Her point shifted the conversation toward the business case for ethics, prompting Maria Grazia to highlight that inclusion improves performance, and reinforcing the earlier claim that ethics and innovation are mutually supportive.
Speaker: Paula Goldman
We must democratise not just access to AI but also its design. Developers in Tier‑2/3 cities face power cuts and infrastructure challenges; we need to bring those lived experiences into AI creation to ensure relevance and fairness.
She raises a practical equity issue – the inclusion of under‑represented developers – that had not been explicitly addressed, linking back to earlier cultural critiques.
Her question prompted Debjani Ghosh to discuss initiatives like Startup India and the AI Impact Commons, highlighting concrete steps to broaden participation in AI development.
Speaker: Rita Soni (audience)
Overall Assessment

The discussion began with a theoretical framing of ‘balancing innovation and ethics.’ Maria Grazia’s challenge to this framing acted as a catalyst, prompting speakers to reconceptualise the relationship as synergistic rather than antagonistic. Dr. Jelassi’s ‘ethics‑by‑design’ stance, Debjani’s focus on societal outcomes, Brando’s concrete EU policy example, and Virginia’s cultural‑pluralism critique each introduced new dimensions—operational mechanisms, global governance, and epistemic diversity—that redirected the conversation from abstract principles to actionable pathways. Paula’s industry‑level illustration that inclusive design drives commercial success reinforced the emerging consensus that ethics fuels innovation. Audience input from Rita highlighted the need for inclusive developer participation, closing the loop on the panel’s theme of ‘humanity in the loop.’ Collectively, these pivotal comments shifted the tone from a high‑level debate to a concrete, multi‑stakeholder roadmap, underscoring that ethical AI is achievable through early‑stage design, inclusive education, targeted regulation, and global cooperation.

Follow-up Questions
What are the biggest gaps between UNESCO’s AI ethics principles and their implementation on the ground?
Identifying practical barriers is essential to move from high‑level recommendations to actionable policies and practices.
Speaker: Maria Grazia (moderator) / Dr. Tawfik Jelassi
What mechanisms can effectively embed ethical reflection into the everyday operations of companies and sectors?
Concrete frameworks are needed so that ethical considerations become routine rather than an after‑thought in product development.
Speaker: Maria Grazia (moderator) / Debjani Ghosh
How should “human oversight” and redress mechanisms be defined and operationalised within AI regulation?
Clear guidance on oversight is required to ensure AI systems respect human rights while remaining innovative.
Speaker: Maria Grazia (moderator) / Brando Benifei
How can we prevent people from being merely consumers of AI and instead empower them to shape and direct the technology?
Education and participatory approaches are needed so that citizens actively influence AI development rather than passively receive it.
Speaker: Maria Grazia (moderator) / Virginia Dignam
What concrete models or tools can translate AI ethics principles into actionable practices for companies?
Businesses need practical, scalable solutions to embed ethics into product design, testing, and deployment.
Speaker: Maria Grazia (moderator) / Paula Goldman
What exactly constitutes an “AI policy” and how does it differ from technical design or regulation?
A clear definition helps policymakers, companies, and educators align on the scope and objectives of AI governance.
Speaker: Rajan (audience) / Virginia Dignam
How can developers from low‑resource contexts (e.g., areas with frequent power cuts) be included in AI design and development?
Inclusive design requires democratizing access to AI development tools and training for people who experience the challenges AI aims to solve.
Speaker: Rita Soni (audience) / Debjani Ghosh
What global cooperation frameworks are needed to address high‑risk AI applications such as military use and existential threats?
International research and policy coordination are crucial to prevent unregulated deployment of potentially dangerous AI systems.
Speaker: Brando Benifei
How can the terminology around AI be clarified, given that it is often used as an “empty signifier”?
Research into precise definitions will improve public discourse, policy drafting, and interdisciplinary collaboration.
Speaker: Virginia Dignam
What are the measurable impacts of AI projects in developing countries, and how can successful models be scaled?
Studying impact stories (e.g., via the AI Impact Commons) will provide evidence for effective AI interventions and guide replication.
Speaker: Debjani Ghosh
What evidence exists on the effectiveness of risk‑based AI regulation (e.g., the EU AI Act) in balancing innovation and ethics?
Empirical research is needed to assess whether risk‑based approaches achieve intended safety outcomes without stifling innovation.
Speaker: Brando Benifei (implied)
How does collective intelligence compare to the concept of AGI, and what research is needed to understand their relationship?
Exploring collective intelligence as a practical alternative to speculative AGI can inform more realistic AI governance strategies.
Speaker: Virginia Dignam

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

How Multilingual AI Bridges the Gap to Inclusive Access

Session at a glance: summary, keypoints, and speakers overview

Summary

The session opened with Markus Reubi emphasizing that AI must serve the public good by supporting all languages and cultures, framing multilingual access as a democratic imperative and previewing the Geneva AI Summit 2027 as a venue for continued cooperation [4-5][6-9][15]. Torsten Schwede then announced that the Indo-Swiss Joint Research Programme will launch three new joint calls covering geosciences, social sciences, and One Health, and introduced a longer-term Indo-Swiss Research Framework that will include artificial intelligence as a high-priority topic [27-34][41-44]. New funding mechanisms such as the Explore, Experiment, and Expand grants were also presented to foster novel collaborations, increase mobility, and host flagship events in both countries [45-49]. Nina Frey then introduced the panel, noting the focus on language diversity and inviting Amitabh Nag to discuss India's Bhashini initiative, which targets 22 constitutionally recognized languages across speech, text, and OCR modalities [77-86][89]. Nag explained that Bhashini overcame a lack of digital data by field-collecting corpora with some 200 volunteers and has already deployed a voice-first agricultural advisory system for farmers, while expanding to 36 languages, including some that previously lacked a written script [99-108][93-95]. Aya Bedir of Current AI described the organization as a public-private partnership with $400 million pledged, dedicated to multilingual diversity and cultural preservation, and warned that large-tech data scraping can treat communities as mere data rather than partners [124-136][158-164]. Alex Ilic presented the open-source Apertus model, highlighting a global talent shortage (only about a hundred experts can build foundation models) and arguing that academia must receive compute, data, and benchmark resources to scale multilingual AI [183-190][194-202][210-218].
He noted that current training data is 60% English and outlined a plan to incrementally raise performance for the next hundred languages while leveraging collaborations such as the ICAIN network [194-198][201-202]. Petri Myllymäki from the Nordic ELIS network stressed that language access is a human right, that cultural value frameworks differ, and called for inclusive global initiatives that invite all nations to the AI "dinner table" [224-232][236-239]. A representative from NTU Singapore described the C-Line model covering 13 Southeast Asian languages, emphasizing frugal data approaches, sovereignty concerns, and the need to reflect code-switching and dialectal variation in AI systems [250-263][265-272]. Annie Hartley illustrated the risks of deploying poorly adapted models in high-stakes medical settings, recounting a misdiagnosis in Ethiopia caused by reliance on a Bible-trained model, and advocated for neutral academic validation through the MOVE project, which gathers real-world feedback [287-301][326-334]. She argued that such implementation science, though costly, is essential for ensuring models work accurately across diverse cultural contexts and for maintaining control (or "sovereignty") over AI tools [338-347][354-357]. Across the contributions, participants agreed that multilingual, culturally aware AI requires coordinated funding, open-source models, talent development, and robust validation pipelines [8][237-239]. The discussion concluded with a reaffirmation that the upcoming Geneva AI Summit will serve as a platform to advance these collaborative efforts and embed multilingual equity into future AI governance [15][210][358].


Keypoints


Major discussion points


Multilingual AI as a democratic and public-good imperative – Swiss representatives framed language inclusion as essential for democratic participation and digital equity, citing the need to serve "all languages and all cultures", describing linguistic exclusion as a "persistent barrier", and calling language inclusion a "democratic imperative" [4-5]. They highlighted the open-source multilingual model Apertus (developed by ETH Zurich and EPFL) as a concrete example of a public-interest tool [14-15]. Later speakers repeatedly returned to the theme, stressing that language diversity underpins cultural preservation and equitable AI [68-69][135-136][225-233].


Indo-Swiss research collaboration and new funding programmes – Torsten Schwede announced three new joint calls (geosciences, social sciences, One Health) and the launch of an Indo-Swiss Research Framework Program, emphasizing “high-impact research” and “long-term co-created research” [27-34][41-45]. He also introduced new grant schemes (Explore, Experiment, Expand) and expanded mobility funding to sustain durable collaborations [46-48].


India’s Bhashini initiative: building multilingual data and applications – Amitabh Nag described Bhashini (Bhasha Interface for India) as a platform covering 22 constitutional languages, detailing the five technical pillars (ASR, text-to-text, text-to-speech, OCR, digital dictionary) [83-87] and the grassroots data-collection effort that created monolingual and bilingual corpora [99-105]. He gave concrete use-cases such as a voice-first agricultural advisory system for farmers [108] and the “Gyan Bharatam” manuscript project [108-109].


Public-private partnerships and open-source models to scale multilingual AI – Aya Bedir outlined the Current AI public-private partnership, its $400 million initial commitment (aiming for $2.5 billion), and its focus on multilingual diversity and cultural preservation [124-131][135-144]. Alex Ilic explained the Apertus model, the talent bottleneck (≈ 100 experts worldwide) and the need for academia-driven compute, data and benchmarks [183-194][195-200]. Petri Myllymäki reinforced the human-right framing of language access and the necessity of inclusive global initiatives [224-233][236-239].


Validation, real-world high-stakes testing, and sovereignty over AI tools – Annie Hartley warned that language-only performance is insufficient for high-stakes domains (e.g., medical advice in Ethiopia) and described the MOVE (Massive Open Online Validation and Evaluation) project to collect real-world feedback [287-298][329-337]. She linked this to broader concerns about “sovereignty” – the need for communities and nations to control AI systems rather than be passive data sources [158-166][266-272].


Overall purpose / goal of the discussion


The session served as a high-level convening of governments, research institutions, and public-private initiatives to (1) announce new Indo-Swiss research funding, (2) showcase concrete multilingual AI projects (Apertus, Bhashini, Current AI), (3) stress the democratic necessity of language inclusion, and (4) chart a collaborative roadmap, culminating in future summits (Geneva 2027) and joint validation efforts, to build a globally equitable AI ecosystem.


Tone of the discussion


The conversation began with formal, diplomatic language emphasizing partnership and policy [1-10]. As the agenda progressed, speakers adopted a more enthusiastic and celebratory tone when announcing funding and showcasing projects [27-34][77-89]. Mid-session, the tone shifted to reflective and cautionary, highlighting ethical concerns, data-ownership issues, and the need for community-centric approaches [158-166][287-298]. Throughout, the overall atmosphere remained collaborative and forward-looking, ending on an appreciative and hopeful note [358-364].


Speakers

Alex Ilic – Executive Director of the AI Center; Co-founder of ICAIN; expertise in multilingual AI model development and academic-industry collaboration [S1][S3]


Annie Hartley – Professor (EPFL/Yale); Director of the LIGHTS Lab (Laboratory for Intelligent Global Health and Humanitarian Response Technology); expertise in high-stakes medical AI applications and validation [S2]


Aya Bedir – CEO of Current AI; expertise in public-interest AI, multilingual diversity, and hardware-focused AI initiatives [S4][S5]


Markus Reubi – Swiss delegate/speaker representing Switzerland’s AI policy and multilingual AI agenda; expertise in AI governance and international collaboration [transcript]


Participant – Dean of the College of Humanities, Arts and Social Sciences at NTU Singapore; historian; expertise in cultural aspects of AI, multilingual models for Southeast Asia, and sovereignty in AI [S8]


Amitabh Nag – CEO of Bhasha Interface for India (Bhashini); expertise in multilingual speech, text, and OCR technologies for Indian languages [S11][S12]


Nina Frey – Executive Director of ICAIN (also referred to as ICANN in the transcript); expertise in network coordination for multilingual AI research and policy [S13][S14]


Petri Myllymäki – Founding member of ICAIN; representative of the ELIS Network and Finnish Supercomputing Centre; former member of the UN Age Lab; expertise in language preservation, human-rights aspects of language access [S15]


Torsten Schwede – President of the Swiss National Science Foundation; expertise in research funding, Indo-Swiss scientific collaboration, and multidisciplinary AI research [S16]


Additional speakers:


(None identified beyond the listed speakers)


Full session report: comprehensive analysis and detailed insights

The session opened with Markus Reubi framing multilingual artificial intelligence as a democratic necessity. He argued that "AI can only serve the public good if it serves all languages and all cultures" and described linguistic exclusion as "one of the most persistent barriers to digital participation" – a technical challenge that is also a "democratic imperative" [4-5]. Reubi placed this message within a broader international trajectory that began with the Paris 2025 public-interest AI process, continued at the India AI Summit 2026, and will culminate in the Geneva AI Summit 2027 [6-9]. He highlighted Switzerland's contribution of the open-source multilingual model Apertus, developed by ETH Zurich and EPFL, as a concrete public-interest tool that underpins inclusive digital public services [14-15]. Reubi also underscored ICAIN's role in providing equitable access to compute, data and multilingual models [12-13].


Torsten Schwede then announced a suite of new Indo-Swiss research initiatives. Under the Indo-Swiss Joint Research Programme (JRP), three calls were launched – one on geosciences, one on social sciences, and a recent One Health call that addresses the interconnected health of humans, animals and the environment [27-34]. He presented the Indo-Swiss Research Framework Programme as a long-term mechanism to co-create research, with artificial intelligence identified as a high-priority thematic area [41-44]. To stimulate novel collaborations, Schwede introduced “Explore, Experiment and Expand” grants that allow consortia to test blue-sky ideas, extend proven partnerships and increase mobility funding for sustained collaboration [45-48]. He also pledged a series of flagship events in both Switzerland and India to keep the network engaged [49-50].


The panel was introduced by Nina Frey, Executive Director of ICAIN. She noted that the network links academic partners across Europe, Africa and Singapore and that the session's focus on language diversity reflects a "red line" running through the series of summits, from Bletchley to the present [54-60][68-69][70-72]. Frey also noted the presence of a board member from the Finnish Supercomputing Centre, highlighting the importance of high-performance computing for multilingual AI [54-60]. She then handed the floor to Amitabh Nag to discuss India's Bhashini initiative [73-75].


Amitabh Nag described Bhashini (Bhasha Interface for India) as a platform that initially covered the 22 languages enumerated in India's Eighth Schedule and aimed to "transcend the language barrier using artificial intelligence" [77-86]. The programme targets five technical pillars – automatic speech recognition, text-to-text translation, text-to-speech synthesis, optical character recognition and a digital dictionary – all built for the 22 languages [83-87]. Since its launch, Bhashini has expanded to 36 languages, including some that previously lacked a written script, and is actively digitising tribal languages [91-96]. A key obstacle was the "non-availability of digital data", which was overcome by a field-based "brute-force" effort involving about 200 volunteers who collected speech, images and text to create monolingual and bilingual corpora [99-105]. Nag highlighted two early deployments: a voice-first agricultural advisory system that lets farmers ask questions in their native language, and the "Gyan Bharatam" manuscript digitisation project [108-109].


Aya Bedir presented Current AI, a public-private partnership that has secured an initial $400 million commitment (with a target of $2.5 billion) from the French government and multiple other national and philanthropic partners [124-131][132-134]. The initiative places multilingual diversity and cultural preservation at its core, extending its focus beyond language to behaviours, norms and artefacts [135-144]. Bedir warned that large-tech firms often “scrape data” and treat communities as mere data points, arguing that genuine progress requires “getting as close as possible to the communities themselves” and supporting them to preserve their own cultures [158-164].


Alex Ilic elaborated on the open-source foundation model Apertus, noting that only about a hundred experts worldwide possess the expertise to build such large-scale models and that both talent and high-performance computing are critical bottlenecks [183-186]. He explained that the current training data is 60% English and 40% non-English, limiting performance for many languages [191-196]. Ilic outlined a strategic plan to incrementally raise performance for the next hundred languages, assess the associated costs, and leverage ICAIN's shared compute infrastructure [197-202][210-218]. He stressed the need for community-defined benchmarks that reflect cultural contexts rather than generic corporate metrics [191-196][197-200].


Petri Myllymäki, representing the Nordic ELIS network, framed language access as a human right, noting that “access to language and culture is a human right” and that AI systems must respect diverse value frameworks [224-233]. He called for inclusive global initiatives that invite every nation to the AI “dinner table” as guests, not merely as part of the menu [236-239].


A participant from NTU Singapore described the regional C-Line model, which supports 13 Southeast Asian languages (including Tamil) and is built partly on Apertus [250-263]. The speaker highlighted the model's frugal data approach, its respect for national sovereignty, and its ability to handle code-switching and dialectal variation in everyday speech [264-272].


Annie Hartley illustrated the dangers of deploying inadequately adapted models in high-stakes settings. In Ethiopia, an AI system trained primarily on the Bible incorrectly advised a patient not to take insulin, demonstrating that “language-only performance is insufficient for medical advice” [287-295][296-304]. Hartley also highlighted that she heads the LIGHTS laboratory (Laboratory for Intelligent Global Health and Humanitarian Response Technology), which coordinates the MOVE (Massive Open Online Validation and Evaluation) project to collect real-world clinical feedback and continuously improve models [329-337]. She argued for rigorous, real-world validation and introduced MOVE as a neutral, open-science platform because commercial entities lack incentive to test models in such critical contexts [305-313][338-345].


Agreements emerged across the discussion: all speakers affirmed that multilingual AI is essential for democratic participation, human rights and equitable digital development [4-5][57-58][224-229][126-132][158-164][190-196][250-263][288-295][78-86][89-95]; talent scarcity and the need for shared compute resources were identified as major bottlenecks [183-186][220-222][57-58]; and there was unanimous support for collaborative, multistakeholder funding mechanisms-such as the Indo-Swiss joint calls, the Explore/Experiment/Expand grants, and the public-private structure of Current AI-to sustain long-term research and deployment [27-34][41-44][45-49][124-131][124-134].


Disagreements were noted. Bedir advocated for frugal, community-led scaling that avoids exploitative big-tech data scraping, whereas Reubi and Ilic emphasized the necessity of high-performance computing and specialised talent to train foundation models [126-164][220-222][183-186]. A second tension concerned the role of big-tech versus academia: Bedir warned against “brute-force” data collection by large firms, while Ilic highlighted that current benchmarks are dominated by big-tech publications and called for academia-driven alternatives [126-164][183-186][27-34][41-44]. A third divergence related to validation focus: Hartley called for extensive health-sector testing, whereas Ilic’s remarks centred on model development and benchmark creation without explicit health-specific validation [287-295][183-186].


In conclusion, participants reaffirmed that the upcoming Geneva AI Summit 2027 will serve as a pivotal platform to advance these collaborative efforts [7-9][15][210-212]. Concrete action items include: launching the three Indo-Swiss joint research calls and the broader Research Framework Programme [27-34][41-44]; deploying the Explore, Experiment and Expand grants [45-49]; expanding Bhashini to cover all 100+ Indian languages, including script-less tribal languages [77-86][91-96]; continuing development and open dissemination of Apertus with improved, culturally relevant benchmarks [183-202]; and scaling the MOVE validation pipeline for high-stakes domains such as healthcare [329-337]. Unresolved issues, including sustainable financing for large-scale data collection, ethical data-ownership practices, standards for culturally relevant benchmarks, and mechanisms to balance national sovereignty with interoperable global models, were acknowledged as priorities for future work. The session closed with a collective commitment to maintain momentum through regular flagship events, shared compute resources, and ongoing multilateral dialogue [358-364].


Session transcriptComplete transcript of the session
Markus Reubi

as a bridge to democratic access. Switzerland is very pleased to contribute to this global conversation at a pivotal moment for responsible AI. Our message, which was supposed to be delivered by our president, is very clear: AI can only serve the public good if it serves all languages and all cultures. Today, linguistic exclusion remains one of the most persistent barriers to digital participation; ensuring multilingual access is therefore not only a technical challenge, it is a democratic imperative. This discussion forms part of the international arc that began with the Paris 2025 public interest AI process, continues here at the India AI Summit 2026, and will advance further when Switzerland happily hosts the Geneva AI Summit in 2027.

Our shared objective is continuity, cooperation and a genuinely global approach to AI governance. Switzerland is proud that this session brings together partners who embody open and collaborative innovation: India’s Bhashini initiative; Current AI, which emerged from the French AI Summit; and many partners from the broader network of academic and policy institutions of ICAIN, the International Computation and AI Network, such as ELIS, NTU Singapore and of course the Swiss partners ETH and EPFL. ICAIN really reflects Switzerland’s commitment to equitable access to compute, data and multilingual models. A notable example is Apertus, which many of you may have heard of. Developed by ETH Zurich and EPFL, it is a fully open and transparent multilingual model designed to support public interest applications across diverse linguistic communities.

As we prepare for Geneva 2027, Switzerland views multilingual AI as a foundation for inclusive digital public services and for strengthening participation across societies. Allow me to very briefly outline today’s agenda. We will begin with the announcement of the launch of the three new joint calls under the lead of the Indo-Swiss Joint Research Programme (JRP), which further strengthens our bilateral ties in science, innovation and research between Switzerland and India. This will be followed by a panel discussion with distinguished international guests, and I am very happy to announce that it will be moderated by my colleague Nina Frey, the Executive Director of ICAIN. Thank you so much for attending. I will hand over the floor to the next speaker, Professor Torsten Schwede, President of the Swiss National Science Foundation.

Very warm welcome. Thank you.

Torsten Schwede

Your Excellencies, ladies and gentlemen, namaste. It’s my great pleasure to be here with you today, at a particularly exciting moment in the Indo-Swiss research collaboration. As many of you know, Switzerland and India have a long-standing, trusted partnership in research built on reciprocity, on joint excellence, and on shared priorities. Today, this collaboration is stronger than ever, and I’m delighted to announce three new calls for joint research projects, as well as the launch of our new Indo-Swiss Research Framework Programme between the Swiss National Science Foundation and our Indian partner organizations. This is a really remarkable convergence that underscores both the depth and the breadth of our bilateral engagements.

The three calls for joint research programs span a very diverse range of disciplines and are designed to foster cutting-edge, high-impact research. The first two calls that we launched earlier this year are in the geosciences and in the social sciences. Together with the Indian Ministry of Earth Sciences, we are inviting proposals on natural hazards in mountain regions, a field of great relevance for both our countries as we each face unique geological challenges. In parallel, our call with the Indian Council of Social Science Research opens the door for joint projects on pressing social and societal questions, again strengthening our collaboration in a domain where cross-cultural perspectives significantly enrich the research outcomes.

And two weeks ago, the Swiss National Science Foundation, together with the Indian Department of Biotechnology and the Indian Council of Medical Research, launched a third call focused on One Health, a topic of real global urgency. This One Health call is particularly important for us. It reflects many months of preparation and close coordination with our Indian partners and embodies the holistic approach needed to understand the interconnected health of humans, animals, and the environment. The challenges we face in this area know no borders, and international collaboration is indispensable. We therefore anticipate very high uptake, interest and participation of researchers in both our communities. Taken together, these three simultaneous calls represent an exceptional moment in Indo-Swiss research cooperation.

They showcase our commitment to enabling ambitious science, from fundamental research questions in the natural and the life sciences to complex issues shaped by society, geography or technology. And with each call, we reaffirm our shared belief that long-term, co-created research is the key to addressing the major challenges of our times. So building on these strong foundations, now is the right moment to announce a new strategic long-term collaboration, the Indo-Swiss Research Framework Programme between the SNSF and our Indian partner organizations. We aim to create a program in which all researchers wishing to contribute to the Indo-Swiss cooperation can find appropriate support. Thematic calls on strategic areas will be launched together with our Indian partners and remain at the core of this program.

And to this audience, it might not come as a real surprise that one of the high-priority topics we are currently considering is artificial intelligence. In addition to these bilateral and multilateral calls, I’m also pleased to announce that we are launching several new measures and funding schemes to support collaborative research. With our brand-new Explore, Experiment, and Expand grants, we want to give consortia the opportunity to explore new collaborations, new networks, new partnerships. We want to allow them to experiment with blue-sky topics and methodologies that haven’t been tried before, but we also want to allow them to expand on already established, functional collaborations and build them in an innovative way into the future.

We’re also increasing mobility funding for existing consortia to make sure that every project funded by our program can lead to a durable collaboration and impactful events that connect with the wider world and the wider society, and that early-career researchers can truly benefit from the mobility and the capacity building. We plan to hold frequent flagship events, both in Switzerland and in India, to keep connecting the various partners of this program, from funding actors, beneficiaries of the calls and policy makers to prospective applicants and early-career researchers. So make sure you follow our website and social media; more updates are coming soon. I want to extend my sincere thanks to all our partner organizations here in India for their continued trust and collaboration, and to the research communities in both our countries that show a lot of enthusiasm and engagement in these programs.

So I encourage all interested researchers here in the room and out there to take advantage of these new opportunities and continue building the bridges that make our partnership so successful. Thank you. Thank you very much for your attention.

Nina Frey

Thank you so much. Thank you so much also from my side. My name is Nina Frey, or Katharina Frey, as my colleague, or former colleague, Markus Reubi, has introduced me. I am the executive director of ICAIN, the network already linking academic partners from Europe, Africa, and Singapore. And I’m very glad that I have many representatives from the network on the panel, and actually also one of the board members sitting in the second row, from the Finnish supercomputing center. Thank you, Damian, for coming. We have such a big panel representing ICAIN that there isn’t even a space for me, so I will be standing here. And I would like to invite my panelists to take the seats marked with their names.

I will introduce you and hand over the mic to you in a minute. Please have a seat. It turned out there was a seat for me. We will do a group photo at 12.25, so you will have to bear with us this afternoon. Now we can all take a picture. Thank you.

Thank you. Wonderful. Thank you so much. Thank you so much for bearing with us, for taking pictures. We actually talk about language, but let me think about an analogy to pictures. We’ll dive right into the importance of the language question, and obviously also the cultural and contextual embedding of AI in different settings. So again, allow me to extend my thanks to all my distinguished panelists for coming, and for allowing us to show how this ICAIN collaboration works from very different angles. The idea of the next 40 minutes is really to try to give a red line, I think you say, between the different summits. It started obviously in Bletchley, and I hope we can showcase how this topic of language and cultural diversity was somehow present in all the different summits and unites us all.

Since we’re here in your host country, allow me to hand over the mic to you to talk about the ICAIN collaboration, and also to share with us why Bhashini was founded. You had presented your work this morning to me and Alex. It was very impressive how it translated immediately, live, from Hindi to German to English. But please share with us in maybe the next five minutes what your work is, what it has been, and where you’re going. Thank

Amitabh Nag

you. Yeah, thank you very much, Nina, and thanks for inviting me here. Bhashini stands for Bhasha Interface for India, so it is basically looking at 22 languages which are enshrined in the Eighth Schedule of our Constitution, which basically says that we will have these languages as the languages to start off with for our work in the regions. We started off as a program for transcending the language barrier using artificial intelligence. In these 22 languages, we have been able to do a lot of work. We had our own challenges, but the methodology which we followed was to collaborate with 70 research institutes across the country, and the problem statement was divided between all the 70 research institutes.

We had five problems to work on. First was automatic speech recognition: the digital systems should be able to understand what we are speaking in all 22 languages. The second piece is text-to-text translation, again bidirectionally in all 22 languages. Third was text-to-speech, which basically means the digital system should be able to speak to you, again in 22 languages. Then we are looking at optical character recognition in 22 languages, and also our digital dictionary, because the vocabulary in all 22 languages was not digital. So there was an attempt to digitize all the vocabulary which is around; that includes names of places, people, companies, etc.

We have till now achieved 22 languages in all the modalities. We have also increased the number of languages. Incidentally, in India there are 100 languages which are spoken or written by at least 100,000-plus people. So our journey is not complete when we do 22 languages; we are moving ahead with more. We now have 36 languages on text, and we are going to add more languages as we move forward. We also have languages which don’t have a script, basically in the tribal areas. So we are attempting to digitize those also; one of them has been digitized and will be launched in the next few days. In all of this, we had one basic challenge, which was the non-availability of digital data.

The non-availability of digital data, which is the oil for AI models, was addressed, basically for the first time in the world, through brute-force digital data collection. What we had done was that we had about 200-odd people who would go down to the field and, you know, speak to the people on a certain subject, picking a picture or other things so that it becomes the topic of discussion. We would create the monolingual corpus by requesting them to write the same thing, or a bilingual corpus if they had two languages. And that is how we built the bare minimal digital data. Obviously, when we have done these things, the model is like a child.

It has only read 100 books, so it will be as intelligent as those 100 books. So we realized that over a period of time we need to collect more data; that means give the child a thousand books so it is more intelligent. And that journey continues. We have taken AI as a journey, but we haven’t waited for things to become perfect before launching them as products. We launched them and built narrow use cases. Narrow use cases in the sense that, okay, let’s build something for the farmers. I will try to give two examples for want of time. One is that we have built an interface for the farmers where farmers, in their own language, can ask a question about agricultural advisory, and he or she is answered in that particular language. It’s a voice-first, voice journey: I will be talking in voice, and the answers will be coming in voice. This is a deployed system, so it is actually a very large system. The other thing which we are working on, which is one of the things displayed here, is a project called Gyan Bharatam, where the manuscripts have been made interactive.

Plus, we have multiple other use cases; perhaps I will come to them during the discussion, but we have about 20-odd of them displayed in

Nina Frey

Thank you, Amitabh. Thank you so much. I somehow assumed everyone knows it, but obviously I should introduce you as well, so apologies for that. Mr. Amitabh Nag is the CEO of Bhashini, the national language initiative, and we will be collaborating; Alex will mention more on that later. But before that, I would turn a year back to Paris, where obviously Current AI was started, coming out of the Public Interest AI Working Group, if I recall correctly. So, Mrs. Aya Bedir, you say? Bedir, sorry for that. She’s the CEO, the quite recent CEO, of Current AI, a very, very important initiative, which among others we also want to thank for the topic that we’re talking about.

But please, Aya, I know you also come from a strong background in hardware. You are launching, I think, this afternoon something very impressive that also speaks to the importance of language diversity. Could you share with us some of your key focus interests, and also why you focus so much on hardware? Thank you.

Aya Bedir

Thank you so much for having me. So, my name is Aya Bedir. I did, yes, join recently, about a month and a half ago exactly, so I am really feeling the very warm welcome in India. Current AI was an initiative that came out of the French AI Summit. The founder, Martin Tisné, was the special envoy of President Macron at the summit, and the initiative essentially has a vision for AI that is global, that is collaborative, and that is collective. And so the idea is that we acknowledge that some of the biggest tech companies are really governing our lives and really governing AI and the way we consume it day-to-day; there are a handful of these companies, they are big, they have scale, they have a lot of financial resources, and they are very ambitious.

And so the initiative acknowledges that to be able to stand a chance of being an alternative and a counterpart to these large companies, we must fight scale with scale. And so obviously there is lots of interesting work happening in public interest AI around the world, but oftentimes the work is distributed, the work is decentralized, and sometimes it’s duplicative, and it’s not always additive.

And so as a result, Current AI has this vision that we need to bring together and bring more collaboration into the space, but also raise the level of ambition and of financial scale that is taken on. So Current AI is a public-private partnership between philanthropy, the private sector and government. It has initial commitments of about $400 million, but the ambition is to get to $2.5 billion and hopefully more. The initial commitments are from the French government. There are also multiple other government partners, including the Indian government, the Kenyan government, the Moroccan government, and many others, as well as the MacArthur Foundation, Ford Foundation, McGovern, and a few others, and the private sector, so Google DeepMind, Salesforce, and others.

So it really is a public -private partnership with the intention of kind of bringing everybody around the table that has sort of the same commitment to public interest AI, to AI that works for individuals and for the public good, and one of the main vehicles of doing that is really investing in open source. Language has been a priority for current AI ever since its inception. The initiative was called Multilingual Diversity, which I know is something everybody here is committed to, and we’ve been hearing a lot about over the past few days. I joined about a month ago, and I’m myself very passionate about the topic, and I sort of expanded the topic to be about culture, diversity, and culture preservation.

So it’s really not just about language. It’s also about acknowledging that culture exists in many facets. Language is one of them, but there are also behaviors, there are norms, there are artifacts, physical and digital, and there are many things that are digitized and non-digitized. And so we now talk about culture preservation as one of our big priorities, and it’s something that we’ll be doing a lot of work in. As part of the culture preservation work, when I came in there had already been conversations between Current AI and Bhashini about doing a collaboration together for the summit. And to be honest, I fell in love with the work that Amitabh and his team were doing and the care that they were taking with their data.

And I was like, oh, my gosh, this is so cool. And really, the fact that they were going to the source and getting a lot of this knowledge, not just data, this knowledge about the language from individuals and from communities themselves, no matter how small they were. And so we ended up collaborating on a device that will launch later today at 3.30 in Room 10. I hope you all can attend. I’m not going to say much about it because there’s a drumroll situation that will happen, so you all can come see for yourselves. But the intention of the device is to really get as close as possible to the individuals and the communities themselves.

There is one concern I have that could be kind of a negative repercussion, I think, of having so much attention on multilingual diversity in a society: that a lot of the big companies and big players have to do all the work. And so, you know, it’s interesting and positive that the big tech companies are saying we’re going to make commitments to more multilingual diversity and more languages.

That’s good. But oftentimes, when they are taking these leadership positions, there’s a brute-force kind of methodology that they deploy because of the scale at which they operate. And so oftentimes it’s about scraping data. Oftentimes it’s about taking data without licensing it. It’s about treating individuals and communities as data, whereas they are people and they are not data. And so that’s my concern in this area, and I believe that we have to get as close as possible to the communities themselves and invite them and support them in doing that kind of work themselves. So it’s really about them preserving their own cultures and languages, and not about us doing it for them in this somewhat condescending way.

I’ll also say one last thing, which is that I myself grew up in Beirut in Lebanon, a very tiny country, but one that everybody has heard of, sometimes for good and not-good reasons. The Arab world is also very concerned about AI and representation in AI. We have thousands of different cultures and dialects within Arab culture, and we also have varying degrees of resource availability across Arab countries. Some countries are very resourced financially from a government perspective; others have very scarce access to resources. So I’m also very concerned about thinking about AI that is more resilient, that operates from scarcity, from frugality, and from a limited amount of resources, and looking at that as a positive as opposed to a negative.

So that’s something that Current AI will be prioritizing in a big way and we hope to do more of. So hope to see you all at 3.30 and hope

Nina Frey

Thank you so much, Aya. Let me hand over, because you mentioned obviously the many announcements that were made, also from private companies, to start collecting data. I think it’s fantastic to see that governments can do that as well and that you also invest in this PPP. And allow me to hand over to my colleague sitting to my left, because I think you can showcase how public institutions like universities can also train a multilingual model from scratch. Scratch, not stretch. It was probably a stretch sometimes. Let me introduce you to Dr. Alex Ilic. He founded and is the executive director of the AI Center, and is a co-founder of ICAIN. Please, could you share your experiences with Apertus, which is this multilingual model, and maybe also mention something on Swiss AI and how we can then maybe present the Indian languages in Apertus next year.

Alex, please.

Alex Ilic

but basically we were able to build this model, and one of the key bottlenecks that we also identified is that it’s not just the infrastructure, where currently a lot of money is going in, but also the talent. Outside of big tech, you have maybe 100 people on the planet who have the experience and capabilities to build such foundation models, and that’s not enough. And I think that’s something where academia can change it, and that’s why it’s important that we don’t just need supercomputers and data centers for the companies; we need them to empower academia. This is very, very critical, and we have to push this very, very strongly. We named the model Apertus, Latin for open, because we want it to be a foundation where everyone can take it and build on top of it.

So it’s not something that we force upon someone, but something that can be a thriving community where each university, each project, each country gets a step further. And I think we will hear a little bit later from the perspective of the Apertus Foundation and also from Singapore and from India. We already heard… There are not many countries that recognize how important it is as public infrastructure that you really take it seriously to develop your own benchmarks and your own data sources as well. Because today, still, if you read LinkedIn or whatever, the majority is driven by benchmarks that the big companies are publishing. And surprise: in every benchmark they publish, they are, of course, the best, because they pick whatever metric is usable.

And I think this metric should be driven by what we want it to be in the cultures and the regions, to empower this. And so we have 1,000 languages included because we trained it with data from the Internet. As you know, the Internet is not the most diverse data source there is: 60% of the data in our training set is English, 40% is non-English. And so what we’re thinking about now strategically is how we can increase the number of languages that are close to the performance we see in English, step by step, for the next hundred languages and so on. And this is, I think, important because many companies that are going into that area and say, oh, we sponsor a data collection effort, they just do it on a best-effort basis.

Like, let’s do something, and you don’t know whether it actually moves the needle. So the next step for us is that, you know, with all the experiences in Bhashini and other parts, I think we can now find out very strategically how much it costs us to raise the bar significantly, not just make a check mark out of that. So that will also be the hope for connecting forward through the mission of ICAIN and also Geneva next year: that we can present how much progress we could make, where we stand today, what is really usable and economically usable, and to elevate this. I think that’s super critical on that side.

And, yeah, we’re also very happy to be here. Thank you. There is a version of SEA-LION that’s already built on Apertus, and we want to extend the collaborations now globally. For the researchers, we also have a very strong international program where we basically share our compute infrastructure. That’s also very unique, and we would like to see other countries do that too, because we know that, for where we stand with AI today, we’re maybe at 5% or 10% of the potential to train the next models that include more data, become more aware of the physical world and so on.

We need more compute. We need to team up, and I think that’s also a question of how we can collaborate more and share more. And with ICAIN, in the beginning we had these bottlenecks: compute, and that’s why we have strong compute representation; data and benchmarks; and talent. On these three capabilities we need to jointly increase, and whoever doesn’t have them should be able to easily get the data and the benchmarks. And I think that’s where we’re headed: enabling all sides to do it themselves, basically. Thank you.

Markus Reubi

Thank you so much, Alex, and also for emphasizing the talent need. If I can just add: you mentioned the talent, the capabilities in knowing how to train a model. And again, I am looking at the supercomputing representatives; it is also a talent in knowing how to build up such an HPC. So I think that is something we could also add to the table. But allow me to come back to the models themselves and the really very concrete applications. Now I am turning first left, to the north, to the Finns. Petri, you are here obviously as a founding member of ICAIN, also representing the ELIS Network, but also Team Finland, if I can say that. You were also a member of the UN Secretary-General’s High-Level Advisory Body on AI, where one of the recommendations was exactly this, that we collaborate. Could you mention maybe more from the perspective of the Nordics? You had already your own language models, but maybe you can also share some thoughts on why you recommended that to the world, if I can

Petri Myllymäki

Yes, thank you. So happy to be here. So indeed, as you all know, Nordic languages are not the biggest major languages in the world, so obviously we take the preservation of our languages and cultures very seriously. Talking about the High-Level Advisory Body of the UN, there was just upstairs a kind of handover to the new International Independent Scientific Panel on AI. So maybe one thing I learned in this UN advisory body was, and I didn’t know this before, that access to language and culture is a human right, one of the human rights that all the countries in the world have. And I think that’s something that we have to accept.

So to me this was a surprise, and a pleasant surprise, because language is already important; we operate with language. But like what Aya was saying, even more important is the culture behind the language. We have different value frameworks and norms in different countries. So if there’s a one-size-fits-all English-version AI that we all start to use, what is the value framework behind that? I think this is a very critical issue. Another thing I learned at the UN was that there are several global initiatives towards making this more accessible to all countries. Of the 193 UN member states, seven are included in all of these initiatives.

119 countries are included in none. So initiatives like ICAIN or Current AI, and this summit, are very important to make this more inclusive. I will now shamelessly steal a quote from Yoshua Bengio, who was just upstairs, saying that we need to make sure that all the countries in the world are invited to the dinner table as dining guests, not as part of the menu. I thought that was a great way of putting it.

Markus Reubi

Thank you so much, and thank you for sharing. I did not catch all of it, but I think it is a good point we can take up, because food, too, is obviously culturally diverse. So let me turn from the north to the south, and to Singapore. You joined quite recently, I think, at NTU Singapore, which is also the newest member of ICAIN. You had already developed, and I think you will share something on, the SEA-LION model, which is the well-known language model for the ASEAN region, and you have already collaborated with Apertus. If time allows, you could also say something on the importance of sovereignty and language.

Please.

Participant

It’s wonderful. At NTU Singapore we are the newest members of ICAIN, and it’s fantastic. I have only been at NTU Singapore for six months, but the conversation we are having here is the same conversation we are having there, about the importance of multilingual diversity, the importance of getting close to the ground, the importance of culture as well as tech. I am the dean of a college of humanities, arts and social sciences, and I am a historian. My college is in the lead, collaborating with computer science at NTU, engaging with ICAIN, and thinking about AI in the context we are talking about. So I just want to point to something very important: it is about culture, about cultural diversity and how AI models reflect culture, how we engage with culture and history, as well as simply technology.

And I think that is very evident in this conversation, so it is great to be part of this club. SEA-LION is a language model that covers 13 languages across Southeast Asia. In fact it includes Tamil, because Tamil is a Southeast Asian language: it is an important national language within Singapore, with aspirations to expand and potentially connect beyond to other parts of Asia. It is a nationally funded initiative, part of Singapore’s public infrastructure, but it connects regionally and has good connections with private-sector providers across Southeast Asia, in Indonesia, on various platforms, and so on. And, as we were hearing a moment ago, there are a number of different versions of it; one of them is built on Apertus, so there is a real synergy here.

And I just want to flag the connection between Singapore and Switzerland: they are both multilingual, multicultural, relatively small societies, so there is a very obvious collaboration there. Another point, echoing something I was saying earlier: when we think about AI and the relationship between culture and language, we are also interested in frugality, in using resources effectively, and in thinking about how we can draw on deep truths about language and culture without vast amounts of data, with relatively small amounts of data. We have languages like Laotian, Khmer, etc.

within SEA-LION, and colleagues are really thinking hard about how you leverage relatively small amounts of language data to produce an effective model. Just a couple of additional points, and I am looking at the clock. Sovereignty is the big word within this AI summit, and I am, in some ways, a historian of sovereignty at the moment. Sovereignty means power. It is a power that we want for ourselves, for our communities, for our nations and states. But in a sense it is also about individuals, and there is a complicated relationship between those two things. So I wanted to reflect on the importance of sovereignty in that we are talking about the sovereignty of societies that are neither the US nor China.

The two big superpowers, maybe. This discussion is about how we can think about a world that is multipolar and multicolored, and reflects the fact that sovereignty is actually dispersed in the world in which we live, and that is very important. That echoes Indian principles of non-alignment that go back to the 1940s and so on; I don’t know if I am allowed to use that phrase in today’s India, but it is a similar set of principles that we are talking about. So the dispersal of sovereignty and power is important, but as part of that we should also reflect on the limits of the nation state, and the limits of national approaches to language.

In that, we all live in environments in which people speak in complicated ways: they are multilingual societies in a minute-by-minute way. People code-switch. They speak Hindi one minute and English the next, or Swiss German, etc. Similarly, in Singapore, people speak Mandarin Chinese, then a Chinese dialect, and then English. So sovereignty is crucial, but if we are interested in the sovereignty of the individual and the power of individuals, then we need a more nuanced account of language that allows for things like code-switching, dialects, etc. And that is something that we are very much interested in at NTU.

Markus Reubi

Thank you so much, and thank you to all the speakers. Allow me also to give Annie the floor. You are from South Africa, now living in Switzerland and the US, and you lead these linkages between medicine and AI. In your work you explain very concretely what happens if you take a model trained in English on a tiny set of local data, and how you experience that in the medical field in reality. Could you share something on that, and obviously also on your role in ICAIN? Professor Annie Hartley is at EPFL at the moment, and also at Yale. Thank you, Annie.

Annie Hartley

Yeah, thank you very much. I will take it down to the ground, then, to the consequences: what really happens when you are at a patient’s bedside and you ask questions that are high stakes. Something I do to test these models in different places, because we are rolling out these tools in different hospitals around the world, is to ask the same very high-stakes question: how to treat diabetic ketoacidosis, which is a diabetic crisis, in a child. I did this recently in Ethiopia in a language that is not very well resourced, Afaan Oromo, and the model responded: thou shalt not eat insulin on a Tuesday.

And I did share this advice, because I thought it was actually very good advice: you should not eat insulin on any day, actually. But it comes to something that is really, really important, and I am stating the obvious. The model responds like that because it is obviously trained mostly on the Bible; that is what is available, the one book that exists in every single language in the world. So you get these biblical kinds of terms, but the Bible is not necessarily very accurate in medicine, or in other things, depending on where you are coming from. The point is that you cannot rely on these models to make these decisions, because they are inequitably inaccurate in the places that need them most. We know they will be inaccurate; the point is that if we are collecting this kind of information, we have to make an effort to collect it in the highest-stakes environments and in those contexts. If you have use cases for collecting language, it is interesting to collect historical texts, or to represent culture, of course, but something with much bigger urgency are the urgent, high-stakes decisions that we are making. People will believe that a model performs well if it merely speaks their language, and they might get a false sense of security if we do not actually train it to be accurate on the questions that people rely on these tools for the most.

So when we collect languages, and when we test these tools in reality, we have to make sure that we represent those kinds of contexts. And that is what we are doing. I lead a lab called LIGHTS, the Laboratory for Intelligent Global Health and Humanitarian Response Technology. So obviously I am interested in these high-stakes environments, and in cultures that are so underrepresented that they will never be represented by any kind of large commercial enterprise, right? No commercial entity has ever said: here is a great place to make money, that war zone. Okay, unfortunately some have, I suppose. But the point is that companies do not want to represent that kind of place, because it is not in their interest.

And this means it is so important for academia to play a role. We do not play a role just because we have expertise. We play a role because we do something that commercial entities cannot provide: we are neutral, and we create a neutral space for this kind of data collection, to represent the needs of people and to make sure that we can test it in reality. This is why we can do open science: because we do not have money in the game to lose. And the most important thing, when we do represent these languages, is not just to represent them and be happy about it, which is the first step, but to go the extra mile and actually test whether the languages are being represented as you expect them to be.

Some of my patients, for example: a model might speak their language, but does it speak their language in the way that they expect, and do they follow the advice or not? This is a really important thing to test in these high-stakes environments. My patients come to me in South Africa, where we speak 11 official languages, and in Kaga, the way of explaining certain things is very different, and sometimes it gets translated into English in a strange way. So one of my patients came to tell me: I have elephants running in my head. I know exactly how to respond, because that is my culture.

I am South African. But what would an AI respond, right? Or: I have a pregnancy in my knee. I am pregnant in my knee. That is what the patient came to tell me. And actually it does not come from a mistranslation; it comes from the way that people understand how their bodies work, and this is very, very cultural. What is the next most likely word after “pregnancy in my knee”, right? So it is really important that we understand how language works when it describes our bodies, and making sure that we get feedback from reality is what we are trying to do.

So, starting with ICAIN, we have a flagship project called MOVE. It stands for Massive Open Online Validation and Evaluation, and it is about getting real-world signals from real people in high-stakes decision-making processes, from our doctors, from people on the ground in different countries around the world, and learning how they are using any given tool. Because we are neutral, if any tool comes out, any new model, we can test it. And when it breaks, we do not just say this model is bad in this setting and that model is good; we really try to get that information and feed it back into the model to continuously improve it.

So: learning from reality, learning from the real workflows of how people use models. I think that is important, to represent reality, and not just the language but the reality that the language functions in. The last thing I would like to say is that this does cost a bit more money, and it is not the traditional way of working in science; people do not appreciate that implementation science is science. And it is such a fantastic opportunity: we can actually measure the impact of the models that we are making, feed that back into our models, and really create impact-driven models. Running these trials is ambitious, but we do need to start asking for different kinds of funding and being more ambitious. I think academia needs to be more ambitious, because we represent something that is very important these days, and very rare, which is this neutrality.

When OpenAI updates a model from 4.5 to 5, or to 5.1, did they ask your permission? No, right?

Did they ask the doctors who had validated those models for their context? No. We need control. We need to know how these tools work in reality, and we need to be able to control the tools. So sovereignty, for me, is control of the tools and control of the environment, and understanding how these models work in reality, so that we…

Markus Reubi

Thank you. Thank you so much, honestly. Thank you, everyone, for keeping the time, and for making sure that we are actually creating, and controlling, the menu, to also steal Professor Bengio’s words, and for contributing here. We will be more than happy to update you, hopefully next year, on our further work. Thank you, everyone, for joining and contributing to this collaborative work, for coming and staying with us, and thank you to the speakers.

Related Resources
Knowledge base sources related to the discussion topics (13)
Factual Notes
Claims verified against the Diplo knowledge base (7)
Confirmed (high)

“AI can only serve the public good if it serves all languages and all cultures.”

The knowledge base explicitly states that “AI can only serve the public good if it serves all language” confirming the claim.

Additional Context (medium)

“Linguistic exclusion is one of the most persistent barriers to digital participation.”

Language barriers are identified as a persistent challenge limiting participation in digital governance contexts, providing supporting context for the claim.

Confirmed (high)

“The Geneva AI Summit will take place in 2027.”

The knowledge base references the Geneva AI Summit in 2027, confirming the date.

Confirmed (high)

“The India AI Summit is scheduled for 2026.”

The India AI Impact Summit 2026 is mentioned in the knowledge base, confirming the year.

Additional Context (medium)

“The Paris 2025 public‑interest AI process is part of the international trajectory.”

A Paris AI Action Summit is referenced, though the knowledge base does not specify the 2025 date; it provides contextual support for a Paris‑based AI summit.

Confirmed (high)

“Switzerland’s contribution of the open‑source multilingual model Apertus, developed by ETH Zurich and EPFL, underpins inclusive digital public services.”

Apertus is described as a Swiss “radically open” multilingual model, confirming its existence and Swiss origin; the source adds detail about its open development process.

Additional Context (medium)

“ICANN provides equitable access to compute, data and multilingual models.”

ICANN’s role in fostering an inclusive and accessible internet infrastructure is highlighted, offering contextual support for its broader equitable‑access mandate, though the source does not mention compute or multilingual models specifically.

External Sources (76)
S1
How Multilingual AI Bridges the Gap to Inclusive Access — – Alex Ilic- Annie Hartley- Nina Frey – Markus Reubi- Amitabh Nag- Alex Ilic- Petri Myllymäki- Participant – Markus Re…
S2
Democratizing AI: Open foundations and shared resources for global impact — Bernard Maissen, Mary-Anne (“Annie”) Hartley, Mennatallah El-Assady Hartley specifically noted the challenge of competi…
S3
How Multilingual AI Bridges the Gap to Inclusive Access — – Alex Ilic- Annie Hartley – Aya Bedir- Annie Hartley
S4
How Multilingual AI Bridges the Gap to Inclusive Access — Thank you Amitabh. Thank you so much and I somehow assumed everyone knows it but obviously I should introduce you as wel…
S5
https://dig.watch/event/india-ai-impact-summit-2026/how-multilingual-ai-bridges-the-gap-to-inclusive-access — And I was like, oh, my gosh, this is so cool. and really the fact that they were going to sort of the source and getting…
S6
How Multilingual AI Bridges the Gap to Inclusive Access — – Markus Reubi- Amitabh Nag- Alex Ilic- Petri Myllymäki- Participant – Markus Reubi- Torsten Schwede- Aya Bedir- Alex I…
S7
IGF Retrospective – Past, Present, and Future — – **Markus Kummer** – Role/Title: Former MAG chair, head of the Secretariat from 2006-2010 | Area of expertise: Governme…
S8
WS #280 the DNS Trust Horizon Safeguarding Digital Identity — – **Participant** – (Role/title not specified – appears to be Dr. Esther Yarmitsky based on context)
S9
Leaders TalkX: Moral pixels: painting an ethical landscape in the information society — – **Participant**: Role/Title: Not specified, Area of expertise: Not specified
S10
Leaders TalkX: ICT application to unlock the full potential of digital – Part II — – **Participant**: Role/Title not specified, Area of expertise not specified
S11
Inclusive AI_ Why Linguistic Diversity Matters — -Amitabh Nag- CEO of Bhashini
S12
Digital Democracy Leveraging the Bhashini Stack in the Parliamen — – Kritika K.R.- Amitabh Nag – Prasanta Ghosh- Amitabh Nag
S13
https://dig.watch/event/india-ai-impact-summit-2026/how-multilingual-ai-bridges-the-gap-to-inclusive-access — I will introduce you and hand over the mic to you. In a minute. Please have a seat. turned out there was a seat for me y…
S14
How Multilingual AI Bridges the Gap to Inclusive Access — As we prepare for Geneva 2027, Switzerland views multilingual AI as a foundation for inclusive digital public services a…
S15
How Multilingual AI Bridges the Gap to Inclusive Access — Petri Myllymäki from the Finnish Supercomputing Centre and ELIS Network emphasized that access to language and culture i…
S16
How Multilingual AI Bridges the Gap to Inclusive Access — -Torsten Schwede- President of the Swiss National Science Foundation, involved in Indo-Swiss research collaboration
S17
https://dig.watch/event/india-ai-impact-summit-2026/how-multilingual-ai-bridges-the-gap-to-inclusive-access — As we prepare for Geneva 2027, Switzerland views multilingual AI as a foundation for inclusive digital public services a…
S18
AI for Democracy_ Reimagining Governance in the Age of Intelligence — Global governance of AI is a precursor for a democratic development and evolution. And we need to continue to develop an…
S19
From High-Performance Computing to High-Performance Problem Solving / Davos 2025 — A key point of discussion was the need for ecosystem development around quantum computing, involving collaboration betwe…
S20
Multilingual Internet: a Key Catalyst for Access & Inclusion | IGF 2023 Town Hall #75 — Education, government support, and enhanced infrastructure are also necessary to promote inclusivity and diversity in in…
S21
Scaling Trusted AI_ How France and India Are Building Industrial & Innovation Bridges — …one of our keynote speakers, they said autonomous weapons are going to AI -based autonomous …
S22
What is it about AI that we need to regulate? — The discussions across multiple IGF 2025 sessions revealed that over-reliance on AI-powered content moderation systems p…
S23
Responsible AI for Shared Prosperity — The balance between open-source development and community sovereignty presents ongoing challenges. While open-source app…
S24
WSIS Action Line C8: Multilingualism in the Digital Age: Inclusive Strategies for a People-Centered Information Society — Low to moderate disagreement level with high strategic significance. While speakers agreed on fundamental goals of lingu…
S25
WS #144 Bridging the Digital Divide Language Inclusion As a Pillar — The speakers demonstrated remarkably high consensus across multiple dimensions: the need for paradigm shift from English…
S26
Leaders TalkX: Local to global: preserving culture and language in a digital era — Government-led national strategies are essential for language preservation Goyal presents India’s Bhasani program as a …
S27
Global Perspectives on Openness and Trust in AI — “It was this project that brought together over a thousand researchers … to try and create an open source large langua…
S28
Main Topic 2 –  European approach on data governance  — – The intricacies of data ownership in medical and biomedical research, with Merquiol discussing the current ambiguities…
S29
AI, Data Governance, and Innovation for Development — A key challenge identified was the lack of locally relevant datasets, with panelists stressing the importance of develop…
S30
Ethical principles for the use of AI in cybersecurity | IGF 2023 WS #33 — Increased transparency in software and AI-based solution composition is supported. The initiative of a “software bill of…
S31
WS #110 AI Innovation Responsible Development Ethical Imperatives — – Addressing data ownership and concentration issues
S32
How Multilingual AI Bridges the Gap to Inclusive Access — The discussion shows remarkable consensus on goals (multilingual AI, cultural preservation, community empowerment) but r…
S33
How Small AI Solutions Are Creating Big Social Change — Employ both text and speech-based approaches to address low-resource languages, recognizing that many languages may be b…
S34
S35
Responsible AI for Shared Prosperity — Hybrid approach combining open-source model development with community-governed deployment to balance innovation with lo…
S36
Building Scalable AI Through Global South Partnerships — The institute’s breakthrough came through systematic re-evaluation, leading to three critical insights. First, governmen…
S37
Open Forum #33 Building an International AI Cooperation Ecosystem — Development | Economic | Capacity development Innovation Ecosystems and Practical Implementation The speaker argues th…
S38
Democratizing AI Building Trustworthy Systems for Everyone — Financial mechanisms | Artificial intelligence | Capacity development Natasha describes a collaborative initiative with…
S39
WS #119 AI for Multilingual Inclusion — Jesse Nathan Kalange: Okay, all right, thank you very much. And PAIAG, we also promote gender equality. So we make su…
S40
Searching for Standards: The Global Competition to Govern AI | IGF 2023 — Tomiwa Ilori:Thank you very much, Michael. And quickly to my presentation, I’ll be focusing more on the regional initiat…
S41
Main Session on Artificial Intelligence | IGF 2023 — Policy influence often comes from multilateral systems. They strive to improve their AI tools through an iterative appr…
S42
Inclusive AI For A Better World, Through Cross-Cultural And Multi-Generational Dialogue — Demands on policy exist without the building blocks to support its implementation Lack of infrastructure, skills, compu…
S43
WS #150 Language and inclusion – multilingual names — The experts agreed that while progress has been made, significant work remains to be done in areas like improving user e…
S44
Multilingualism — The promotion of multilingualism requires appropriate governance frameworks. The initial elements of such frameworks hav…
S45
Green AI and the battle between progress and sustainability — AI is increasingly recognised for its transformative potential and growing environmental footprint across industries. Th…
S46
Comprehensive Report: China’s AI Plus Economy Initiative – A Strategic Discussion on Artificial Intelligence Development and Implementation — One of the most striking revelations came from Yutong Zhang’s discussion of Moonshot AI’s resource efficiency in develop…
S47
Interdisciplinary approaches — AI-related issues are being discussed in various international spaces. In addition to the EU, OECD, and UNESCO, organisa…
S48
Artificial Intelligence & Emerging Tech — In conclusion, the meeting underscored the importance of AI in societal development and how it can address various chall…
S49
Open Forum #82 Catalyzing Equitable AI Impact the Role of International Cooperation — Multi-stakeholder cooperation and inclusive governance frameworks are essential
S50
Can we test for trust? The verification challenge in AI — ## Rapid-Fire Policy Recommendations Adams emphasized that current testing paradigms fail to account for how AI systems…
S51
When language models fabricate truth: AI hallucinations and the limits of trust — AI has come far from rule-based systems and chatbots with preset answers.Large language models (LLMs), powered by vast a…
S52
How Multilingual AI Bridges the Gap to Inclusive Access — In that, we all live in environments in which people speak complicated… They’re multilingual societies in a minute -by…
S53
ElevenLabs Voice AI Session &amp; NCRB/NPMFireside Chat — Language barrier and need for multilingual inclusion
S54
Pre 11: Freedom Online Coalition’s Principles on Rights-Respecting Digital Public Infrastructure — Transparency and public participation are essential for democratic DPI
S55
Switzerland launches Apertus, an open multilingual AI model — Switzerlandhaslaunchedits first large-scale open-source language model, Apertus, developed byEPFL,ETH Zurich, and theSwi…
S56
Democratizing AI: Open foundations and shared resources for global impact — – Mary-Anne Hartley- Leslie Teo Development | Legal and regulatory | Sociocultural The Swiss-made LLM represents the l…
S57
Open Forum #82 Catalyzing Equitable AI Impact the Role of International Cooperation — 4. Establish research programmes and joint funding initiatives
S58
https://dig.watch/event/india-ai-impact-summit-2026/how-multilingual-ai-bridges-the-gap-to-inclusive-access — And two weeks ago, the Swiss National Science Foundation, together with the Indian Department of Biotechnology and the I…
S59
Open Forum #36 Challenges & Opportunities for a Multilingual Internet — Government Initiatives for Promoting Multilingualism Need for government initiatives to promote multilingualism Pradee…
S60
Sovereign AI for India – Building Indigenous Capabilities for National and Global Impact — The Bharat GPT consortium exemplifies this approach, bringing together nine academic institutions through a Section 8 no…
S61
WS #119 AI for Multilingual Inclusion — Athanase Bahizire: Thank you so much. Very good question. Actually it’s very quick. You know what, these big AI models w…
S62
Open Forum #33 Building an International AI Cooperation Ecosystem — **Professor Dai Li Na** from the Shanghai Academy of Social Sciences presented a comprehensive case study of Shanghai’s …
S63
IGF 2025: Africa charts a sovereign path for AI governance — African leaders at theInternet Governance Forum (IGF) 2025 in Oslocalled for urgent action to build sovereign and ethica…
S64
Discussion Report: Sovereign AI in Defence and National Security — Examples include the lack of transparency in ChatGPT’s training data and alignment process, with multibillion dollar law…
S65
[Parliamentary Session 6] Leading the digital transformation journey: Dialogue with youth leaders — Dansa Kourouma: Thank you, Honorable Moderator. For my part, I would like to start by thanking the Saudi authorities …
S66
Multistakeholder digital governance beyond 2025 — Language barriers were identified as a persistent challenge limiting participation. Ahmed Farag noted the complexity of …
S67
How can Artificial Intelligence (AI) improve digital accessibility for persons with disabilities? — In summary, the speaker underscored the need for a commitment to universal design in technological innovations, a cultur…
S68
Paris AI Action Summit shifts focus to innovation, employment, and public good in AI governance — The recentAI Action Summit in Parismarked a turning point in global AI governance, shifting the focus from long-term exi…
S69
India unveils MANAV Vision as new global pathway for ethical AI — Narendra Modipresentedthe new MANAV Vision during the India AI Impact Summit 2026 in New Delhi, setting out a human-cent…
S70
Partnering on American AI Exports Powering the Future India AI Impact Summit 2026 — And that complements Micron’s manufacturing plan. in the U.S. Actually, as you look at our manufacturing plants in the U…
S71
AI Impact Summit 2026: Global Ministerial Discussions on Inclusive AI Development — And there was two examples during the gathering before. I want to give you, we have in the country the Center for Creati…
S72
Digital Embassies for Sovereign AI — Fasel highlighted Switzerland’s positioning, citing the country’s “neutrality, stability, data capabilities, and scienti…
S73
Opening remarks — Despite the fact that the principles and strategic path established at NetMundial in 2014 remain crucial, guiding curren…
S74
Leaders TalkX: Partnership Pivot: Innovating International Cooperation to Scale Digital Inclusion — Tripti Sinha:Thank you very much for the question. It’s a delight to be here. the panel. The fundamental power of the gl…
S75
Artificial intelligence (AI) – UN Security Council — Another critical area highlighted was the need forcreating inclusive platforms for global collaboration. This involves i…
S76
National Strategy for Artificial Intelligence — leverage our position as a nation with a digitally advanced population and business sector in order to take the lead in …
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
M
Markus Reubi
2 arguments · 122 words per minute · 932 words · 458 seconds
Argument 1
AI must serve all languages and cultures to ensure democratic participation (Markus Reubi)
EXPLANATION
Markus Reubi argues that artificial intelligence can only benefit the public good if it is inclusive of every language and culture. He frames linguistic exclusion as a persistent barrier to digital participation, making multilingual access a democratic imperative.
EVIDENCE
He states that AI can only serve the public good if it serves all languages and all cultures [4] and emphasizes that linguistic exclusion remains a persistent barrier, so ensuring multilingual access is not just a technical challenge but a democratic imperative [5].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need for AI to serve all languages and cultures as a democratic imperative is highlighted in [S1] and reinforced by inclusive AI discussions in [S11] and [S18].
MAJOR DISCUSSION POINT
AI must serve all languages and cultures to ensure democratic participation
AGREED WITH
Nina Frey, Petri Myllymäki, Aya Bedir, Alex Ilic, Participant, Annie Hartley, Amitabh Nag
Argument 2
Building multilingual models also requires expertise in high‑performance computing and talent development (Markus Reubi)
EXPLANATION
Reubi highlights that creating multilingual AI models demands specialized talent not only in model training but also in building and operating high‑performance computing infrastructure. He suggests that this expertise should be added to the discussion alongside model development.
EVIDENCE
He notes that talent is needed to know how to train a model and also to build supercomputing resources, pointing out the importance of HPC talent in addition to model expertise [220-222].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The importance of high-performance computing talent for multilingual models is discussed in [S19] and the need for shared compute infrastructure is noted in [S1].
MAJOR DISCUSSION POINT
Building multilingual models also requires expertise in high‑performance computing and talent development
AGREED WITH
Alex Ilic, Nina Frey
DISAGREED WITH
Aya Bedir, Alex Ilic
Nina Frey
1 argument, 125 words per minute, 827 words, 394 seconds
Argument 1
ICAIN network links institutions worldwide to promote language diversity in AI (Nina Frey)
EXPLANATION
Nina Frey describes ICAIN as a global network that connects academic partners from Europe, Africa, and Singapore, facilitating collaboration on language diversity in AI. She underscores the breadth of representation on the panel as evidence of this worldwide linkage.
EVIDENCE
She explains that ICAIN is a network linking academic partners from Europe, Africa, and Singapore [57] and notes the presence of many representatives from the network on the panel, including a board member from the Finnish Supercomputing Center [58].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
ICAIN’s role in linking global academic partners and sharing resources for multilingual AI is mentioned in [S1].
MAJOR DISCUSSION POINT
ICAIN network links institutions worldwide to promote language diversity in AI
AGREED WITH
Torsten Schwede, Markus Reubi, Alex Ilic, Aya Bedir, Amitabh Nag
Torsten Schwede
1 argument, 141 words per minute, 800 words, 338 seconds
Argument 1
Launch of three joint research calls (geosciences, social sciences, One Health) and a new Indo‑Swiss Research Framework Program to deepen bilateral science cooperation (Torsten Schwede)
EXPLANATION
Schwede announces three new Indo‑Swiss joint research calls covering geosciences, social sciences, and One Health, and introduces a longer‑term Indo‑Swiss Research Framework Program to support collaborative research across disciplines. He frames these initiatives as a milestone in strengthening bilateral scientific ties.
EVIDENCE
He announces three new joint research calls in geosciences, social sciences, and One Health [27-34] and presents the Indo-Swiss Research Framework Program as a strategic long-term collaboration, noting AI as a high-priority topic within it [41-44].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The announcement of three joint Indo-Swiss research calls and the new framework programme is documented in [S1].
MAJOR DISCUSSION POINT
Launch of three joint research calls (geosciences, social sciences, One Health) and a new Indo‑Swiss Research Framework Program to deepen bilateral science cooperation
AGREED WITH
Nina Frey, Markus Reubi, Alex Ilic, Aya Bedir, Amitabh Nag
Amitabh Nag
1 argument, 162 words per minute, 811 words, 300 seconds
Argument 1
Bhashini creates AI capabilities across 22 (now 36) Indian languages through grassroots data collection and delivers concrete applications such as farmer advisory and manuscript digitisation (Amitabh Nag)
EXPLANATION
Nag outlines the Bhashini initiative, which builds AI tools for 22 (expanding to 36) constitutionally recognized Indian languages by mobilising 70 research institutes for data collection. He cites practical use‑cases like a voice‑first farmer advisory service and the Gyan Bharatam manuscript digitisation project.
EVIDENCE
He explains that Bhashini targets 22 languages, covering ASR, text-to-text translation, TTS, OCR and a digital dictionary, with work coordinated across 70 research institutes [78-86]; the programme has already expanded to 36 languages and is adding tribal languages without scripts [89-95]; he describes the challenge of lacking digital data and the field-based data-collection effort that created monolingual and bilingual corpora [99-108]; finally, he gives examples of a farmer advisory voice interface and the Gyan Bharatam manuscript project as deployed applications [108].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Bhashini’s multilingual AI development, grassroots data collection, and applications like farmer advisory are described in [S1].
MAJOR DISCUSSION POINT
Bhashini creates AI capabilities across 22 (now 36) Indian languages through grassroots data collection and delivers concrete applications such as farmer advisory and manuscript digitisation
AGREED WITH
Markus Reubi, Nina Frey, Petri Myllymäki, Aya Bedir, Alex Ilic, Participant, Annie Hartley
Aya Bedir
1 argument, 161 words per minute, 1197 words, 445 seconds
Argument 1
Scaling AI must be community‑led, avoid exploitative data scraping, and prioritize cultural preservation and frugal, resource‑efficient solutions (Aya Bedir)
EXPLANATION
Bedir argues that the dominant big‑tech model of scaling AI through massive data scraping is problematic; instead, AI should be scaled through community‑led public‑private partnerships that respect cultural heritage and operate with limited resources. She stresses the need for frugal, resilient solutions especially for under‑resourced regions.
EVIDENCE
She critiques big-tech’s brute-force scaling and calls for fighting scale with scale, emphasizing community-led approaches [126-132]; she raises concerns about companies scraping data without licences and treating communities as mere data points [158-164]; she highlights the importance of frugal, scarcity-aware AI design for regions with limited resources [169-172].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Community-led, frugal AI and concerns about data scraping are addressed in [S23] and the need for culturally respectful AI is echoed in [S22].
MAJOR DISCUSSION POINT
Scaling AI must be community‑led, avoid exploitative data scraping, and prioritize cultural preservation and frugal, resource‑efficient solutions
AGREED WITH
Torsten Schwede, Nina Frey, Markus Reubi, Alex Ilic, Amitabh Nag
DISAGREED WITH
Alex Ilic, Torsten Schwede
Alex Ilic
1 argument, 188 words per minute, 770 words, 245 seconds
Argument 1
Apertus demonstrates the need for talent, compute resources, and community‑driven benchmarks; academia should lead the development of open multilingual models (Alex Ilic)
EXPLANATION
Ilic presents Apertus as an open, multilingual foundation model and points out bottlenecks in talent, high‑performance compute, and benchmark creation. He argues that academia, rather than only big‑tech, must be empowered to develop and share such models, and outlines plans to expand language coverage and assess cost‑effectiveness.
EVIDENCE
He identifies talent scarcity as a bottleneck, noting only about 100 people worldwide have the expertise to build foundation models [183-186]; he describes Apertus as an open model intended for community use [187-189]; he discusses current benchmark dominance by big-tech and the need for culturally relevant metrics, noting the training data is 60% English and 40% non-English [190-196]; he outlines strategic steps to increase language performance, evaluate cost, and share compute infrastructure through ICAIN collaborations [197-218].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Apertus as an open multilingual model, talent scarcity, compute needs, and benchmark challenges are detailed in [S1].
MAJOR DISCUSSION POINT
Apertus demonstrates the need for talent, compute resources, and community‑driven benchmarks; academia should lead the development of open multilingual models
AGREED WITH
Markus Reubi, Nina Frey
DISAGREED WITH
Annie Hartley
Petri Myllymäki
1 argument, 150 words per minute, 332 words, 132 seconds
Argument 1
Access to language and culture is a human right; AI initiatives must involve all nations, not just dominant players (Petri Myllymäki)
EXPLANATION
Myllymäki stresses that language and cultural access are fundamental human rights, referencing UN findings. He calls for inclusive AI initiatives that invite every country to the discussion table, warning against a one‑size‑fits‑all English‑centric approach.
EVIDENCE
He notes that Nordic languages are small but their preservation is taken seriously, and that access to language and culture is a human right according to UN insights [224-229]; he adds that the culture behind a language matters, cites the need for inclusive global initiatives, and quotes Yoshua Bengio about inviting all countries as guests rather than menu items [230-238].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Access to language and culture as a human right and the call for inclusive AI initiatives are emphasized in [S1] and [S11].
MAJOR DISCUSSION POINT
Access to language and culture is a human right; AI initiatives must involve all nations, not just dominant players
AGREED WITH
Markus Reubi, Nina Frey, Aya Bedir, Alex Ilic, Participant, Annie Hartley, Amitabh Nag
Participant
1 argument, 158 words per minute, 797 words, 301 seconds
Argument 1
The C‑Line model shows how resource‑efficient multilingual AI can respect national sovereignty, handle code‑switching, and serve diverse Southeast Asian societies (Participant)
EXPLANATION
The participant describes the C‑Line model, a multilingual system covering 13 Southeast Asian languages, built with limited data and designed to respect national sovereignty. He links the model to frugal AI, code‑switching realities, and the broader goal of multipolar, multicolored digital sovereignty.
EVIDENCE
He explains that C-Line reflects 13 Southeast Asian languages, is nationally funded, and built in synergy with Apertus, emphasizing cultural and resource-efficient design [250-263]; he then discusses sovereignty as power for societies and individuals, the need to accommodate code-switching and dialects, and the multipolar context of AI governance [265-281].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The C-Line model’s resource-efficient multilingual design and sovereignty focus are presented in [S1].
MAJOR DISCUSSION POINT
The C‑Line model shows how resource‑efficient multilingual AI can respect national sovereignty, handle code‑switching, and serve diverse Southeast Asian societies
AGREED WITH
Markus Reubi, Nina Frey, Petri Myllymäki, Aya Bedir, Alex Ilic, Annie Hartley, Amitabh Nag
Annie Hartley
1 argument, 185 words per minute, 1419 words, 459 seconds
Argument 1
Multilingual AI must be rigorously tested in critical health contexts; neutral, open‑science platforms like the MOVE project provide real‑world validation and feedback loops (Annie Hartley)
EXPLANATION
Hartley illustrates the risks of deploying multilingual AI in high‑stakes medical settings, citing a mis‑advice case in Ethiopia. She advocates for neutral, open‑science validation through the MOVE project, which gathers real‑world feedback from clinicians to continuously improve models.
EVIDENCE
She recounts testing an AI model on diabetic ketoacidosis in Ethiopia, where the model gave an incorrect, Bible-derived recommendation [288-295]; she describes leading the LIGHTS lab focused on high-stakes environments and the need for rigorous testing [300-311]; she emphasizes cultural nuances affecting medical advice [322-327]; and she introduces the MOVE (Massive Open Online Validation and Evaluation) project that collects real-world signals to validate and iteratively improve models [328-335].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The MOVE project for real-world validation of multilingual AI in health contexts is described in [S1].
MAJOR DISCUSSION POINT
Multilingual AI must be rigorously tested in critical health contexts; neutral, open‑science platforms like the MOVE project provide real‑world validation and feedback loops
AGREED WITH
Amitabh Nag, Participant, Alex Ilic
DISAGREED WITH
Alex Ilic
Agreements
Agreement Points
Multilingual AI is essential for democratic participation, human rights, and inclusive digital development
Speakers: Markus Reubi, Nina Frey, Petri Myllymäki, Aya Bedir, Alex Ilic, Participant, Annie Hartley, Amitabh Nag
AI must serve all languages and cultures to ensure democratic participation (Markus Reubi)
ICAIN network links institutions worldwide to promote language diversity in AI (Nina Frey)
Access to language and culture is a human right; AI initiatives must involve all nations, not just dominant players (Petri Myllymäki)
Scaling AI must be community‑led, avoid exploitative data scraping, and prioritize cultural preservation and frugal, resource‑efficient solutions (Aya Bedir)
Apertus demonstrates the need for talent, compute resources, and community‑driven benchmarks; academia should lead the development of open multilingual models (Alex Ilic)
The C‑Line model shows how resource‑efficient multilingual AI can respect national sovereignty, handle code‑switching, and serve diverse Southeast Asian societies (Participant)
Multilingual AI must be rigorously tested in critical health contexts; neutral, open‑science platforms like the MOVE project provide real‑world validation and feedback loops (Annie Hartley)
Bhashini creates AI capabilities across 22 (now 36) Indian languages through grassroots data collection and delivers concrete applications such as farmer advisory and manuscript digitisation (Amitabh Nag)
All speakers stress that AI must support every language and culture, framing linguistic inclusion as a democratic imperative, a human right, and a prerequisite for equitable digital participation and culturally appropriate services. They cite initiatives ranging from ICAIN’s global network and the Apertus open model to the C-Line regional system, Bhashini’s Indian-language platform, and health-sector validation, underscoring a shared belief that multilingual AI is foundational for inclusive societies. [4-5][57-58][224-229][126-132][158-164][190-196][250-263][288-295][78-86][89-95]
POLICY CONTEXT (KNOWLEDGE BASE)
This view aligns with UNESCO’s multilingualism governance framework and reflects calls for inclusive digital development in AI policy discussions such as AI for Development panels [S44][S29].
Building multilingual models requires specialised talent, high‑performance computing, and shared infrastructure
Speakers: Markus Reubi, Alex Ilic, Nina Frey
Building multilingual models also requires expertise in high‑performance computing and talent development (Markus Reubi)
Apertus demonstrates the need for talent, compute resources, and community‑driven benchmarks; academia should lead the development of open multilingual models (Alex Ilic)
ICAIN network links institutions worldwide to promote language diversity in AI (Nina Frey)
Markus Reubi and Alex Ilic both highlight the scarcity of expertise needed to train foundation models and the parallel need for supercomputing resources, while Nina Frey points to ICAIN’s role in linking compute infrastructure across institutions, indicating consensus that talent and HPC are critical bottlenecks that must be addressed collaboratively. [220-222][183-186][197-218][57-58]
POLICY CONTEXT (KNOWLEDGE BASE)
The need for specialised talent and high-performance compute is highlighted in reports on infrastructure gaps and sustainability concerns, e.g., the Green AI debate and the identified lack of compute resources in the Global South [S45][S42].
Collaborative, multistakeholder frameworks and funding mechanisms are essential to advance multilingual AI
Speakers: Torsten Schwede, Nina Frey, Markus Reubi, Alex Ilic, Aya Bedir, Amitabh Nag
Launch of three joint research calls (geosciences, social sciences, One Health) and a new Indo‑Swiss Research Framework Program to deepen bilateral science cooperation (Torsten Schwede)
ICAIN network links institutions worldwide to promote language diversity in AI (Nina Frey)
Announcement of three new joint calls under the Indo‑Swiss Joint Research Programme and upcoming Geneva AI Summit (Markus Reubi)
We have a very strong international program where we share basically our compute infrastructure (Alex Ilic)
Scaling AI must be community‑led, avoid exploitative data scraping, and prioritize cultural preservation and frugal, resource‑efficient solutions (Aya Bedir)
Bhashini’s grassroots data‑collection effort involving 70 research institutes and real‑world applications (Amitabh Nag)
All listed speakers converge on the need for joint research programmes, public-private partnerships, and multistakeholder networks (ICAIN, the Indo-Swiss framework, the Bhashini consortium) to fund, coordinate, and share resources for multilingual AI development. This reflects a shared belief that coordinated financing and governance structures are vital for progress. [27-34][41-44][57-58][17-19][22][211-218][130-133][81-86][99-108]
POLICY CONTEXT (KNOWLEDGE BASE)
Multistakeholder frameworks and joint funding are advocated in multiple policy fora, including IGF multistakeholder cooperation recommendations and the AI Innovation Responsible Development agenda [S32][S49][S35].
Real‑world validation and culturally appropriate testing are critical before AI deployment
Speakers: Annie Hartley, Amitabh Nag, Participant, Alex Ilic
Multilingual AI must be rigorously tested in critical health contexts; neutral, open‑science platforms like the MOVE project provide real‑world validation and feedback loops (Annie Hartley)
Ground‑up data collection and pilot applications (farmer advisory, manuscript digitisation) demonstrate practical deployment (Amitabh Nag)
Discussion of code‑switching, sovereignty, and real‑world usage in Southeast Asian societies (Participant)
Need for culturally relevant benchmarks and evaluation of language performance (Alex Ilic)
Speakers agree that AI models should be evaluated in actual use cases (health, agriculture, regional contexts) and that community-driven validation (e.g., MOVE) and culturally relevant benchmarks are necessary to ensure safety, relevance, and trust. [288-295][300-311][328-335][99-108][265-281][190-196][197-200]
POLICY CONTEXT (KNOWLEDGE BASE)
Real-world validation and culturally appropriate testing are emphasized in data-governance guidelines and testing-trust recommendations, noting the shortcomings of current testing paradigms across diverse contexts [S29][S50][S51].
Similar Viewpoints
Both stress that scarcity of specialised talent and access to high‑performance computing are major bottlenecks for multilingual AI, and that academia must be empowered to address them. [220-222][183-186][197-218]
Speakers: Markus Reubi, Alex Ilic
Building multilingual models also requires expertise in high‑performance computing and talent development (Markus Reubi)
Apertus demonstrates the need for talent, compute resources, and community‑driven benchmarks; academia should lead the development of open multilingual models (Alex Ilic)
Both frame language and cultural access as a fundamental human right and argue for inclusive, community‑driven AI development that resists top‑down, exploitative practices. [126-132][158-164][224-229]
Speakers: Aya Bedir, Petri Myllymäki
Scaling AI must be community‑led, avoid exploitative data scraping, and prioritize cultural preservation and frugal, resource‑efficient solutions (Aya Bedir)
Access to language and culture is a human right; AI initiatives must involve all nations, not just dominant players (Petri Myllymäki)
Both highlight ICAIN’s role in connecting global partners and sharing compute resources to enable multilingual AI research. [57-58][211-218]
Speakers: Nina Frey, Alex Ilic
ICAIN network links institutions worldwide to promote language diversity in AI (Nina Frey)
We have a very strong international program where we share basically our compute infrastructure (Alex Ilic)
Both present concrete, region‑specific multilingual AI systems that are built with limited resources and aim to serve local communities while respecting cultural and sovereign contexts. [99-108][250-263]
Speakers: Amitabh Nag, Participant
Bhashini creates AI capabilities across 22 (now 36) Indian languages… practical applications (Amitabh Nag)
C‑Line model shows resource‑efficient multilingual AI respecting sovereignty and handling code‑switching (Participant)
Both announce and endorse the Indo‑Swiss joint research initiatives as a mechanism to deepen bilateral scientific cooperation in AI and related fields. [27-34][41-44][17-19][22]
Speakers: Torsten Schwede, Markus Reubi
Launch of three joint research calls… Indo‑Swiss Research Framework Program (Torsten Schwede)
Announcement of three new joint calls under the Indo‑Swiss Joint Research Programme and upcoming Geneva AI Summit (Markus Reubi)
Unexpected Consensus
Public‑private partnership and government‑led funding are both seen as essential to scale multilingual AI responsibly
Speakers: Aya Bedir, Torsten Schwede
Scaling AI must be community‑led, avoid exploitative data scraping, and prioritize cultural preservation and frugal, resource‑efficient solutions (Aya Bedir)
Launch of three joint research calls… Indo‑Swiss Research Framework Program (Torsten Schwede)
Despite coming from different sectors (Aya representing a public-private initiative, Torsten a government-funded research programme), they converge on the view that collaborative funding models combining public resources and private expertise are necessary to advance multilingual AI, a convergence not explicitly anticipated earlier in the discussion. [130-133][41-44]
POLICY CONTEXT (KNOWLEDGE BASE)
Public-private partnerships and government-led funding are repeatedly cited as essential for scaling multilingual AI, as seen in Global South partnership models and private-sector collaborations like the Gates Foundation initiative [S36][S38][S37].
Overall Assessment

The discussion reveals a strong, cross‑regional consensus that multilingual AI is a democratic and human‑rights imperative, that talent and compute resources are critical bottlenecks, that collaborative funding and multistakeholder networks are essential, and that real‑world, culturally aware validation must precede deployment.

High consensus across technical, ethical, and policy dimensions, indicating a unified momentum toward coordinated, inclusive, and responsibly funded multilingual AI initiatives.

Differences
Different Viewpoints
How to scale multilingual AI – community‑led frugal approaches versus high‑performance computing and talent‑intensive approaches
Speakers: Aya Bedir, Markus Reubi, Alex Ilic
Scaling AI must be community‑led, avoid exploitative data scraping, and prioritize cultural preservation and frugal, resource‑efficient solutions (Aya Bedir)
Building multilingual models also requires expertise in high‑performance computing and talent development (Markus Reubi)
Apertus demonstrates the need for talent, compute resources, and community‑driven benchmarks; academia should lead the development of open multilingual models (Alex Ilic)
Aya argues that AI should be scaled through community-led public-private partnerships that avoid big-tech data-scraping and use frugal solutions, while Markus stresses that multilingual model development needs specialized talent for both model training and supercomputing infrastructure, and Alex highlights the scarcity of talent and compute and calls for academia-driven open models, showing a clash of preferred scaling strategies [126-164][220-222][183-186].
POLICY CONTEXT (KNOWLEDGE BASE)
The tension between community-led frugal scaling and resource-intensive high-performance approaches mirrors debates in the Multilingual AI Bridge report and Green AI literature, with examples of low-resource strategies and efficient resource use in China [S32][S45][S46][S33].
Role of big‑tech versus academia/public sector in driving multilingual AI development
Speakers: Aya Bedir, Alex Ilic, Torsten Schwede
Scaling AI must be community‑led, avoid exploitative data scraping, and prioritize cultural preservation and frugal, resource‑efficient solutions (Aya Bedir)
Apertus demonstrates the need for talent, compute resources, and community‑driven benchmarks; academia should lead the development of open multilingual models (Alex Ilic)
Launch of three joint research calls (geosciences, social sciences, One Health) and a new Indo‑Swiss Research Framework Program to deepen bilateral science cooperation, with AI as a high‑priority topic (Torsten Schwede)
Aya warns against big-tech’s brute-force data-scraping and promotes community-led PPPs, Alex calls for academia to take the lead while noting big-tech dominance in benchmarks, and Torsten focuses on government-funded research programmes that do not address big-tech practices, revealing differing views on who should steer AI development [126-164][183-186][27-34][41-44].
POLICY CONTEXT (KNOWLEDGE BASE)
The role of big-tech versus academia/public sector is a recurring point of disagreement in multistakeholder discussions, with calls for balanced contributions from industry, academia, and governments [S32][S41][S48].
Approach to validation of multilingual AI in high‑stakes health contexts
Speakers: Annie Hartley, Alex Ilic
Multilingual AI must be rigorously tested in critical health contexts; neutral, open‑science platforms like the MOVE project provide real‑world validation and feedback loops (Annie Hartley)
Apertus demonstrates the need for talent, compute resources, and community‑driven benchmarks; academia should lead the development of open multilingual models (Alex Ilic)
Annie stresses the necessity of real-world, high-stakes validation of AI models in healthcare, while Alex focuses on model development, talent, and benchmark creation without explicit emphasis on health-specific validation, indicating a methodological divergence [288-295][183-186].
POLICY CONTEXT (KNOWLEDGE BASE)
Validation in high-stakes health contexts is governed by European data-governance frameworks that balance GDPR with medical research needs, underscoring the need for rigorous testing standards [S28][S50].
Unexpected Differences
Resource‑intensive high‑performance computing versus frugal, low‑resource AI scaling
Speakers: Markus Reubi, Aya Bedir
Building multilingual models also requires expertise in high‑performance computing and talent development (Markus Reubi)
Scaling AI must be community‑led, avoid exploitative data scraping, and prioritize cultural preservation and frugal, resource‑efficient solutions (Aya Bedir)
It is unexpected that a speaker from a high-resource nation (Switzerland) emphasizes the need for supercomputing talent, while another speaker advocates for low-resource, community-driven approaches, revealing a tension between resource-rich and resource-constrained visions for multilingual AI (see [220-222][126-164]).
POLICY CONTEXT (KNOWLEDGE BASE)
Resource-intensive versus frugal AI scaling is addressed in sustainability debates, highlighting the environmental impact of large models and the emergence of efficient training methods [S45][S46][S33].
Data ownership and ethical collection methods
Speakers: Aya Bedir, Amitabh Nag
Scaling AI must be community‑led… avoid treating individuals and communities as data (Aya Bedir)
We built the monolingual and bilingual corpora by field workers collecting data from people; this was the first brute‑force digital data collection effort (Amitabh Nag)
While both aim to gather data for multilingual AI, Aya warns against treating communities merely as data sources, whereas Amitabh describes a large-scale field data collection that could be perceived as treating people as data points, highlighting an unexpected ethical tension (see [158-164][99-105]).
POLICY CONTEXT (KNOWLEDGE BASE)
Data ownership and ethical collection are central to AI governance, with EU GDPR considerations, ethical data-collection principles, and calls to document data provenance in AI systems [S28][S29][S30][S31].
Overall Assessment

The discussion shows strong consensus on the importance of multilingual AI for democratic participation, cultural preservation, and human rights. However, speakers diverge sharply on the means to achieve this—ranging from high‑performance, talent‑intensive, and compute‑heavy strategies to frugal, community‑led, and ethically cautious approaches. Additional disagreements concern the role of big‑tech versus academia/public sector and the necessity of rigorous health‑sector validation.

Moderate to high methodological disagreement. While goals are aligned, the contrasting visions on scaling, resource allocation, and ethical data practices could impede coordinated action unless a hybrid framework is adopted that balances high‑tech capabilities with community‑driven, frugal solutions.

Partial Agreements
All speakers agree that multilingual AI is essential for democratic participation, cultural preservation, and human rights, but they diverge on implementation pathways—field data collection (Amitabh), community‑led PPPs (Aya), open‑source academic models (Alex), resource‑efficient national models (Participant), and high‑performance compute (Markus) (see [4][5][78-86][126-164][250-263][224-229]).
Speakers: Markus Reubi, Amitabh Nag, Aya Bedir, Participant, Alex Ilic, Petri Myllymäki
AI must serve all languages and cultures to ensure democratic participation (Markus Reubi)
Bhashini creates AI capabilities across 22 (now 36) Indian languages through grassroots data collection and delivers concrete applications such as farmer advisory and manuscript digitisation (Amitabh Nag)
Scaling AI must be community‑led, avoid exploitative data scraping, and prioritize cultural preservation and frugal, resource‑efficient solutions (Aya Bedir)
The C‑Line model shows how resource‑efficient multilingual AI can respect national sovereignty, handle code‑switching, and serve diverse Southeast Asian societies (Participant)
Apertus demonstrates the need for talent, compute resources, and community‑driven benchmarks; academia should lead the development of open multilingual models (Alex Ilic)
Access to language and culture is a human right; AI initiatives must involve all nations, not just dominant players (Petri Myllymäki)
Both speakers support strengthening bilateral cooperation and continuous collaboration, but Torsten focuses on discipline‑specific research funding, whereas Markus emphasizes a broader, globally inclusive AI governance agenda (see [27-34][41-44][8][9]).
Speakers: Torsten Schwede, Markus Reubi
Launch of three joint research calls (geosciences, social sciences, One Health) and a new Indo‑Swiss Research Framework Program to deepen bilateral science cooperation (Torsten Schwede)
Our shared objective is continuity, cooperation and genuinely global approach to AI governance (Markus Reubi)
Takeaways
Key takeaways
Multilingual AI is framed as a democratic imperative; AI must serve all languages and cultures to ensure inclusive participation (Markus Reubi, Petri Myllymäki).
The Indo‑Swiss partnership is deepening with three new joint research calls (geosciences, social sciences, One Health) and the launch of an Indo‑Swiss Research Framework Program, plus new Explore/Experiment/Expand grants (Torsten Schwede).
India’s Bhashini initiative demonstrates a large‑scale, grassroots effort to create speech, translation, text‑to‑speech, OCR and lexical resources across 22 (now 36) languages, delivering concrete services such as farmer advisory and manuscript digitisation (Amitabh Nag).
Current AI exemplifies a public‑private partnership that aims to scale responsibly, emphasizing cultural preservation, community‑led data collection, and frugal, resource‑efficient solutions while warning against exploitative data scraping (Aya Bedir).
Open, academic‑driven multilingual foundation models like Apertus are needed; challenges include limited talent, compute, and community‑defined benchmarks (Alex Ilic, Markus Reubi).
Language access is a human right; global AI initiatives must involve all nations, not just dominant tech players (Petri Myllymäki).
Regional models such as Singapore’s C‑Line illustrate how multilingual AI can respect national sovereignty, handle code‑switching, and be built with limited data resources (Participant).
High‑stakes applications, especially in health, require rigorous real‑world validation; neutral open‑science platforms like the MOVE project provide feedback loops to ensure safety and cultural relevance (Annie Hartley).
Future milestones include the Geneva AI Summit 2027 and continued collaboration among ICAIN, Swiss institutions (ETH, EPFL), Indian partners, and other global stakeholders.
Resolutions and action items
* Launch of three Indo‑Swiss joint research calls (geosciences, social sciences, One Health).

* Establishment of the Indo‑Swiss Research Framework Program for ongoing bilateral collaboration.

* Introduction of Explore, Experiment, and Expand grant schemes to foster new and existing collaborations.

* Increase of mobility funding for researchers within Indo‑Swiss projects.

* Announcement of upcoming flagship events in Switzerland and India to maintain network engagement.

* Deployment of Bhashini’s multilingual AI services (farmer advisory, Gyan Bharatam manuscript platform).

* Current AI to unveil a collaborative device with Bhashini (scheduled for 15:30, Room 10).

* Commitment to develop and share the open multilingual foundation model Apertus, including benchmarks and compute resources.

* Plan to expand Apertus to additional languages and improve performance beyond the English baseline.

* MOVE project to collect real‑world validation data from high‑stakes medical settings and feed it back into model improvement.

* Participants encouraged to follow websites and social media for updates and to submit proposals to the new calls.
Unresolved issues
* Sustainable financing and cost‑effectiveness of scaling multilingual data collection beyond pilot languages.

* How to ensure community‑led data gathering without resorting to large‑scale, unlicensed scraping by big tech firms.

* Methods for achieving high model performance with limited data (frugal AI) across low‑resource languages.

* Standardisation of benchmarks that reflect cultural and contextual relevance rather than generic English‑centric metrics.

* Balancing national sovereignty with the need for interoperable, globally useful multilingual models.

* Technical solutions for handling code‑switching and dialectal variation in real‑time applications.

* Long‑term governance structure for open‑source multilingual models and the role of academia versus industry.

* Mechanisms to continuously validate and monitor AI safety in high‑stakes domains such as healthcare.
Suggested compromises
* Adopt a public‑private partnership model (Current AI) that combines philanthropic, governmental, and industry resources to achieve scale while maintaining community control.

* Use the Explore/Experiment/Expand grant framework to allow both blue‑sky, high‑risk projects and incremental expansion of proven collaborations.

* Leverage existing open models (Apertus) as a shared foundation, enabling regional partners to fine‑tune for local languages without rebuilding from scratch.

* Implement frugal AI approaches that maximise impact with minimal data and compute, addressing resource constraints of low‑income regions.

* Encourage neutral, open‑science validation platforms (MOVE) to provide real‑world feedback, balancing rapid deployment with safety and cultural accuracy.
Thought Provoking Comments
AI can only serve the public good if it serves all languages and all cultures. Linguistic inclusion is a democratic imperative.
Frames multilingual AI not just as a technical challenge but as a fundamental democratic right, setting the ethical baseline for the whole discussion.
Established the overarching theme of the summit, prompting subsequent speakers to justify their projects in terms of inclusion and democratic access rather than pure innovation.
Speaker: Markus Reubi
One Health reflects a holistic approach needed to understand the interconnected health of humans, animals, and the environment. The challenges we face know no borders, and international collaboration is indispensable.
Introduces a concrete, cross‑disciplinary research area where AI can have global impact, linking health, ecology, and data sharing.
Shifted the conversation from abstract policy to a tangible research agenda, leading participants to consider how multilingual AI can support such interdisciplinary work.
Speaker: Torsten Schwede
We had about 200‑odd people go into the field, collect speech, pictures, and create monolingual or bilingual corpora – a brute‑force data collection because digital data simply didn’t exist.
Highlights the practical difficulty of building language resources from scratch and the innovative grassroots methodology used to overcome data scarcity.
Prompted recognition of the labor‑intensive nature of multilingual AI, influencing later remarks about the need for community‑driven data and validation.
Speaker: Amitabh Nag
Big tech often treats individuals and communities as data, a condescending approach. We must get as close as possible to the communities themselves and support them to preserve their own cultures and languages.
Challenges the prevailing corporate model of data collection, raising ethical concerns about agency, consent, and cultural respect.
Created a turning point toward ethical scrutiny; subsequent speakers (e.g., Alex Ilic, Annie Hartley) emphasized community involvement, open‑source, and neutral validation.
Speaker: Aya Bedir
Only about a hundred people worldwide have the expertise to build foundation models. Academia must empower talent, not just provide compute.
Identifies a critical talent bottleneck and calls for academic empowerment, expanding the discussion beyond infrastructure to human capital.
Deepened the analysis of barriers to multilingual AI, leading to calls for shared benchmarks, talent development, and collaborative compute resources.
Speaker: Alex Ilic
Access to language and culture is a human right. We must invite all countries to the dinner table as guests, not just as part of the menu.
Elevates multilingual AI to a rights‑based framework, reinforcing inclusion as a moral imperative rather than a technical optionality.
Reinforced the democratic framing introduced by Markus, and inspired later remarks on sovereignty and code‑switching from the NTU participant.
Speaker: Petri Myllymäki
Sovereignty is about power for societies and individuals; we need nuanced models that handle code‑switching and dialects, not just monolithic national languages.
Introduces the complex notion of linguistic sovereignty and the technical challenge of modeling fluid, multilingual realities.
Shifted the tone toward technical nuance and political context, prompting Markus to segue into medical implications and broader governance concerns.
Speaker: Participant (NTU Singapore)
When an AI model told me ‘thou shalt not eat insulin on a Tuesday’ in a low‑resource language, it showed how dangerous inaccurate models are in high‑stakes medical settings. We need real‑world validation (MOVE) and neutral, open‑science approaches.
Provides a vivid, concrete example of AI failure in a critical domain, underscoring the necessity of validation, neutrality, and community‑driven data.
Served as a climax of the discussion, moving the conversation from policy and research calls to immediate real‑world risk, reinforcing earlier ethical concerns and prompting the final wrap‑up.
Speaker: Annie Hartley
Overall Assessment

The discussion was shaped by a series of escalating insights that moved from a high‑level democratic framing of multilingual AI to concrete ethical, technical, and societal challenges. Markus Reubi’s opening set the tone of inclusion as a right; Aya Bedir’s critique of big‑tech data practices and Annie Hartley’s medical failure story acted as turning points that deepened the ethical dimension. Contributions from Amitabh Nag, Alex Ilic, and the NTU participant highlighted practical data‑scarcity solutions, talent bottlenecks, and sovereignty complexities, while Petri Myllymäki’s human‑rights articulation reinforced the moral urgency. Together, these comments redirected the conversation from abstract announcements to a nuanced, rights‑based, and implementation‑focused dialogue, culminating in a consensus that collaborative, community‑centered, and validated multilingual AI is essential for equitable global impact.

Follow-up Questions
How can we increase the number of languages with performance comparable to English in multilingual models?
Improving language coverage is essential for truly inclusive AI that serves all linguistic communities.
Speaker: Alex Ilic
What is the cost and resource requirements to raise language performance significantly for low‑resource languages?
Understanding financial and computational needs is crucial for planning scalable multilingual initiatives.
Speaker: Alex Ilic
How can benchmarks be designed to reflect cultural and regional needs rather than corporate metrics?
Culturally relevant evaluation criteria ensure models are useful and fair for diverse societies.
Speaker: Alex Ilic
How can we ensure ethical data collection that respects communities, avoiding treating them as mere data sources?
Ethical sourcing protects individual rights and builds trust with language‑speaking communities.
Speaker: Aya Bedir
How can AI be developed to operate effectively under scarcity and frugality?
Frugal AI enables deployment in low‑resource settings, expanding benefits beyond wealthy regions.
Speaker: Aya Bedir
How can communities be directly involved in preserving their own languages and cultures, avoiding top‑down approaches?
Community‑led preservation ensures authenticity and empowerment rather than paternalistic interventions.
Speaker: Aya Bedir
How can multilingual models be evaluated and validated in high‑stakes medical contexts to ensure safety and accuracy?
Medical applications demand rigorous testing to prevent harmful errors in patient care.
Speaker: Annie Hartley
What methods can be used to collect and validate real‑world usage data (e.g., via the MOVE project) to continuously improve models?
A feedback loop from actual deployments helps refine models and measure real impact.
Speaker: Annie Hartley
How should AI systems handle code‑switching and dialectal variation in multilingual societies?
Real‑world language use often mixes languages; models must reflect this to be usable.
Speaker: Participant from Singapore (NTU)
How does sovereignty intersect with multilingual AI governance and individual control over AI tools?
Understanding sovereignty issues is key for equitable power distribution and respecting national and individual autonomy.
Speaker: Participant from Singapore (NTU)
What strategies can increase the talent pool for building foundation models beyond big‑tech companies?
Addressing talent shortages is necessary for broader academic and regional participation in AI development.
Speaker: Alex Ilic, Markus Reubi
How can academia acquire compute resources comparable to industry for training large models?
Access to high‑performance computing is a bottleneck for university‑led AI research.
Speaker: Alex Ilic
How can the Bhashini initiative scale to cover all 100+ Indian languages, especially those without scripts?
Extending coverage ensures linguistic inclusion for the full diversity of India’s population.
Speaker: Amitabh Nag
What effective methods exist for data collection in low‑resource languages (e.g., field work, community engagement)?
Robust data pipelines are foundational for building accurate multilingual models.
Speaker: Amitabh Nag
How can the impact of multilingual AI on agricultural advisory for farmers be measured and improved?
Demonstrating tangible benefits validates the utility of language technologies in livelihoods.
Speaker: Amitabh Nag
How can long‑term Indo‑Swiss AI collaborations be structured and funded, especially for AI research topics?
Sustained bilateral programs are needed to maintain momentum and address shared challenges.
Speaker: Torsten Schwede
How can public‑private partnership models balance public‑interest goals with private‑sector scale?
Effective PPPs can marshal resources while safeguarding societal values.
Speaker: Aya Bedir
How can we ensure that multilingual AI initiatives move beyond symbolic check‑marks to substantive improvements?
Measuring real performance gains prevents tokenism and drives meaningful progress.
Speaker: Alex Ilic
How can open‑source multilingual models like Apertus be made economically usable for diverse stakeholders?
Affordability and accessibility are vital for widespread adoption across academia and industry.
Speaker: Alex Ilic
How can AI incorporate broader cultural preservation (behaviors, artifacts, norms) beyond just language?
A holistic approach respects the full spectrum of cultural heritage.
Speaker: Aya Bedir
Why does Current AI focus on hardware as a strategic priority?
Understanding the emphasis on hardware clarifies the initiative’s infrastructure roadmap.
Speaker: Aya Bedir
What experiences does Alex Ilic have with Apertus and how can Indian languages be incorporated in the next year?
Sharing practical insights will guide future collaborative extensions of the model.
Speaker: Nina Frey
What are the Nordic recommendations for multilingual AI and why were they made?
Nordic policy perspectives can inform global governance and language‑preservation strategies.
Speaker: Petri Myllymäki
How does sovereignty relate to language models and AI governance?
Exploring sovereignty helps address power dynamics and control over AI technologies.
Speaker: Participant from Singapore (NTU)
How can multilingual AI be applied in high‑stakes medical scenarios, and what is Annie Hartley’s role in ICAIN?
Linking medical use‑cases with governance roles clarifies pathways for responsible deployment.
Speaker: Markus Reubi

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

How nonprofits are using AI-based innovations to scale their impact

Session at a glance: Summary, keypoints, and speakers overview

Summary

The panel, moderated by Manohar Sreekanth, examined a four-month AI cohort program run from September to December that was anchored by Project Tech for Dev and involved seven nonprofit organizations building AI use cases [50-53]. Erica Arya explained that the cohort model was chosen to provide hands-on mentorship, capture learnings from a small group, enable peer sharing, and reuse intellectual property while lowering barriers such as lack of engineering resources [64-71][73-79][82-84][85-94]. Tamina Madon described how the Agency Fund applied the accelerator-style cohort approach common in Silicon Valley, creating a shared pool of technical and product-management staff to supply resources that individual NGOs could not afford [106-129][130-138].


Mainak Roy illustrated the Simple Education Foundation’s challenge of giving teachers context-specific pedagogical advice and how they built a WhatsApp-based chatbot to deliver evidence-based recommendations [155-163][164-168]. Steven Suting highlighted the Quest Alliance’s problem of heterogeneous learners and teacher shortages, prompting development of an AI system that emulates human cues to personalize learning pathways [166-180][181-186]. Preetam Sukumar recounted Avanti Fellows’ shift from improving student test reports to generating AI-summarized mentor scripts, noting pilot deployment with 15 teachers and issues with LLM hallucinations that required prompt-engineering [185-200][207-214][216-228].


The program incorporated a mentorship model pairing each nonprofit with one or two mentors, and introduced guardrails such as slur-lists and conversation filters to ensure responsible AI behavior [236-244][245-247]. Erica added that two knowledge partners, Digital Future Labs and Statle, were engaged to embed responsible-AI principles and safety considerations from the outset of the cohort [300-308][309-312]. Participants reported that having a modular blueprint and shared frameworks accelerated prototyping, prevented analysis paralysis, and facilitated cross-NGO collaboration, exemplified by health NGOs jointly building a high-risk pregnancy predictor [319-323][368-381].


Across the NGOs, a common learning was to focus on existing pain points rather than chasing AI hype, to leverage open-source tools, and to adopt evaluation frameworks such as the “golden dataset” for quality control [400-404][410-419][426-433][449-454]. Manohar noted that such early-stage collaboration is rare in the for-profit sector, underscoring the unique value of the social-sector cohort model [387-389], and invited audience members to explore the program’s published documentation for further guidance [455-456]. The discussion concluded that the cohort’s blend of mentorship, shared resources, responsible-AI integration, and collaborative learning provided a practical roadmap for nonprofits seeking to adopt AI responsibly and at scale [455-456].


Keypoints

Major discussion points


Purpose and structure of the AI Cohort program – The moderator frames the session around a four-month AI cohort run by Project Tech for Dev, describing its goals, the number of NGOs involved, and the agenda for the discussion [50-55]. Erica then outlines the program’s origins, its open-application screening, free pilot nature, and the emphasis on mentorship and peer learning [70-78][84-93].


Why a cohort model and funder involvement – Both Erica and Tamina explain that cohort-based learning enables deep-rooted feedback, shared resources, and faster iteration, drawing on experiences from the Data Catalyst program and Silicon Valley accelerator models [70-78][106-118]. Tamina adds that the Agency Fund creates a shared pool of technical staff and product managers to lower barriers for NGOs [123-130].


AI use-case implementations in education


Simple Teacher Buddy (Mainak Roy) – a WhatsApp-based chatbot that helps teachers select evidence-based classroom strategies [151-162]; later challenges with the bot’s “hi” trigger and need for guardrails are discussed [241-246].


Quest Alliance (Steven Suting) – a data-driven digital learning platform that personalises pathways for learners and explores behaviour-change through AI-driven signalling [165-180][253-266].


Avanti Fellows (Preetam Sukumar) – an AI-generated mentorship script to replace teacher-student conversations, currently in pilot/roll-out with issues around LLM hallucinations [185-200][207-214].


Technical and ethical challenges plus learnings – Hallucinations in LLM outputs and bot glitches are highlighted as major hurdles [209-214][241-246]. The program’s built-in responsible-AI and AI-safety partners (Digital Future Labs, Statle) are cited as a way to embed safeguards from the start [300-307]. Participants stress the importance of evaluation frameworks (user, impact, “golden set”) to ensure trust and scalability [272-279][418-420].


Practical recommendations for NGOs – Panelists advise NGOs to start from concrete pain points rather than chasing AI for its own sake, to leverage open-source tools, avoid reinventing solutions, and use collaborative learning to reduce duplication [400-404][426-433][405-407].


Overall purpose / goal


The discussion was convened to review the AI Cohort program (its genesis, design, and funding model), showcase concrete AI pilots from participating NGOs, surface the practical challenges they faced, and distil actionable insights that can guide other nonprofit organisations in adopting AI responsibly and effectively [50-55][290-307][400-404].


Tone of the conversation


Opening – Light-hearted and experimental, with the moderator joking about delegating introductions to an AI [5-7].


Middle – Collaborative and reflective; speakers share detailed narratives of their projects, acknowledge difficulties, and discuss technical/ethical nuances in a constructive manner [151-214][253-266][300-307].


Closing – Summative and encouraging, emphasizing lessons learned, offering concrete advice, and inviting further engagement [400-433][455-456]. The tone remains supportive throughout, shifting from informal banter to focused problem-solving and finally to a forward-looking, advisory stance.


Speakers

Preetam Sukumar – Technology and Research Leader, Avanti Fellows; focuses on AI-driven educational tools and impact measurement. [S1][S2]


Erica Arya – CEO, Project Tech for Dev; leads tech advisory for NGOs, develops open-source platforms and cohort-based AI programs. [S3]


Tamina Madon – Co-founder, Agency Fund; works at the intersection of technology, philanthropy, and economic mobility, investing in services that empower human agency and create social impact at scale. [S4][S5]


Steven Suting – Director of Technology and Product, Quest Alliance; oversees AI-powered digital learning platforms for youth across India, emphasizing data-driven innovation. [S6]


Mainak Roy – Co-founder & CEO, Simple Education Foundation; builds AI-enabled teacher support tools (e.g., Simple Teacher Buddy) and focuses on scalable education solutions. [transcript]


Elizabeth – AI conversational assistant used to introduce panelists; no formal title or affiliation mentioned.


Manohar Sreekanth – Moderator; Partner and CTO at Satva Consulting, working at the intersection of tech, data, and AI for social-sector impact. [S11]


Additional speakers:


None identified beyond the listed speakers.


Full session report: Comprehensive analysis and detailed insights

The session opened with moderator Manohar Sreekanth welcoming the audience and, in keeping with the AI-centric theme of the summit, delegating the introductions of the panelists to the conversational assistant Elizabeth [1-13]. After a brief technical glitch that was quickly resolved [10-17], Manohar asked Elizabeth to introduce each participant, setting a tone of experimentation and collaboration.


Manohar then framed the core of the panel: a four-month AI cohort programme that ran from September to December, anchored by Project Tech for Dev, and involving seven nonprofit organisations developing AI use-cases [50-55]. He outlined the agenda – the genesis of the programme, its structure, the NGOs’ learnings, and actionable take-aways for the wider sector [56-59].


Rationale for a cohort model

Erica Arya explained that the cohort approach was chosen because it enables hands-on mentorship, peer learning and deep-rooted feedback from a small, focused group of NGOs [68-74]. She highlighted that the programme built on the earlier Data Catalyst cohort, which showed that intensive work with a limited set of organisations yields valuable insights that can be fed back into the platform [70-73]. The design also deliberately lowered barriers such as the lack of engineering resources by providing mentors and ensuring leadership buy-in within each NGO [84-93].


Tamina Madon reinforced this logic from a funder’s perspective. Drawing on the Silicon Valley accelerator philosophy, she described how the Agency Fund creates a pool of ten technical staff and product-management experts that can be shared across participating NGOs, a model far more efficient than each organisation hiring full-time AI engineers [106-138]. She further noted that product-management talent is scarce in the social sector, making the shared-resource approach especially valuable [135-138].


Programme design details

* Open-application & screening process – NGOs submitted applications, then participated in calls to verify fit and to confirm a requirement for a dedicated internal resource [84-93].


* Mentorship model – each nonprofit was paired with one or two mentors drawn from Project Tech for Dev [84-93].


* In-person workshops – the cohort included workshops where each NGO presented its progress in person, fostering direct peer exchange [84-93].


AI use-cases developed by the NGOs

1. Simple Teacher Buddy (Simple Education Foundation) – Mainak Roy described teachers’ need for context-specific pedagogical advice and how the team built a WhatsApp-based chatbot that delivers evidence-based recommendations [151-162][164-168]. A practical glitch where the bot failed to recognise a simple “hi” greeting broke the data-capture flow, leading the team to implement a conversation guardrail that treats any first message as a greeting [241-246]. Mainak also noted the use of a “golden dataset” to validate AI outputs [418-419].


2. Quest Alliance’s personalised learning platform – Steven Suting outlined the challenge of heterogeneous learners and teacher shortages, prompting the development of an AI system that emulates human cues (tone, confidence, etc.) to personalise learning pathways and nudge behaviour [165-180][181-186][253-266]. He emphasized that the problem quickly became a behaviour-science issue rather than a purely technical one and warned against “pie-in-the-sky” thinking that neglects the need for behavioural expertise [258-266].


3. Avanti Fellows’ AI-generated mentorship script – Preetam Sukumar recounted an initial attempt to improve student test reports, which pivoted to an AI-summarised script guiding teacher-student conversations and providing low-touch mentorship for online learners [185-200]. The pilot involved 15 teachers conducting 57-75 scripted conversations, but the team faced LLM hallucinations (e.g., incorrect performance trends) that required intensive prompt engineering [207-214][225-227]. Preetam also highlighted that AI can support grant-writing and code generation for internal nonprofit work [185-200].
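The “any first message is a greeting” guardrail built for Simple Teacher Buddy could be sketched roughly as below. The function name, message shape, and reply text are illustrative assumptions, not the actual bot code:

```python
# Hypothetical sketch of the conversation guardrail: treat ANY first
# message ("hi", "Hello!", or an unrelated question) as a greeting, so
# the data-capture flow always starts from a known state.

GREETING_REPLY = "Hello! I'm Teacher Buddy. What classroom challenge can I help with?"

def handle_message(text: str, is_first_message: bool) -> str:
    """Route an incoming WhatsApp message (assumed interface)."""
    if is_first_message:
        # Guardrail: never let an unrecognised opener break the flow.
        return GREETING_REPLY
    # Otherwise continue the normal recommendation flow (stubbed here).
    return f"Looking up strategies for: {text.strip()}"
```

The point of the sketch is that the guardrail removes the dependency on exact trigger words like “hi”, which was the glitch reported in the pilot.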


Technical and ethical challenges

Across the projects, participants reported LLM hallucinations that undermined trust [209-214][225-227] and bot interaction glitches such as the missing “hi” trigger [241-245]. Steven noted the difficulty of translating human behavioural signals into AI responses, expanding the solution space beyond software engineering into psychology [258-266].


To address these risks, the cohort engaged two knowledge partners: Digital Future Labs, which supplied expertise on responsible-AI design, and Statle, which focused on AI safety [300-304]. Erica stressed that these safeguards were baked in from day one, rather than added later [308-312]. The panel also referenced the use of slur-lists and guard-rail plugins that filter inappropriate outputs, a practice demonstrated in the chatbot design [324-334].
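A slur-list output filter of the kind the panel referenced can be sketched minimally. The blocklist entries and fallback behaviour here are placeholder assumptions, not the cohort’s actual plugin:

```python
# Minimal sketch of a slur-list guard-rail: if any blocked term appears
# in the model's output, replace the whole response with a safe fallback.
import re

BLOCKLIST = {"slur1", "slur2"}  # placeholder entries; a real list is curated
FALLBACK = "Sorry, I can't share that response."

def filter_output(text: str) -> str:
    """Return the model output, or the fallback if it contains a blocked term."""
    tokens = {t.lower() for t in re.findall(r"[\w']+", text)}
    if tokens & BLOCKLIST:
        return FALLBACK
    return text
```

Tokenising and lower-casing before matching catches casing and punctuation variants, which a naive substring check would miss or over-block.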


Erica further described concrete open-source integrations: the team used Superset for dashboards and integrated the Glyphic chatbot with OpenAI, illustrating a pragmatic reuse of existing tools [449-454].


Collaboration and cross-NGO learning

The cohort’s collaborative ethos produced concrete synergies. Erica recounted how two health NGOs, initially working in isolation on a high-risk pregnancy predictor, discovered overlapping goals and chose to collaborate [368-374]. A third NGO developing an assessment model was also brought into the conversation, illustrating how the programme facilitated ecosystem-wide learning beyond the immediate participants [375-381].


Evaluation and impact measurement

Tamina introduced a four-level evaluation framework (user evaluation, monitoring, impact assessment, and learning) that she argues is rare in the private sector but essential for social-impact work [270-277]. This framework links directly to the cohort’s emphasis on continuous impact measurement. Manohar described the programme’s operational checkpoints – open applications, screening, mentorship, and leadership commitment – which together create a feedback loop for ongoing learning [84-93]. The NGOs themselves employed a golden dataset to validate AI outputs, a practice highlighted by Mainak [418-419].
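As a rough illustration of the “golden dataset” practice mentioned above, a small set of vetted question-and-answer pairs can be scored against model outputs. The data, exact-match scoring rule, and function names are hypothetical:

```python
# Hedged sketch of golden-dataset validation: compare a model's answers
# against a small, human-vetted reference set and report an accuracy score.

GOLDEN_SET = [
    {"question": "Which class improved most?", "expected": "Class 8B"},
    {"question": "Topic needing revision?", "expected": "Fractions"},
]

def evaluate(model_fn, golden=GOLDEN_SET) -> float:
    """Fraction of golden questions the model answers exactly."""
    hits = sum(
        1 for item in golden
        if model_fn(item["question"]).strip() == item["expected"]
    )
    return hits / len(golden)
```

In practice teams often replace exact matching with fuzzy or rubric-based scoring, but the feedback loop is the same: every model change is re-scored against the vetted set before release.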


Practical recommendations for NGOs

Panelists converged on several actionable insights:


* Start from real pain points rather than chasing AI for its own sake – map existing inefficiencies before considering AI [400-402][405-408].


* Leverage open-source tools and existing platforms that meet most requirements before building custom solutions – use open-source platforms that satisfy 60-70 % of needs, then customise the gaps [426-433]; integrate tools like Superset and Glyphic-OpenAI [449-454].


* Embed responsible-AI safeguards early through partnerships or built-in guardrails – collaborations with Digital Future Labs and Statle [300-307]; in-house slur-lists and guard-rail plugins [324-334].


* Adopt systematic evaluation (user, monitoring, impact) and maintain a golden dataset to ensure output quality [270-277][418-419].


Differing perspectives

The panel offered differing perspectives on capacity provisioning: Tamina advocated a shared pool of technical staff [106-138], whereas Mainak and Preetam argued that scaling to thousands of users will require additional dedicated AI engineers and a curated golden dataset [418-419][225-227]. A second divergence concerned evaluation emphasis: Tamina’s detailed four-level framework [270-277] contrasted with Manohar’s focus on mentorship and leadership buy-in without an explicit evaluation stage [84-93]. Finally, the responsible-AI implementation approach varied: Erica preferred external knowledge-partner collaborations, while Steven highlighted in-house guard-rail plugins [300-304][324-334].


Key take-aways

* Cohort-based AI programmes enable hands-on mentorship, peer learning and pooled technical resources, making AI adoption feasible for NGOs with limited engineering capacity.


* Embedding responsible-AI and safety from the start (via partners and guard-rails) helps avoid later ethical and quality issues.


* Collaboration among NGOs uncovers overlapping use-cases, reducing duplication and fostering joint development (e.g., health-risk prediction, assessment models).


* Practical solutions built during the cohort included a WhatsApp chatbot for teacher guidance, a behaviour-nudging learning platform, and an AI-generated mentorship script.


* Common challenges were bot interaction glitches, LLM hallucinations, and the difficulty of capturing human behavioural signals.


* NGOs are advised to identify pain points first, reuse open-source tools, and apply systematic evaluation frameworks (including a “golden dataset”) to ensure trust and scalability.


Thought-provoking remarks

* Erica’s observation that “cohort programmes let us learn deeply from a small set of NGOs, feeding those learnings back into our platforms” set the strategic tone [64-71].


* Tamina’s comparison of the cohort to Silicon-Valley accelerators, emphasizing the loneliness of founders and the value of a shared resource pool, highlighted a novel funding model for the social sector [110-118][123-129].


* Mainak’s decision to use WhatsApp, already embedded in teachers’ workflows, demonstrated pragmatic technology adoption [155-163].


* Steven’s comment that “when we try to emulate a human teacher, the problem becomes a behaviour-science challenge rather than just a software problem” broadened the conversation to interdisciplinary design [258-266].


* Tamina’s four-level evaluation framework underscored the importance of impact measurement beyond typical private-sector metrics [270-277].


* Erica’s note on partnering with Digital Future Labs and Statle to embed responsible AI from day one illustrated proactive ethical design [300-307].


* Preetam’s reminder to “stop looking at AI use-cases first; start with pain points” refocused the panel on needs-driven innovation [400-402].


* Erica’s final advice to “don’t reinvent the wheel; start with open-source platforms that meet 60-70 % of your needs” offered a concrete, cost-effective strategy [426-433].


Follow-up questions raised

At the close, Manohar posed several follow-up questions to the panel, including inquiries about evaluation of Gemini or Claude models, the on-the-ground challenges of the WhatsApp chatbot, the scale and rollout status of the AI solutions, and the use of guard-rail plugins for AI safety [440-442][447-454].


Closing remarks

Manohar thanked the participants, highlighted the rarity of early-stage collaboration in the for-profit world, and noted that next year a digital assistant will help moderate events, reinforcing the summit’s AI theme [455-456]. He invited the audience to consult the published documentation and blogs from Project Tech for Dev and the participating NGOs for deeper insights [455-456].


Session transcriptComplete transcript of the session
Manohar Sreekanth

Not sure why exactly you came here, but I’m glad you’re here. We’ll make it worthwhile. So my name is Manohar, and I’ll be moderating this session. Usually what happens is in the beginning of such a panel discussion, you know, we tend to introduce the panelists. And since we are at an AI summit, so I thought, and we are all in the habit of delegating things to AI these days, I thought, why not let the AI itself introduce some of the panelists, right? We don’t know what they’re going to say. Hello, Elizabeth. How are you doing today?

Elizabeth

Hi, Manohar. I’m doing great and ready to help. What’s on your mind today?

Manohar Sreekanth

You couldn’t hear that, right? One second. Can you speak a little louder, Elizabeth?

Elizabeth

Of course, I can definitely speak up for you. Just let me know what you need, and I’ll match your pace.

Manohar Sreekanth

All right. No, right? Can’t hear. Let’s try again. And I can hear you. Usually this should work.

Elizabeth

Yes, absolutely. This usually works just fine. If there’s anything specific you want to try again or clarify, just let me know.

Manohar Sreekanth

Okay, Elizabeth, I am sitting in front of a panel right now and I need your help to introduce the panel members. Can you help me do that?

Elizabeth

Absolutely. I’ve got their profiles right here. Just let me know how you want to frame it and I can help you introduce them smoothly.

Manohar Sreekanth

Okay, let’s start with Tamina.

Elizabeth

So, introducing Tamina Madon. She’s the co-founder of The Agency Fund. Tamina works right where technology, philanthropy and economic mobility meet. She invests in services that empower human agency and create social impact at scale.

Manohar Sreekanth

All right, thank you. Next up is Erica.

Elizabeth

Next we have Erica Arya. She’s the CEO of Project Tech for Dev, which collaborates across the Global South to help non-profits use tech and data at scale. Under her leadership, they’ve supported over 200 NGOs with open-source platforms and advisory.

Manohar Sreekanth

Very nice, thank you. Next we have Mainak.

Elizabeth

Now we have Mainak Roy. He’s the co-founder and CEO of Simple Education Foundation. Mainak partners with state governments across India to enhance teaching and learning at scale. He’s also exploring how AI can responsibly support teachers and public education.

Manohar Sreekanth

Thank you. Next up is Steven.

Elizabeth

Up next is Steven Suting. He’s the director of technology and product at Quest Alliance. Steven leads AI-powered digital learning platforms that reach youth across India, using data-driven innovation to improve learning outcomes.

Manohar Sreekanth

Thank you. And then we have Preetam Sukumar.

Elizabeth

And he is Preetam Sukumar. He’s the technology and research leader of Avanti Fellows. Preetam drives tech and research efforts to use data and digital platforms to uplift public school students’ life outcomes across India.

Manohar Sreekanth

Excellent. And now finally we have this chap moderating this session. His name is Manohar. Can you tell me something about him?

Elizabeth

And finally, our moderator, Manohar Sreekanth. He’s partner and CTO at Satva Consulting. Manohar works at the intersection of tech, data, and AI, creating solutions that help social sector organizations scale their impact.

Manohar Sreekanth

Excellent, Elizabeth. I think you did a stellar job. Thank you so much. You wanted to do the panel for us. Next year, I’m sure, we’ll have some digital assistant sitting here. But anyway, I hope that was interesting, and it did a good job as well. So the main theme for this discussion is the AI cohort program that was run from September to December. It was about a four-month program, which was anchored by Project Tech for Dev, and about seven nonprofits built their AI use cases in the course of these four months. So we’re going to talk about the genesis of the program, why it was structured that way, and also some of the learnings and experiences.

And finally, depending on, I’m not sure how many NGOs are here, but if you want some takeaways on how a nonprofit can approach AI, that’s also something we’ll cover. I’m going to begin with Erica, because Project Tech for Dev was the entity which came up with the idea for such a program. Erica, can you give us a brief overview? Maybe start a little bit with Tech4Dev itself, and then why you thought about such a program in a cohort model. What was the genesis, what was the motivation? And something about the structure also.

Erica Arya

Thanks, Manohar. So, good morning everyone. The AI did a bit of the introduction for me, and I’ll just add a little human element to it. So, yes, we at Project Tech4Dev develop open-source tech platforms for the social sector. That’s one part of our work. And in addition, we do a lot of tech advisory for the non-profits. And as we have been working with more than 200 non-profits, mostly in India, what we realized was that, yes, we have these open-source platforms which nonprofits can come and use as a SaaS platform, and we also provide consulting. And through our tech advisory, what we learned was that there are times when you really need to handhold nonprofits in taking that first step towards technology, or, for those who have already taken that step, to help them move to the next level.

And we have in the past run a Data Catalyst program also, which was in collaboration with Dasra, and I’m just assuming you all would know this name in the social sector. They’ve been running these cohort-based programs, and we felt that when we do these deep-rooted programs with a small set of NGOs, we’re able to learn a lot from them. And that actually feeds into the platforms that we build, because the way we build our platforms is not top-down; it is always listening to what the non-profits on the ground need, and I feel these cohort-based programs, at least for Tech4Dev, enable us to get those learnings.

In addition to that, even within this small cohort that comes together, there is a lot of peer learning and sharing that happens. It helps us see the commonalities across the use cases the NGOs are working on, and it helps them further collaborate on them. So why rebuild things? Or if we are building new things, can we make them for the sector? Can we reuse the intellectual property that we have from working with one or two other partners in the cohort? How can we leverage that and make it available more widely for the sector? So we were learning from our Data Catalyst program, which was a cohort-based program, and from a program on AI that The Agency Fund was also running,

and it was called AI for Global Development. We felt that while The Agency Fund program was working more with the nonprofits who were much further ahead in their journey in technology and in using AI, we could come down a level and work with those nonprofits who wanted to take that step of integrating AI into their work. So they had a good use case, but maybe funding was a challenge, or maybe resources were a challenge, because in the sector, especially in the NGO space, we know that having engineering resources is a very big challenge, and many nonprofits do not have that. So is that a barrier for them to take that step towards adopting AI?

So we wanted to bring down those barriers by introducing mentors into this program. I’ll quickly tell you about the structure of the program. We got seven NGOs together. It was an open application; screening was done. We had calls with the NGOs to really understand that their use case fit the program. It was a free program for the NGOs, because as Tech4Dev, we believed it was a pilot for us. When we are running a pilot, we are learning together in it, and if we have the funds to encourage these NGOs to come together, then we should play that role. So there was a proper evaluation of the use case, also ensuring that the nonprofit had a resource who could devote time to running this for the period of the four months,

someone who could anchor it within the organization. There had to be leadership buy-in to onboarding onto the program, because for technology to get integrated, it is very important that leadership buy-in is there, given the investment that Tech4Dev was making. So that’s the reason why we felt we could unlock these couple of barriers that these non-profits have, and today we have three such non-profits on this panel who are going to share with you what this cohort enabled for them. So I think, yeah.

Manohar Sreekanth

Right. Thank you, Erica. I mean, you talked a lot about the goals and the intent. We will hear from the non-profits how much of that intent was also realized, I think. And you can be candid, I’m sure, all of you. Right. We’ll move on to Tamina now. Tamina from The Agency Fund. Do give a brief overview of The Agency Fund itself. But also talk to us about the motivation for funding such a cohort model, because typically funding can be given to specific non-profits themselves for their AI use cases. But this is a different model. What is the motivation for that? Maybe we…

Temina Madon

Sure. Thank you, everyone. I live in Silicon Valley, and I started life as an engineer, though I never pursued that career. One thing I saw in Silicon Valley over the last 20 years is that a lot of new ventures are started in cohorts. Y Combinator is a quite famous incubator for for-profit companies. Before we started The Agency Fund, I was working at South Park Commons, which is a venture capital fund and tech community. There’s one in Bangalore, one in San Francisco, one in New York City. And the way SPC operates is to bring founders together at the earliest stages in their journeys, as they’re trying to figure out what to build and how to build it and who the user will be.

And that’s a very lonely journey. It is filled with frustrations and failures, and it helps to have a cohort of colleagues in the same stage of life or in the same stage of product development. And so we brought that philosophy into The Agency Fund when we started: we bring together cohorts for accelerators. And we have participated in a lot of accelerators, including the Data Catalyst program that Tech4Dev has, and we run the AI for Global Development Accelerator as well. The AI one is a year long for us, and we bring established companies, non-profit companies, together, but they’re all building AI for the first time. So it’s that same experience of needing to understand your user, figuring out what to build, and going through the practice of putting together pipelines, experimenting, iterating, refining.

It’s a journey and it’s difficult, and so I do think that the cohort support helps a lot. The other thing is, as a funder, our perspective has been that while we don’t know everything, there are certain resources that are hard for non-profits to pay for themselves. And if we can create those resources in a pool format and make them available, we can perhaps be more efficient. For example, not every NGO is ready to hire two full-time AI engineers. Maybe they only need a part-time person right now as they’re building out the pipeline. So we’ve put together a pool of 10 technical staff who sit alongside the organizations that we fund in the accelerator.

We include product management in that as well. We’ve just added some additional people in product management because the AI technology is actually easy, especially there are a lot of talented engineers in India and elsewhere. What is difficult is to fit the technology to the pain points we all experience in life and build a product that achieves social impact. That’s really difficult. It’s so much further from what the typical Silicon Valley startup has to deal with. They just need to make money from people who already have money, right? We are trying to do something much more difficult with a population that’s much more excluded. And so, you know, there aren’t a lot of product managers in nonprofits.

If you’re in a nonprofit, raise your hand if you have a product manager. Oh, I’m surprised. Okay, there’s a decent number. But it’s not a role that has commonly been found in nonprofits. This is probably a special group. Some of them are from the organizations on the panel.

Manohar Sreekanth

Okay, fair enough. Not counted. Anyway, thank you, Tamina. All right, so that set the stage. Now you’ve got a sense of why both these organizations thought of a cohort model, why they set it up, and so on. Now let’s get into the trenches in terms of the experience of the non-profits themselves, right? So it would be good to start with the problem space. What problems were you trying to solve in the context of this particular program, and at a high level, how did AI come into the picture? We’ll get to the experiences and learnings a little later, but just to set the context of the problems, because again, we should remember that AI is a means to an end, right?

So understanding the problem that you’re trying to solve, and whether it was scale, inclusion, or reach in the context of your program, would be good to understand. Let’s start with you, Mainak Roy.

Mainak Roy

Thanks. I’m audible right? Yeah. Okay. So I’ll also add a bit to the intro that the AI did. So I’m also a teacher. And that’s how I started my journey into this space. And one of the things that as a teacher, I’d always struggled with is figuring out what is the right classroom pedagogical strategy that I need to use for a certain sort of class. And that often changed with the topics or the objectives that I was covering on the day. Right. And for a teacher, it’s often difficult when I’m doing eight lessons a day to sharpen my sort of focus and ensure that I get the perfect strategy. And that really was our sort of problem statement in many ways that can we build a platform or a tool which enables teachers to actually say that this is the class I’m teaching today.

This is the kind of classroom that I need. This is the kind of classroom that I have. And then can the AI throw out something which is evidence-based and has worked in other similar classes before, and support the teacher? So that’s what we wanted to do. And in India, if I asked how many of you use WhatsApp, probably all of you would put your hands up, and that’s the case for teachers in some of the most remote areas of the country as well. So we chose WhatsApp as a platform, and we used a chatbot kind of mechanism to solve that problem for our teachers.

Manohar Sreekanth

Okay, thank you, Mainak, for that intro. So that’s Simple Education Foundation; they built this Simple Teacher Buddy. We’ll come back to you a little later. Steven, Quest Alliance, what was the problem you were trying to solve, and a high-level overview of the solution?

Steven Suting

Thank you. It’s happening, no? I’m not sure if I can… yeah, it’s happening. So we work with young people from grade 8 to class 12, and then younger people who study in the TVET ecosystem, who are between about 16 to 25, right? And what we realized in the work that we did, and this wasn’t necessarily revolutionary, was that no two learners are the same, right? They’re different in the way they think, in the way they learn, in the confidence levels that they have. And that’s one dimension of difference. And when you think about the demographic difference, their opportunity of access, the gender inequalities, and you put all of this together, you have a classroom that’s fairly heterogeneous.

These are not homogeneous systems, right? But unfortunately, the educational structure around this is a chalk-and-talk mechanism. It largely treats all the students in the same way, right? And as a result of that, you’re not really meeting the learner where they are. So that’s one dimension. The other side of this problem is also that there’s a lot of stress on the availability of teachers, because in general there’s about a 1:30 student-teacher ratio, and that’s in a good scenario, right? So what tends to happen is these students don’t necessarily have access to the information that they need; you’re not really meeting the learners where they are. And so what we’re trying to do is really try and place a system that is able to hear the learners’ problems with respect to the area of career development.

Manohar Sreekanth

Okay, so you’re directly targeting the learners, right? Whereas Mainak’s use case was targeting the teachers. So we have yet another education use case, from Avanti Fellows. Preetam, tell us about it.

Pritam Sukumar

So hi, I’m Preetam from Avanti Fellows. And what Tamina said and what Steven said kind of resonated, because we struggled a lot to find a good use case for AI; we have a lot of pain points. Our teachers have pain points, our students have pain points, our program managers have pain points, and they’re all spread across the country in remote locations. So we started with a use case to improve the student report, because we reach about 200,000 students, and 98% of those are online learners. And they don’t get useful feedback from the test report; they just get a score and this and that. Whereas in our physical schools, there is a teacher talking to them.

There is a mentor talking to them. So we thought we’d improve the student report first, but that didn’t work out so well. Our mentor actually helped us a lot, Akilesh from Tech4Dev, and we iterated through a few use cases till we landed on actually replacing the teacher-student mentor conversation with an AI summary that can be useful for our low-touch, I mean, our online students. So it collects all the data on the student’s performance in the tests: how they’re doing, what their attendance is, what chapters they’re weak in, what chapters they’re strong in. And it generates kind of a script. For the teachers, the script is used to guide their conversation.

And for the online learners, the script can be used to just give them proper, sort of, mentorship guidance, so that they have actionable things to do for their next test or whatever they’re doing.
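A pipeline like the one Preetam describes, folding a student’s test, attendance, and chapter-level data into a single prompt that asks an LLM for a mentorship script, could be sketched roughly as below. Every name, field, and line of prompt wording here is hypothetical; Avanti Fellows’ actual schema and prompts are not described in the session.

```python
from dataclasses import dataclass, field

@dataclass
class StudentSnapshot:
    """Illustrative schema only; the real system's fields are not described."""
    name: str
    attendance_pct: float
    recent_scores: list = field(default_factory=list)   # oldest to newest
    weak_chapters: list = field(default_factory=list)
    strong_chapters: list = field(default_factory=list)

def build_mentorship_prompt(s: StudentSnapshot) -> str:
    """Fold structured performance data into one LLM prompt asking for
    a conversational script a teacher (or the student) can act on."""
    trend = ("improving"
             if len(s.recent_scores) >= 2 and s.recent_scores[-1] >= s.recent_scores[0]
             else "flat or declining")
    return (
        "You are a supportive academic mentor. Write a short script a teacher "
        "can use when talking to this student. Be specific and actionable.\n"
        f"Student: {s.name}\n"
        f"Attendance: {s.attendance_pct:.0f}%\n"
        f"Recent test scores, oldest to newest: {s.recent_scores} ({trend})\n"
        f"Weak chapters: {', '.join(s.weak_chapters) or 'none noted'}\n"
        f"Strong chapters: {', '.join(s.strong_chapters) or 'none noted'}\n"
        "Suggest two concrete actions before the next test."
    )
```

The point of precomputing the trend word in code, rather than letting the model infer it, is to keep easily verifiable facts out of the model’s hands.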

Manohar Sreekanth

Right, now let’s dig a little deeper. So where are we on this journey, Pritam? How was the solution deployed? How many people are using it? Is it in the pilot phase? Can you tell us more?

Pritam Sukumar

It’s, I think it’s somewhere between the pilot and the rollout. So around 15 teachers, I think, have had 57 to 75 conversations with students with these scripts. And we are working on feedback, because there are hallucinations, as LLMs do. But the hallucinations in this case are a little weird: the numbers are correct, but instead of saying “increased” it says “decreased”. So there is a lot of prompt engineering that we are doing to get this sort of fixed. And in the low-touch, online program, the rollout is kind of ready; we are just waiting to fix these small issues before rolling out.

Manohar Sreekanth

Okay, and what’s been the response from the teachers themselves who are getting these reports?

Pritam Sukumar

The teachers are very happy. I mean, it saves them a lot of time. One of the main points of feedback that we got from the teachers is that it saves them really a lot of time, because before such a conversation, they would have to look at the last four reports of the student. They would ask, I don’t know, what is happening at home? Are you getting device access, and all this? Now the students fill a form before the mentorship conversation, and we have the test data, and we give all this in a script to the teacher. So that has been the positive side.

On the negative side, the hallucinations really bother the teachers, because there isn’t 100% trust yet in what we’ve given them. And that we are still working through. I mean, the solution is not clear; maybe we need more AI engineers, like Tamina said. Because it’s not clear exactly what to do with these hallucinations: to us, the LLMs are like a black box. You just put something in, you get something out, and you don’t know what change will bring the change that you want. Yeah, that’s where we are now.
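The failure mode Preetam describes, correct numbers paired with an inverted trend word, lends itself to a cheap post-generation check alongside prompt engineering: compare the script’s direction words against the underlying data and regenerate on mismatch. This is only a sketch of that idea, not Avanti Fellows’ actual fix; the word lists and rules are illustrative.

```python
import re

def direction_consistent(script: str, scores: list) -> bool:
    """Return False when the script's trend word ('increased'/'decreased')
    contradicts the score data, i.e. the hallucination described above."""
    if len(scores) < 2:
        return True  # no trend to contradict
    went_up = scores[-1] > scores[0]
    says_up = bool(re.search(r"\b(increased|improved|went up)\b", script, re.I))
    says_down = bool(re.search(r"\b(decreased|declined|dropped|went down)\b", script, re.I))
    if says_up and not went_up:
        return False
    if says_down and went_up:
        return False
    return True
```

A caller could retry generation, or fall back to a templated sentence, whenever this check fails.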

Manohar Sreekanth

Right. Yeah, I guess the error rate, the hit ratio, and what kind of an impact it has depend on the use case. And if the teachers can be trained to address some of this, it might be one way out. Mainak, you talked about WhatsApp being the channel, right? So that’s probably something which eases adoption, because they don’t have to learn something new. It’s there; it’s part of the conversation. So now they’re getting some additional feedback and advice. But at the same time, and by the way, I know this because I was mentoring some of the folks here: this whole program had a mentorship model where each of these nonprofits was hooked up with one or two mentors, and those mentors guided the teams.

Right. So I happened to be one of the mentors for Simple Education. And we did face some interesting challenges, even if it was simplified from the WhatsApp point of view. So what were some of those challenges on the ground? Because when the rubber meets the road, that’s when you see. You would have imagined something; every product person imagines and starts with a hypothesis, but then it needs to be validated, right? So what were some of those things that we learned?

Mainak Roy

Yeah, I think there are a bunch of challenges, but my favorite challenge is actually this: we wanted teachers to start the conversation with the bot with a simple “hi”. But the moment teachers would get the QR code and scan it, they would start putting their problems in, and then the bot would glitch, because the bot did not get that first “hi”, and then the whole process of collecting the data we needed in order to give the teacher very specific guidance was not happening. So we ended up saying that whatever the teacher’s first message is, the bot recognizes it as a “hi” and then starts the conversation. Although we’ve kept telling the teachers to start with a “hi”, it does not happen, so we’ve worked around it. Yeah, that’s been one of the challenges. And then I think we’ve also seen a bit of teachers figuring out what kind of questions they can ask.

So sometimes there’s been some straying away from the kind of questions we want the teachers to ask. And because it’s AI, it would still throw out some response, right? So over there also, we’ve had to put some guardrails around what kind of conversations the chatbot will entertain, and what are the conversations that we would completely say no to. So, yeah. Right.
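The workaround Mainak describes, treating whatever the teacher sends first as the “hi” that starts the flow, reduces to a small piece of session logic. The function, session keys, and messages below are hypothetical; the real bot’s WhatsApp integration is not shown in the session.

```python
GREETINGS = {"hi", "hello", "hey"}

def handle_message(session: dict, text: str) -> str:
    """Illustrative sketch: any first message starts onboarding, so a teacher
    who leads with their problem instead of 'hi' doesn't break the flow."""
    if not session.get("onboarded"):
        session["onboarded"] = True
        # If the first message wasn't a greeting, it is probably the teacher's
        # real question: stash it, but still run the intake step first.
        if text.strip().lower() not in GREETINGS:
            session["pending_question"] = text
        return "Hi! Which class and subject are you teaching today?"
    return f"Got it. Let me find strategies for: {text}"
```

Stashing the non-greeting first message means the data collection Mainak mentions can still happen before the bot answers the teacher’s actual question.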

Manohar Sreekanth

Thank you. Steven, in your case, I think you had this slightly more ambitious plan for behavior change, right? You’re looking at nudging these learners towards a different kind of behavior, and you had a certain approach to that. Can you tell us a little bit about it? And how successful has it been?

Steven Suting

I think we’re still largely pie-in-the-sky thinking right now. So we still hope to be ambitious through the entire course of the project and not be deterred by how many users and how many sessions we have. We’re trying to be steadfast with that. I think for us, the fundamental premise was that we didn’t want to create a system that was really an information-dissemination system, right, because there’s enough and more of that. We wanted to create something that emulated a human being, or emulated a teacher. And the intent over there was really to be able to pick up signals. If you imagine a conversation between a teacher and a student, there are multiple signals that a human being is able to pick up when they’re having a conversation, right?

Sometimes it can just be the tonality. It can be the speed at which they’re speaking. You can gauge confidence levels, right? But when you’re working with an interface like a bot, for example, you have limited access to the kind of signals that you can pick up. So then how do you decrypt conversation to be able to pick up these different signals? And then how do you use those signals to create a response that is intended to emulate a human being? And interestingly, when we started thinking like that, it stopped being a software problem and started becoming a behavior-science problem, right? And so for us, that was really the elevation of the situation. So right now we’ve got a plethora of people who are specialists in that space, and we’ve got the engineering and software folks to help us with the scaffolding, but the heart of the problem is really in understanding that human being, right?

Manohar Sreekanth

No, it’s taking it really to the next level. That’s so nice. So, Tamina, you’ve now heard some of these challenges, experiences, and so on, and I think you’ve worked with previous cohorts as well. I want to get a sense of how these Indian non-profits you’ve been working with and engaging with differ from the others you’ve engaged with in the U.S. and abroad, in terms of adoption, in terms of maturity, in terms of the kind of use cases they’re looking at. Can you just give us the lay of the land, comparing the two?

Temina Madon

then going to learn it. It’s about my belief, my confidence in myself as a new mom to take care of my baby and know what caregiving I need to provide. It’s about my role as a farmer, feeling that I’m going to be able to experiment with a new input and feeling more confident or thinking maybe I’ll try livestock because now I have better information and support. So we call that the user evaluation. That requires survey data collection. Most companies don’t do many surveys. Maybe they buy some survey data from a firm, but all of these orgs, once they’re deploying to enough people, will have monitoring and evaluation systems that really try to understand the user.

And then the fourth motor we switch on, we call impact evaluation, and that’s when you have tens of thousands, hundreds of thousands of users, and you want to understand whether the product that they’re using is helping them make a change in their health outcomes, in their education or livelihood outcomes. So that’s what we call impact evaluation. And so we almost never see that fourth stage of evaluation in the private sector. That’s something we do in the social sector. But we think that it’s a nice way, this kind of four -level framework, to think about how to continuously learn from the product you’re deploying and then prove its impact to society and hopefully bring in more funding to scale.

Manohar Sreekanth

Right. No, thank you, Tamina. In fact, for those of you who are interested in that framework, it’s a very powerful, interesting framework: search for the Agency Fund evaluation framework, and I’m sure you’ll find it. It’s a very useful resource. Right. So, in fact, evaluation is one such orthogonal topic, but there were a couple of others as well. So, Erica, from a program-structure point of view, you people had some knowledge partners come in, right? It was not just these nonprofits hooked up with mentors; you also had some others. Can you tell us a little bit more about those knowledge partners, their roles, and so on, and then we’ll get into how they impacted the work they were doing.

Erica Arya

Thanks, Manohar. I think this is a very critical part of the entire program design. So at Tech4Dev, while we are a bunch of engineers and CTOs in the organization, we know that the problem we are trying to solve is not something that we can solve alone. And in whatever we do, we don’t want to reinvent the wheel, and we know we don’t know it all. So we always work in our programs through collaborations and partnerships with people who have expertise in those areas, so that we can get the best to the cohort. And so as part of this AI cohort too, what was the overall goal of the program?

It was that the non-profits could unlock those barriers which were keeping them away from building and deploying AI solutions that would help them bring in more efficiency, help them scale, and create more impact on the ground. But as technologists, we could say, okay, we’re just building something, but are we building it in the right way? That’s what we wanted to address. And so we collaborated with two knowledge partners. One was Digital Future Labs, who brought in expertise around integrating responsible-AI principles into your design as you are working on a use case. And the second knowledge partner was Statle, which works in AI safety. So we actually wanted that when the nonprofits were working as part of this cohort, they had that thinking of responsible AI and AI safety built into their design right from the get-go.

It’s not something that you think about later: okay, let me just build something, and then I’ll come to responsible-AI practices, or think about AI safety a couple of months after I’ve done my pilot, when I actually see things breaking. We wanted to build that in right from the get-go. And that’s why we collaborated with partners, because we knew we did not know this bit. We definitely contributed through, as Manohar said, the mentorship side: a lot of mentors came in from Project Tech for Dev, because we knew that is one piece we feel we do well, and we collaborated with these partners for the rest.

And I hope that this helps the nonprofits in designing better solutions.

Manohar Sreekanth

Right. No, that is a very unique part of this program, actually, because anyone who’s in software would know the typical attitude: quality and all these things we’ll look at later; let’s build and get the functionality out. And in the context of AI, AI safety and responsible AI tend to be the first things to become an afterthought, but here they were baked in. So now let’s hear from some of the non-profits: how did this influence the way you approached building this out, how did AI-safety or responsible-AI thinking influence you, and how did it actually help on the ground? Steven, would you like to say something?

Steven Suting

Sure. I think for us, it was supremely helpful, primarily because the artifacts that these organizations helped put together were in some ways like a blueprint for us, right? And when you have a blueprint in front of you, you have a sense of what the different moving parts are. And what that allowed us to do is rapid-prototype in a fairly modular way, which allowed us to fail fast, right? And if we were failing fast, we were learning faster. And I think what can typically happen in this space of human development, because the dimensionality of problems can be fairly large, is this phenomenon of analysis paralysis; it can be overwhelming to decide what you pick and choose, because they all seem like really important problems. But having frameworks like what Tamina was talking about helps you chunk things into where a problem is best suited and where it is most relevant, and then you solve for that in that section. That helps influence what you’re going to do in the next step, and then you kind of shuttle back and forth till you’ve crossed a certain critical threshold of it making sense for the problem that you’re solving.

Manohar Sreekanth

One of the things I remember from the AI safety part was what Tattle came up with: plugins built around a slur list, a crowdsourced list of words which the AI is not supposed to use. There are also guardrails plugins from Llama and so on. The fact that those things can be plugged into your bot to make sure it’s behaving the right way was, I thought, an interesting thing. Did any one of you use any of those? Not yet? Right. For the simple, yes. I mean, I remember the guardrail. So, the bot which Mainak was talking about.
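The slur-list idea mentioned here can be sketched as a simple output filter. Everything in this sketch (the list contents, function name, fallback message) is illustrative, not the actual Tattle or Llama guardrail plugins:

```python
# Illustrative sketch of a slur-list guardrail: the bot's draft reply is
# checked against a crowdsourced blocklist before it reaches the user.

SLUR_LIST = {"slur_a", "slur_b"}  # stand-in for a crowdsourced word list

def guard_reply(draft_reply: str) -> str:
    """Return the draft reply, or a safe fallback if it contains a listed word."""
    words = set(draft_reply.lower().split())
    if words & SLUR_LIST:
        return "Sorry, I can't share that response."
    return draft_reply

print(guard_reply("Here is a helpful answer."))  # passes through unchanged
print(guard_reply("that was a slur_a remark"))   # replaced with the fallback
```

In a real deployment this check would sit after the model call and before the message is sent, which is what makes such guardrails pluggable across different bots.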

So there you could ask any question. And the whole thinking process was to make sure that even if a random question comes, say a teacher asks, “What do I do?”, a question the bot is not supposed to answer, how does it respond? That’s something we started thinking about and baked into the whole design itself, and that was used for testing. Okay. Now, one of the unique features of this program was also the whole collaboration piece, because they are not working in isolation. So we had in-person workshops where each of them actually came and presented, and they had sessions where, you see, all three of them are working on education use cases, and there are some others as well.

So how could they learn from each other? That’s one. The other thing is, and I think I’m sure you’ll recognize, you probably are also looking at similar problems in this space. How do you not reinvent the wheel? How can you leverage some of the work that’s been done by others? So this collaboration aspect also came in in a very strong way. So I want to understand from some of you, so whom else did you speak to? How did this play out as a part of the program? How did this program structure influence the way you went about it? And how was it in comparison to, let’s say, you were going about it by yourself? What was the difference?

Anyone? Anyone?

Steven Suting

I think one thing that was nice about the program design was that, by the nature of the conversations we were having, there was this sense of porousness, that we weren’t holding our cards too close to our chest. It was okay for me to learn from what you were doing well and what you weren’t doing well. And if I had that conversation with you, it would elevate, or sort of catalyze and expedite, certain problems we were trying to solve. For example, when we were thinking about how you do responsible AI, classification, behavior: these are all different parts of a fairly large problem.

And we had a certain approach in mind. But through the cohort, we realized that people were trying to solve similar things, but they did it in a different way. For example, we didn’t think about this in the agentic way. So when people started saying this is what worked for us in that model of deployment, that in many ways expedited the way we solved that problem. And I think there are multiple examples like that. Also, the idea of responsible, for example, right? When you say responsible, it can cover a whole gamut of meanings. But what does responsible mean for a 25-year-old versus, say, a grade 8 student?

It’s significantly different. Now, if you’re, let’s say, an Avanti, and you’re working with a cohort that is similar to your cohort, what are they doing with responsible AI that you want to do, and what is the conversation we want to have? So I feel that was certainly helpful.

Manohar Sreekanth

Interesting. Erica?

Erica Arya

Yeah, I’ll just add in. So while these were seven NGOs, four were from education, and three are sitting here; there were also two health NGOs, and they were both working on building a predictive model to predict high-risk pregnancies. And neither knew the other was building something like that. But when the cohort came together, they discussed it, and the mentors also got together, and that’s when they realized: we are building the same thing, so are we happy to collaborate? The future calls that happened were about getting these two NGOs together and at least sharing and building it together. That was one thing.

Second, while we were working with these seven NGOs, one of them, which is not represented here, was working on an assessment model: how can we do assessments of answer sheets coming from students, because that takes up a lot of the teachers’ time. And since we’ve been working with 200 nonprofits, we were also aware of two other nonprofits trying to work on a similar solution. So, going beyond the cohort, we brought all three of them together, and we discussed the assessment models they were building, the rubrics. We were seeing a lot of commonality. So could we build this together and learn from each other’s mistakes?

So I’m just trying to share that the learning and sharing goes beyond the cohort too. And that is where platforms or organizations like us, who are more ecosystem players, have a role to play. So, yeah, these are two such examples where I could see collaboration happening just by bringing a set of seven NGOs together in a room.

Manohar Sreekanth

Right. No, thank you, Erica. Those are very interesting and powerful examples, also building on what Temina said earlier, contrasting this with the corporate space. I’ve spent a couple of decades on the corporate side, and this kind of collaboration at an early stage of technology is very, very rare; you don’t see it on the for-profit side. So it’s fascinating to see. And, of course, we are in a room where I’m sure many are working on similar kinds of problems, and just exchanging notes is so, so important. All right. Now, wrapping up, we have about six and a half minutes. I’d like to understand how many of you in the audience are from nonprofits.

Can you raise your hands? OK, quite a few. All right. So now we have keen nonprofit listeners. So the question to the nonprofits, and probably others here too, is: what are some of the learnings that you came away with, which surprised you or which you didn’t know before, that could be useful for the nonprofits in the audience as they go about their AI journeys? If you could share some of your nuggets. Pritam, please.

Pritam Sukumar

So I think one thing we learned is to stop looking for use cases for AI, but to look at pain points and troubles we already have and see if AI is a good fit there. It doesn’t have to be: AI is a new technology, let’s put it in and build something amazing and innovative. There’s already a lot of trouble in the way even our nonprofit runs, a lot of inefficiency. There is a lot that AI can do to help daily operations, like helping our fundraising team write grants or our programmers write code, which of course is the number one use case right now.

But yeah, that is one thing we really believe, at least I really believe now.

Steven Suting

You know, when AI started becoming really popular, it almost became a knee-jerk reaction for organizations to jump on that bandwagon. If you weren’t doing it, then you couldn’t get into the right rooms with the right kind of funders, and you would be lost in that race. Not always, but more often than not there is that pressure. And I think what has happened as a consequence is that some part of our DNA was forgotten. Civil society organizations have always been a species that asks questions, sometimes over-indexed on always asking questions. But when software development became so much easier to do, when AI became so much easier to do, we started leading with solutions and deploying solutions. And this is true for our space as well: when we go to a classroom, there will be five different NGOs solving the same problem. We all say we’re trying to work for the learner, we’re putting the learner at the center of design, but there are five not-for-profits all trying to solve the same problem for that learner without recognizing what we’re doing to that learner.

And we need to have that conversation, right? So sometimes I think it’s important to just take a moment and have that conversation, or do that assessment, before you start leading with solutions.

Manohar Sreekanth

Yeah, it reminds me of multiple people building apps for ASHA workers, who then have to interface with multiple apps themselves, right? Yes, Roy?

Mainak Roy

Yeah, no, definitely plus one to what both of them have mentioned. I think: don’t build what is sexy, build what is needed. We had a very different use case to start off with, and we ended up building something very different because we recognized that was actually the need of the hour. So that’s definitely one thing I would say. And the second piece is, I feel, especially when we are building for the nonprofit space in India, scale becomes a very important metric: we want to reach thousands and hundreds of thousands of teachers or students or whoever.

And I think in that whole space, we miss out on some fundamentals, like the responsible AI that we were talking about. So how do we build that in? That’s something I have learned through the process. There’s something we’ve been working on now called the golden set, to ensure that the responses the AI is giving are actually aligned with what we want the AI to give. And those pieces, if we fix them early on, make everything easier. That’s something we’ve learned through this process.

Manohar Sreekanth

Great. Thank you, Mainak. The golden data set you talked about is part of an evaluation framework, also something which is very strongly emphasized in the context of the program. We have a couple of minutes. We’d like to open up for questions, but before that, any last words? Erica, Temina?

Erica Arya

Yes. So, I totally resonate with what all three of my fellow panelists said. Don’t just get on to using technology because everyone else is doing it; only if it really meets your needs should you think about technology. And when you do, please look around: there might already be existing solutions, so don’t jump to building custom solutions, because what you are working on is likely something many others are working on too. That will help you save time, effort, and money. Even if an existing solution, say an open-source platform, is meeting 60-70% of your needs, just start with that. You can then improve on it and decide whether you want to use it or build something custom. So yeah, those are my thoughts.

Manohar Sreekanth

It’s a fantastic note to end on. Thank you. Thank you to the panel. Big hand for them. We have one minute and three seconds. Questions? I have a few hands up.

Audience member

Like a Gemini gem or a Claude project. Did you all evaluate that? Curious to know the journey of that build versus buy, or just adopt, for any one of you. Yeah.

Erica Arya

Yeah, I can just add, and maybe, you know, as you are using it. So when we were talking about, say, a chatbot: we built the Glyphic chatbot, and we’ve obviously integrated it with OpenAI. So it’s not that you are reinventing anything. And even now we are building an AI platform, and what are we doing in that platform? We are integrating different open-source elements which are out there. So even when we as an organization are building something new, we are picking up things which already exist, stitching them together, and then, when we speak to our NGOs about what is missing, adding layers and building just that part of it.

So it is always there in our way of working that we integrate existing open-source solutions rather than build. Even for dashboards: we have a data management platform, but we’ve not been creating dashboards on it; we’ve integrated it with Superset, which is another open-source solution. I know you may not know these names, but what I want to call out here is that even as a tech organization that builds platforms, we’re not building things from scratch. We are integrating with other tools that meet the needs, even when we are building our own platforms.

Manohar Sreekanth

Thank you. Actually, we’re at zero, but we’ll stay back if you have questions and have a conversation with you. Thank you for your time. It was great to see a big audience so engaged; I hope it was useful. Do reach out to any one of us if you want to find out more about the program. There are a lot of documents which have been written and published as blogs from Project Tech for Dev and some of the nonprofits, so happy to keep engaging with you going forward in this journey. Thank you so much, and thank you to the panel.

Related Resources: Knowledge base sources related to the discussion topics (33)
Factual Notes: Claims verified against the Diplo knowledge base (5)
Confirmed (high)

“The session opened with moderator Manohar Sreekanth welcoming the audience.”

The knowledge base confirms that the moderator’s name was Manohar, indicating a moderator was present at the start of the session [S1].

Confirmed (high)

“The cohort programme involved seven nonprofit organisations.”

The source states that seven NGOs were brought together for the programme [S2].

Confirmed (medium)

“Open‑application & screening process – NGOs submitted applications, then participated in calls to verify fit and to confirm a requirement for a dedicated internal resource.”

The knowledge base describes an open application followed by screening calls with the NGOs to assess fit [S2].

Additional Context (low)

“In‑person workshops – the cohort included workshops where each NGO presented its progress in person, fostering direct peer exchange.”

A related source notes that workshops are often transformed into blended, multi-month programmes to enable deeper engagement, providing context on the use of in-person and hybrid workshops in similar initiatives [S34].

Additional Context (low)

“The moderator delegated the introductions of the panelists to the conversational assistant Elizabeth.”

General descriptions of moderator duties indicate that moderators typically introduce speakers and manage the flow of the discussion, which aligns with the moderator handling introductions, though the delegation to an AI assistant is not mentioned in the knowledge base [S89].

External Sources (99)
S1
How nonprofits are using AI-based innovations to scale their impact — And he is Preetam Sukumar. He’s the technology and research leader of Antifellows. Preetam drives tech and research effo…
S2
https://dig.watch/event/india-ai-impact-summit-2026/how-nonprofits-are-using-ai-based-innovations-to-scale-their-impact — And he is Preetam Sukumar. He’s the technology and research leader of Antifellows. Preetam drives tech and research effo…
S3
How nonprofits are using AI-based innovations to scale their impact — – Erica Arya- Pritam Sukumar – Erica Arya- Steven Suting- Mainak Roy- Manohar Sreekanth – Erica Arya- Temina Madon- St…
S4
How nonprofits are using AI-based innovations to scale their impact — -Temina Madon- Co-founder of the Agency Fund, works at the intersection of technology, philanthropy and economic mobilit…
S5
https://dig.watch/event/india-ai-impact-summit-2026/how-nonprofits-are-using-ai-based-innovations-to-scale-their-impact — So, introducing Tamina Madon. She’s the co -founder of the agency fund. Tamina works right where technology, philanthrop…
S6
How nonprofits are using AI-based innovations to scale their impact — – Erica Arya- Steven Suting – Temina Madon- Steven Suting
S7
How nonprofits are using AI-based innovations to scale their impact — – Mainak Roy- Steven Suting- Pritam Sukumar – Steven Suting- Pritam Sukumar- Mainak Roy
S8
Open Forum #22 Citizen Data to Advance Human Rights and Inclusion in the Di — – Elizabeth Lockwood: Representative of the UN Stakeholder Group of Persons with Disabilities for Sustainable Developmen…
S9
Global AI Governance: Reimagining IGF’s Role &amp; Impact — Elizabeth Orembo: Thanks, Paloma. And I’m going to be unfair by not giving you a chance to have a parting shot. If I was…
S10
Masterclass#1 — The enquiry probes into the recent changes to the newest Geneva manual, particularly concerning the exclusion of the “wh…
S11
How nonprofits are using AI-based innovations to scale their impact — – Erica Arya- Steven Suting- Mainak Roy- Manohar Sreekanth
S12
Seismic Shift — 1. International Monetary Fund, ‘India’s Economy to Rebound as Pandemic Prompts Reforms’, November 11, 2021, https://www…
S13
https://dig.watch/event/india-ai-impact-summit-2026/how-nonprofits-are-using-ai-based-innovations-to-scale-their-impact — All right. No, right? Can’t hear. Let’s try again. And I can hear you. Usually this should work. And finally, depending…
S14
CISA unveils comprehensive roadmap for responsible AI integration — Cybersecurity and Infrastructure Security Agency (CISA)has introduced its comprehensive Artificial Intelligence Roadmapi…
S15
Industry leaders partner to promote responsible AI development — Anthropic, Google, Microsoft, and OpenAI, four of the most influential AI companies,have joined to establishtheFrontier …
S16
Day 0 Event #173 Building Ethical AI: Policy Tool for Human Centric and Responsible AI Governance — Alaa Abdulaal: So hello, everyone. I think I was honored to join the session. And I have seen a lot of amazing conver…
S17
Developing a Road Map for Engaging Diasporas in Development — Collective action also presents challenges. For example, when migrant organizations pool the donations of many individua…
S18
How AI Drives Innovation and Economic Growth — “First, model evaluation.”[124]. “Second, user impact.”[134]. “Second… scalability and usage at scale that’s more like…
S19
How AI Drives Innovation and Economic Growth — Both Kremer and Dhaliwal emphasised rigorous evaluation methodologies. Kremer outlined a four-stage framework: model eva…
S20
A Resource Guide to Public Diplomacy Evaluation By Robert Banks — That the Youth Ambassador program in Brazil has now achieved brand status there seems clear. Keys to this effort…
S21
Conversational AI in low income & resource settings | IGF 2023 — Ashish Atreja:That’s a great one. I think there is a clause that if you’re not intentional about something, then that’s …
S22
State of play of major global AI Governance processes — Regarding South Korea’s proactive engagement, the government showcased its dedication to the ethics of AI by embracing O…
S23
AI for Good Technology That Empowers People — Ambassador Reintam Saar from Estonia outlined the structure and objectives of the first Global Dialogue on AI Governance…
S24
Open Forum #58 Collaborating for Trustworthy AI an Oecd Toolkit and Spotlight on AI in Government — ### Initiative Background Jungwook Kim: Thank you. So Korea is ranked as one of the leading countries in OECD Digital G…
S25
WSIS Action Line Facilitators Meeting: 20-Year Progress Report — Development | Human rights | Online education UNESCO is providing policy guidance on AI in education, focusing on frame…
S26
AI & Child Rights: Implementing UNICEF Policy Guidance | IGF 2023 WS #469 — Steven:Thanks, Vicky. And good afternoon, everyone. Good morning to those online. It’s a pleasure to be here. So I’m a d…
S27
AI (and) education: Convergences between Chinese and European pedagogical practices — ## Future Considerations ## Implementation Challenges and Considerations Jovan Kurbalija: Definitely, just building on…
S28
Teachers see AI as an educational tool — Teachers have longworriedabout ChatGPT enabling students to cheat, with its ability to produce essays and solve problems…
S29
Advancing Scientific AI with Safety Ethics and Responsibility — -Global South Perspectives and Adaptation: A significant focus was placed on how emerging scientific powers can shape AI…
S30
Diplomatic policy analysis — Digital divides:Not all countries have equal access to advanced analytical tools, perpetuating inequalities in diplomati…
S31
personalised online learning driven by AI — Artificial intelligence (AI) is becoming relevant for online education, especially in terms of monitoring and personalis…
S32
Accelerating Structural Transformation and Industrialization in Developing Countries: Navigating the Future with Advanced ICTs and Industry 4.0 — Very low level of disagreement. The speakers were largely aligned on goals and strategies, with differences mainly in em…
S33
MASTERPLAN FLAGSHIP PROGRAMMES — – (iv). No. of automated government services. – (v). No. of Portal for government online services. – (vi). No. of ICT …
S34
Policies and platforms in support of learning: towards more coherence, coordination and convergence — 59. Most organizations in the United Nations system still rely primarily on face-to-face learning, which accounts for mo…
S35
IGF 2024 Opening Ceremony — This comment provided a structure for subsequent speakers to address specific aspects of AI governance and inequality. I…
S36
How nonprofits are using AI-based innovations to scale their impact — This programme specifically targeted nonprofits wanting to take their first step into AI, distinguishing it from Agency …
S37
Democratizing AI Building Trustworthy Systems for Everyone — “of course see there would be a number of challenges but i think as i mentioned that one doesn’t need to really control …
S38
GOVERNING AI FOR HUMANITY — – a. Catalyse and align regional and global AI capacity efforts by supporting networking among them; – b. Build AI gover…
S39
WS #266 Empowering Civil Society: Bridging Gaps in Policy Influence — Financial constraints: Limited funding severely restricts CSOs’ ability to participate in governance processes, especial…
S40
WS #294 AI Sandboxes Responsible Innovation in Developing Countries — Jai Ganesh Udayasankaran: Thanks, Sophie. I just wanted to quickly add what was shared by the speaker from data sphere. …
S41
Negotiating and Implementing MEAs: A Manual for NGOs — – ß Pooling resources in implementation: Similar benefits can be had in the implementation phase. For exampl…
S42
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — He emphasised the need for policy that balances principle-level guidance with practical guardrails whilst avoiding overl…
S43
Safe and Responsible AI at Scale Practical Pathways — “guardrails human in the loop risk assessment these are the tools which are available today …”[95]. “If we immediately…
S44
Agentic AI in Focus Opportunities Risks and Governance — “If the data can be manipulated, if the lineage of data is not properly understood, if it is not really governed, if the…
S45
Why science metters in global AI governance — helping member states move from philosophical debates to technical coordination, and anchor choices in evidence so polic…
S46
29, filed Jan. 22, 2010, at 9-10. — Longitudinal design takes into account the fact that the impacts of BTOP grants are likely to unfold over a longer time …
S47
Open Forum #64 Local AI Policy Pathways for Sustainable Digital Economies — Speakers demonstrated consensus on the viability of cooperative models for AI governance, with support spanning civil so…
S48
GOVERNMENT CLOUD POLICY — – i. Internet Access : Cloud computing resources are available over a network and can be accessed from a variety of dev…
S49
AI: Lifting All Boats / DAVOS 2025 — The main areas of disagreement revolve around the specific approaches to AI development and adoption in emerging markets…
S50
Comprehensive Report: Preventing Jobless Growth in the Age of AI — -Impact on young workers: Brynjolfsson’s data showing employment decline versus Kumar’s experience of increased graduate…
S51
Panel Discussion Summary: AI Governance Implementation and Capacity Building in Government — The discussion revealed a common theme across different contexts: the gap between policy ambition and implementation cap…
S52
Press Conference: Closing the AI Access Gap — The goal is to move from a narrative to action, where concrete steps are taken in both the policy side and the private s…
S53
How Multilingual AI Bridges the Gap to Inclusive Access — This comment identifies a critical bottleneck in AI development that goes beyond resources to human expertise. It highli…
S54
How nonprofits are using AI-based innovations to scale their impact — This programme specifically targeted nonprofits wanting to take their first step into AI, distinguishing it from Agency …
S55
AI for Good Technology That Empowers People — Ambassador Reintam Saar from Estonia outlined the structure and objectives of the first Global Dialogue on AI Governance…
S56
How to make AI governance fit for purpose? — – **Gabriela Ramos**: Moderator of the panel discussion, mentioned as running for a position at UNESCO and having worked…
S57
Open Forum #27 Make Your AI Greener a Workshop on Sustainable AI Solutions — Adham Abouzied: Yeah, yeah, very clear. I think we’ve been mainly looking at basically how to inspire what is the right …
S58
Open Forum #33 Building an International AI Cooperation Ecosystem — Sajid Rahman: Thank you, and good afternoon. You know, it’s a great pleasure to speak about something which is not only …
S59
https://dig.watch/event/india-ai-impact-summit-2026/how-nonprofits-are-using-ai-based-innovations-to-scale-their-impact — And that’s a very lonely journey. It is filled with frustrations and failures, and it helps to have a cohort of colleagu…
S60
WS #219 Generative AI Llms in Content Moderation Rights Risks — Marlene Owizniak: Yeah, and I’ll just add briefly on the AI lifecycle. ECNL has been working with Discord on piloting wh…
S61
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Panel Discussion — “For the health sector, we’re looking at our frontline health workers… giving them decision support tools that enable …
S62
How Small AI Solutions Are Creating Big Social Change — Household income, the inputs get better, and to see how they can access the markets and agriculture credit. Similar in h…
S63
Teachers see AI as an educational tool — Teachers have longworriedabout ChatGPT enabling students to cheat, with its ability to produce essays and solve problems…
S64
Generative AI and Synthetic Realities: Design and Governance | IGF 2023 Networking Session #153 — Heloisa Candello:Thank you, Diogo. I’m going to start. Hello everyone. I’m going to share my screen, and then we can sta…
S65
personalised online learning driven by AI — Artificial intelligence (AI) is becoming relevant for online education, especially in terms of monitoring and personalis…
S66
WSIS Action Line C7: e-Learning: Empowering Educators and learners: Enhancing Teacher Training and e-Learning for Digital Inclusion — Emmanuel Manasseh:Thank you very much, UNESCO, for preparing this important session, and I’m really grateful for the ITU…
S67
WS #134 Data governance for children: EdTech, NeuroTech and FinTech — Aki Enkenberg: Yes, absolutely. And I’m very happy to be here. Thanks, UNICEF, for inviting me to be part of the pane…
S68
AI revolutionises academic writing, prompting debate over quality and bias — In a groundbreaking shift for the academic world, AI now contributes to at least 10% of research papers, soaring to 20% …
S69
Open Forum #30 Harnessing GenAI to transform Education for All — Antonio Saravanos: So you bring up an excellent point, right? Unfortunately, it’s quite easy to detect the use of TAT…
S70
Advancing Scientific AI with Safety Ethics and Responsibility — And then people in biosecurity frameworks, like doing biosecurity frameworks, thinks that AI governance is like a tool. …
S71
When language models fabricate truth: AI hallucinations and the limits of trust — AI has come far from rule-based systems and chatbots with preset answers.Large language models (LLMs), powered by vast a…
S72
Open Forum #47 Demystifying WSis+20 — The tone was collaborative and constructive throughout, with panelists demonstrating shared commitment to the multi-stak…
S73
WS #51 Internet & SDG’s: Aligning the IGF & ITU’s Innovation Agenda — NGOs should consider creating platforms or scenarios where they can connect with potential partners and brainstorm solut…
S74
Keynote-Rishad Premji — Artificial intelligence Opening framing by the moderator
S75
Keynote-Vinod Khosla — This transcript contains only a single speaker (Vinod Khosla) presenting his vision for AI applications in India, with b…
S76
Setting the Rules_ Global AI Standards for Growth and Governance — The discussion maintained a consistently collaborative and constructive tone throughout. Panelists demonstrated remarkab…
S77
Ethical AI_ Keeping Humanity in the Loop While Innovating — The discussion maintained an energetic and collaborative tone throughout, with panelists frequently building on each oth…
S78
Emerging Markets: Resilience, Innovation, and the Future of Global Development — The tone was notably optimistic and forward-looking throughout the conversation. Panelists consistently emphasized oppor…
S79
Process coordination: GDC, WSIS+20, IGF, and beyond — Their struggle is seen as symptomatic of the broader challenges of unequal opportunities to contribute to international …
S80
High-Level Track Inaugural Leaders TalkX: Forging partnerships for purpose: advancing the digital for development landscape — The tone was collaborative and forward-looking, with speakers sharing both achievements and ongoing challenges in a cons…
S81
Informal multistakeholder session — It is a reflection of a collaborative ethos, endorsing the proposal as a catalyst for political unity and development, i…
S82
WS #138 Empowering End Users Voices in Internet Governance — The tone was largely constructive and collaborative, with participants building on each other’s ideas. There was a sense…
S83
Agenda item 6: other matters/OEWG 2025 — The overall tone was constructive and diplomatic, with most delegations expressing willingness to compromise and find co…
S84
Driving Indias AI Future Growth Innovation and Impact — The discussion maintained an optimistic and forward-looking tone throughout, characterized by enthusiasm for India’s AI …
S85
WS #305 Financing Self Sustaining Community Connectivity Solutions — The tone was consistently professional, collaborative, and optimistic throughout. Speakers demonstrated deep expertise w…
S86
Event page with the recording — The tone remains consistently enthusiastic, informative, and encouraging throughout the conversation. Both speakers main…
S87
Sovereign AI for India – Building Indigenous Capabilities for National and Global Impact — The moderator introduces himself at the start of the session, establishing his presence for the audience.
S88
The Global Power Shift India’s Rise in AI & Semiconductors — -Moderator: Role not specified in detail, appears to be the session moderator who introduced the panelists and managed t…
S89
Panel Discussion AI & Cybersecurity _ India AI Impact Summit — The moderator opens, transitions, and closes the session, guaranteeing that speakers are introduced, the discussion proc…
S90
Delegated decisions, amplified risks: Charting a secure future for agentic AI — – **Moderator**: Role mentioned as moderator of the session
S91
Global Standards for a Sustainable Digital Future — – **Participant**: Multiple unidentified participants who asked questions during the session Dimitrios Kalogeropoulos: …
S92
Panel Discussion Next Generation of Techies _ India AI Impact Summit — I think for my co -founder and I both it started off as this deep intellectual interest more than anything, being colleg…
S93
Closure of the session — The EU applauds the idea of a programme of action, which has had backing since 2020 from a broad range of states and reg…
S94
United Nations — 36. The international community has come a long way in its deliberation on the new development agenda. In July 2013,…
S95
Agenda item 5 : Day 4 Afternoon session — Thailand endorses the proposal by the Philippines to establish a cyber security capacity-building catalogue, designed to…
S96
The Mind and the Machine — Consented-to process of data sharing for limited purposes, such as specific studies, can help gain significant insights
S97
Main Session | Dynamic Coalitions — She highlighted the potential for these coalitions to contribute to data-driven policymaking and national assessments. …
S98
How can Artificial Intelligence (AI) improve digital accessibility for persons with disabilities? — Rosemary Kayess:Hello, thank you for the invitation to speak today. Article 27 of the Universal Declaration of Human Rig…
S99
https://dig.watch/event/india-ai-impact-summit-2026/transforming-health-systems-with-ai-from-lab-to-last-mile — And, you know, that hasn’t really been tested, and all of the policies around is that a disclosure, which is acceptable …
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Erica Arya
4 arguments · 160 words per minute · 1921 words · 717 seconds
Argument 1
Need for cohort model to provide hands‑on support and peer learning (Erica Arya)
EXPLANATION
Erica explains that many nonprofits need direct assistance when adopting technology, and a cohort format enables hands‑on mentorship and peer learning among a small group of NGOs. This approach allows the program to gather ground‑level insights that inform platform development.
EVIDENCE
She describes how Tech4Dev’s advisory work revealed the need to hand-hold nonprofits in their first technology steps and how cohort-based programs foster peer learning and sharing of common use-case challenges [68-74].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
External sources describe how the cohort model provides hands-on mentorship and peer learning among NGOs [S1] and highlight its role in reducing isolation for nonprofit teams [S2].
MAJOR DISCUSSION POINT
Rationale and design of the AI cohort program
AGREED WITH
Temina Madon, Steven Suting, Manohar Sreekanth
Argument 2
Partnership with Digital Future Labs and Statle to integrate responsible AI from the start (Erica Arya)
EXPLANATION
Erica notes that the cohort partnered with two knowledge partners to embed responsible AI principles and AI safety into project design from day one. This ensures that ethical considerations are not an afterthought but built into the solutions.
EVIDENCE
She identifies Digital Future Labs for responsible AI integration and Statle for AI safety, emphasizing their role in the program’s design [300-304].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Responsible AI integration is emphasized in CISA’s AI roadmap [S14], in industry collaborations such as the Frontier Model Forum [S15], and in multi-stakeholder ethical AI events [S16].
MAJOR DISCUSSION POINT
Embedding responsible AI and safety
AGREED WITH
Steven Suting, Manohar Sreekanth
DISAGREED WITH
Steven Suting
Argument 3
NGOs discovered overlapping projects (e.g., high‑risk pregnancy prediction) and chose to collaborate (Erica Arya)
EXPLANATION
Erica shares that two health NGOs in the cohort were independently building predictive models for high‑risk pregnancies, and the cohort environment enabled them to recognize the overlap and collaborate. This illustrates how cohort meetings can surface synergies and reduce duplicated effort.
EVIDENCE
She recounts the two NGOs realizing they were working on similar predictive models and deciding to collaborate during the cohort discussions [370-374].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The cohort revealed duplicate high-risk pregnancy predictive models and prompted the NGOs to collaborate, as reported in the case study [S1].
MAJOR DISCUSSION POINT
Value of cohort collaboration and knowledge sharing
Argument 4
Prefer open‑source or existing platforms that meet most needs before custom development (Erica Arya, Mainak Roy)
EXPLANATION
Erica advises NGOs to first evaluate open‑source solutions that already satisfy a large portion of their requirements, customizing only where gaps remain. This approach saves time, money, and effort while leveraging community‑maintained tools.
EVIDENCE
She recommends using existing open-source platforms that meet 60-70% of needs before building custom solutions, emphasizing integration of existing tools and adding layers only where necessary [426-433].
MAJOR DISCUSSION POINT
Practical advice for NGOs embarking on AI
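The reuse-first heuristic above can be pictured as a simple coverage score over a requirements list. This is only an illustrative sketch: the feature names and the 70% threshold are hypothetical placeholders, not taken from the session.

```python
# Sketch of the "reuse before build" heuristic: score how much of an NGO's
# requirement list an existing platform already covers, and only consider a
# custom build when coverage falls below a chosen threshold.

def coverage(required: set[str], provided: set[str]) -> float:
    """Fraction of required features an existing platform already covers."""
    if not required:
        return 1.0
    return len(required & provided) / len(required)

def build_or_reuse(required: set[str], provided: set[str],
                   threshold: float = 0.7) -> str:
    """Recommend reusing an existing platform when it meets most needs."""
    if coverage(required, provided) >= threshold:
        return "reuse-and-extend"        # customize only where gaps remain
    return "evaluate-custom-build"

# Hypothetical example: the platform covers 4 of 5 needs (80% coverage).
needs = {"sms", "surveys", "dashboards", "offline", "multilingual"}
platform = {"sms", "surveys", "dashboards", "multilingual"}
print(build_or_reuse(needs, platform))   # -> reuse-and-extend
```

In practice the threshold would be a judgment call per organization; the point is to make the "does something existing already do most of this?" question explicit before committing engineering effort.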
Temina Madon
3 arguments · 174 words per minute · 805 words · 276 seconds
Argument 1
Cohort approach inspired by VC incubators; pooling technical resources for NGOs (Temina Madon)
EXPLANATION
Temina draws on her experience with Silicon Valley incubators, explaining that cohort models bring founders together early to share challenges and resources. She applies this philosophy to the Agency Fund by creating a shared pool of technical staff for NGOs that cannot afford full‑time AI engineers.
EVIDENCE
She describes Y Combinator-style cohorts, the role of South Park Commons, and how the Agency Fund brings founders together, then explains the creation of a pool of ten technical staff and product managers to support funded NGOs [110-118] and [123-129].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The program created a pool of ten technical staff for funded NGOs, mirroring VC-style cohorts and addressing resource constraints [S1][S2].
MAJOR DISCUSSION POINT
Rationale and design of the AI cohort program
AGREED WITH
Erica Arya, Steven Suting, Manohar Sreekanth
Argument 2
Four‑level evaluation framework (user, monitoring, impact) to ensure responsible outcomes (Temina Madon)
EXPLANATION
Temina outlines a four‑stage evaluation process, spanning user evaluation, monitoring, and impact assessment, to continuously learn from deployed products and demonstrate societal impact. She notes that the final stage, impact evaluation, is rarely seen in the private sector but is essential for social‑sector initiatives.
EVIDENCE
She details the user evaluation, monitoring systems, and impact evaluation stages, emphasizing their role in measuring outcomes for health, education, or livelihood [270-277].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
A four-stage evaluation framework covering user experience, monitoring, impact assessment and continuous improvement is described in the literature [S18][S19].
MAJOR DISCUSSION POINT
Embedding responsible AI and safety
AGREED WITH
Manohar Sreekanth, Pritam Sukumar
Argument 3
Cohort model brings a collaborative culture uncommon in for‑profit sector (Temina Madon)
EXPLANATION
Temina argues that the cohort model fosters early‑stage collaboration among NGOs, a practice that is rare in for‑profit environments where companies typically work in isolation. This collaborative culture accelerates learning and solution sharing across organizations.
EVIDENCE
She references the cohort philosophy of bringing founders together and Manohar’s observation that such early collaboration is rare in the corporate world [115-118] and [386-389].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Cohort-driven collaboration among NGOs is highlighted as rare in for-profit environments, emphasizing the value of shared learning [S2][S1].
MAJOR DISCUSSION POINT
Value of cohort collaboration and knowledge sharing
Manohar Sreekanth
1 argument · 160 words per minute · 2355 words · 882 seconds
Argument 1
Program structure: open application, screening, mentorship, leadership buy‑in (Manohar Sreekanth)
EXPLANATION
Manohar describes the AI cohort’s operational design, highlighting that NGOs applied openly, were screened for fit, received mentorship, and required leadership buy‑in to ensure organizational commitment. This structure aimed to create a pilot environment where learning could be shared among participants.
EVIDENCE
He outlines the open application, screening calls, free pilot nature, evaluation of use cases, resource commitment, and leadership buy-in as key components of the program [84-93].
MAJOR DISCUSSION POINT
Rationale and design of the AI cohort program
AGREED WITH
Erica Arya, Temina Madon, Steven Suting
Mainak Roy
2 arguments · 170 words per minute · 729 words · 256 seconds
Argument 1
WhatsApp chatbot “Simple Teacher Buddy” for pedagogical guidance (Mainak Roy)
EXPLANATION
Mainak explains that teachers in remote Indian contexts often rely on WhatsApp, so the cohort built a chatbot on that platform to provide evidence‑based pedagogical suggestions tailored to each class. The bot aims to reduce teachers’ workload by delivering relevant strategies instantly.
EVIDENCE
He details the problem of teachers needing the right pedagogical strategy, the choice of WhatsApp as the delivery channel, and the development of a chatbot to address this need [155-163].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The WhatsApp-based Simple Teacher Buddy chatbot for teachers is documented in the program overview of the AI cohort [S1].
MAJOR DISCUSSION POINT
AI solutions built by participating NGOs
Argument 2
Bot failing to recognize initial “hi” and need for conversation guardrails (Mainak Roy)
EXPLANATION
Mainak reports a technical issue where the chatbot did not treat the initial “hi” as a start of conversation, causing data collection failures. The team introduced guardrails to ensure the bot only engages in approved dialogue flows.
EVIDENCE
He describes teachers scanning a QR code, the bot not recognizing the first “hi”, and the subsequent implementation of conversation guardrails [241-245].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Technical issues with the initial greeting and the subsequent implementation of conversation guardrails are reported in the cohort case study [S1].
MAJOR DISCUSSION POINT
Implementation challenges and technical issues
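A minimal sketch of the kind of fix described above: treat any greeting variant as the start of a conversation, and keep every other input inside approved dialogue flows. The greetings, intent names, and flow labels here are invented for illustration, not taken from the actual chatbot.

```python
# Illustrative message router with two guardrails: (1) a first message that
# looks like any greeting starts onboarding, instead of requiring one exact
# keyword; (2) mid-session messages are constrained to approved flows.

GREETINGS = {"hi", "hello", "hey", "namaste"}          # hypothetical variants
APPROVED_FLOWS = {"lesson_help", "classroom_tips"}     # hypothetical flows

def route(message: str, session_active: bool) -> str:
    text = message.strip().lower().rstrip("!.?")
    if not session_active:
        if text in GREETINGS:
            return "start_onboarding"
        return "prompt_greeting"    # nudge the user to a known entry point
    if text in APPROVED_FLOWS:
        return f"enter:{text}"
    return "fallback_menu"          # stay inside approved dialogue flows

print(route("Hi!", session_active=False))   # -> start_onboarding
```

The design choice mirrors the guardrail idea in the evidence: rather than letting the bot improvise on unrecognized input, every message is mapped to a known state so data collection begins reliably at the first "hi".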
Steven Suting
5 arguments · 182 words per minute · 1399 words · 461 seconds
Argument 1
AI‑driven personalized learning and behavior‑nudging platform for youth (Steven Suting)
EXPLANATION
Steven outlines that learners are heterogeneous in ability, confidence, and access, yet the education system treats them uniformly. Their solution uses AI to personalize learning pathways and nudge behavior, aiming to meet each learner where they are.
EVIDENCE
He discusses learner heterogeneity, the shortcomings of chalk-and-talk teaching, teacher-student ratios, and the goal of building a system that addresses career-development needs through AI [165-180].
MAJOR DISCUSSION POINT
AI solutions built by participating NGOs
Argument 2
Difficulty translating human signals into AI responses; need for behavior‑science expertise (Steven Suting)
EXPLANATION
Steven notes that human conversations contain nuanced signals (tone, speed, confidence) that are hard for bots to capture. He argues that solving this requires behavior‑science expertise to design systems that can interpret and respond appropriately.
EVIDENCE
He explains the challenge of extracting signals from conversations, the shift from a software problem to a behavior-science problem, and the need for specialists to interpret these signals [258-266].
MAJOR DISCUSSION POINT
Implementation challenges and technical issues
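One hedged way to picture the signal-extraction problem Steven describes is to compute crude textual proxies for speed and confidence. The word list and threshold below are invented placeholders; as he argues, a real system would need behavior-science expertise to choose and validate such signals.

```python
# Sketch: extract rough conversational signals (length, reply speed, hedging)
# from a single message. All cutoffs and word lists are illustrative only.
import re

HEDGES = {"maybe", "perhaps", "guess", "not sure", "i think"}  # hypothetical

def signal_features(message: str, seconds_to_reply: float) -> dict:
    text = message.lower()
    words = re.findall(r"[a-z']+", text)
    hedge_hits = sum(1 for h in HEDGES if h in text)
    return {
        "length_words": len(words),
        "reply_speed": "slow" if seconds_to_reply > 30 else "fast",
        "hedging": hedge_hits,   # more hedging phrases -> lower apparent confidence
    }

print(signal_features("I think maybe option B?", seconds_to_reply=45))
```

Even this toy version shows why the problem shifts from software to behavior science: deciding what a slow reply or a hedge actually means for a given learner is a modeling question, not a coding one.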
Argument 3
Use of slur‑list and guardrail plugins to prevent inappropriate outputs (Steven Suting)
EXPLANATION
Steven describes integrating safety mechanisms such as slur lists and guardrail plugins into the chatbot to block undesirable language and ensure appropriate behavior. These tools were considered during design and testing phases.
EVIDENCE
He references the creation of a slur list, crowdsourced guardrails, and how these plugins were incorporated to manage bot responses [324-334].
MAJOR DISCUSSION POINT
Embedding responsible AI and safety
AGREED WITH
Erica Arya, Manohar Sreekanth
DISAGREED WITH
Erica Arya
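A minimal sketch of an output guardrail in the spirit of the slur list described above. The blocked words and fallback message are harmless placeholders; a production filter would use the curated, crowdsourced list the speakers mention plus more robust matching (stemming, obfuscated spellings).

```python
# Illustrative output filter: check each bot reply against a blocklist before
# sending, and replace flagged replies wholesale with a safe fallback.
import re

BLOCKLIST = {"badword", "worseword"}   # placeholder entries, not a real list
SAFE_FALLBACK = "Sorry, I can't respond to that. Let's get back to the lesson."

def guard_output(bot_reply: str) -> str:
    tokens = set(re.findall(r"[a-z']+", bot_reply.lower()))
    if tokens & BLOCKLIST:
        return SAFE_FALLBACK   # block and replace; don't try to edit in place
    return bot_reply

print(guard_output("Here is a study tip."))
```

Replacing rather than editing a flagged reply is the conservative choice: it avoids partially sanitized outputs that may still carry inappropriate meaning.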
Argument 4
Peer learning accelerated problem solving and avoided reinventing solutions (Steven Suting)
EXPLANATION
Steven highlights that having a shared blueprint and modular prototyping allowed the team to fail fast and learn quickly. Interaction with other cohort members helped avoid duplicated effort and sped up solution development.
EVIDENCE
He mentions using a blueprint to rapid-prototype, learning faster through fail-fast cycles, and observing how cohort peers solved similar problems, which accelerated their own progress [319-322] and [350-359].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Peers sharing blueprints and rapid-prototype cycles accelerated development and prevented duplicated effort, as described in the cohort outcomes [S1][S2].
MAJOR DISCUSSION POINT
Value of cohort collaboration and knowledge sharing
AGREED WITH
Erica Arya, Temina Madon, Manohar Sreekanth
Argument 5
Avoid knee‑jerk AI adoption; assess needs before building (Steven Suting)
EXPLANATION
Steven cautions that NGOs often jump on AI trends to impress funders, which can lead to redundant solutions. He recommends pausing to evaluate real needs before committing to AI development.
EVIDENCE
He describes the pressure to adopt AI for visibility, the tendency to build duplicate solutions, and the importance of assessing needs first [405-408].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The recommendation to assess real needs before adopting AI aligns with guidance to avoid building unnecessary or “sexy” solutions [S1].
MAJOR DISCUSSION POINT
Practical advice for NGOs embarking on AI
AGREED WITH
Pritam Sukumar, Erica Arya
Pritam Sukumar
3 arguments · 186 words per minute · 774 words · 249 seconds
Argument 1
AI‑generated student‑report scripts to aid teacher‑student mentorship (Pritam Sukumar)
EXPLANATION
Pritam explains that their AI aggregates student performance data to generate a concise script that guides teachers during mentorship conversations and provides online learners with actionable feedback. This replaces manual report review and aims to improve mentorship efficiency.
EVIDENCE
He details the collection of test scores, attendance, and chapter strengths, and how the AI creates a script used by teachers and online learners [188-200].
MAJOR DISCUSSION POINT
AI solutions built by participating NGOs
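To illustrate the data-to-script step in general terms, the sketch below assembles student records into a structured prompt for a language model. All field names and the template wording are hypothetical, and no particular model or API is implied.

```python
# Sketch: aggregate test scores, attendance, and chapter strengths into a
# single structured prompt that an LLM could turn into a mentorship script.

def build_report_prompt(student: dict) -> str:
    strengths = ", ".join(student["strong_chapters"]) or "none noted"
    weaknesses = ", ".join(student["weak_chapters"]) or "none noted"
    return (
        "Write a short, encouraging mentorship script for a teacher.\n"
        f"Student: {student['name']}\n"
        f"Average test score: {student['avg_score']}%\n"
        f"Attendance: {student['attendance']}%\n"
        f"Strong chapters: {strengths}\n"
        f"Needs work: {weaknesses}\n"
        "Keep it under 120 words and end with one concrete next step."
    )

prompt = build_report_prompt({
    "name": "Asha", "avg_score": 72, "attendance": 88,
    "strong_chapters": ["Fractions"], "weak_chapters": ["Geometry"],
})
print(prompt)
```

Putting the facts in a fixed template, rather than pasting raw records, keeps the model grounded in the same numbers the teacher sees, which also makes downstream consistency checks easier.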
Argument 2
LLM hallucinations causing incorrect performance statements (Pritam Sukumar)
EXPLANATION
Pritam reports that the language model sometimes produces factual errors, such as stating a performance metric decreased when it actually increased, despite correct numbers. This issue requires prompt‑engineering to correct.
EVIDENCE
He notes that hallucinations produce correct numbers but wrong descriptors (e.g., “decreased” instead of “increased”) and that the team is working on prompt engineering to fix this [209-212].
MAJOR DISCUSSION POINT
Implementation challenges and technical issues
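One generic mitigation for this failure mode, alongside prompt engineering, is a post-generation consistency check that flags trend words contradicting the numbers. The word lists below are illustrative only, not drawn from the team's actual pipeline.

```python
# Sketch: verify that a generated sentence's trend word ("increased" vs
# "decreased") matches the actual direction of change, and flag mismatches
# for regeneration instead of sending them to teachers.

UP_WORDS = {"increased", "improved", "rose"}
DOWN_WORDS = {"decreased", "declined", "dropped", "fell"}

def trend_consistent(prev: float, curr: float, generated_text: str) -> bool:
    text = generated_text.lower()
    went_up = curr > prev
    says_up = any(w in text for w in UP_WORDS)
    says_down = any(w in text for w in DOWN_WORDS)
    if went_up and says_down:
        return False
    if not went_up and curr != prev and says_up:
        return False
    return True   # consistent, or no trend word to check

# Score rose 60 -> 75, but the model wrote "decreased": flag it.
print(trend_consistent(60, 75, "The student's score decreased to 75."))  # -> False
```

Because the model gets the numbers right and only the descriptor wrong, a cheap rule-based check like this can catch exactly the hallucination pattern reported, without needing a second model call.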
Argument 3
Start from concrete pain points, not from “AI for its own sake” (Pritam Sukumar)
EXPLANATION
Pritam advises NGOs to first identify operational pain points and then evaluate whether AI is a suitable solution, rather than pursuing AI for its novelty. He emphasizes that many routine tasks, like grant writing or coding, can benefit from AI.
EVIDENCE
He states that NGOs should stop looking for AI use cases and instead focus on existing inefficiencies, citing examples such as fundraising and code generation [400-402].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Advice to focus on genuine operational pain points rather than pursuing AI for novelty is echoed in the cohort discussion [S1].
MAJOR DISCUSSION POINT
Practical advice for NGOs embarking on AI
AGREED WITH
Steven Suting, Erica Arya
Elizabeth
1 argument · 163 words per minute · 305 words · 111 seconds
Argument 1
Role of AI assistant in panel moderation (Elizabeth)
EXPLANATION
Elizabeth acts as a digital assistant, responding to the moderator’s prompts, adjusting volume, and introducing panelists on demand. Her real‑time assistance demonstrates how AI can support event facilitation.
EVIDENCE
She greets the moderator, offers to help, and confirms readiness to assist with introductions and pacing throughout the opening segment [8-13] and later confirms she can speak louder when asked [22-23].
MAJOR DISCUSSION POINT
Role of AI assistant in panel moderation
Agreements
Agreement Points
The cohort model provides hands‑on mentorship, peer learning and collaborative culture that helps NGOs overcome isolation and share solutions.
Speakers: Erica Arya, Temina Madon, Steven Suting, Manohar Sreekanth
Need for cohort model to provide hands‑on support and peer learning (Erica Arya)
Cohort approach inspired by VC incubators; pooling technical resources for NGOs (Temina Madon)
Peer learning accelerated problem solving and avoided reinventing solutions (Steven Suting)
Program structure: open application, screening, mentorship, leadership buy‑in (Manohar Sreekanth)
Erica describes how cohort-based programs enable hands-on mentorship and peer learning among a small group of NGOs [68-74]. Temina explains that the cohort philosophy, drawn from VC incubators, brings founders together and creates a shared pool of technical staff [110-118][123-129]. Steven highlights that sharing blueprints and modular prototyping within the cohort let participants fail fast and learn from each other, avoiding duplicated effort [319-322][350-359]. Manohar outlines the program’s operational design – open applications, screening, free pilot, mentorship and leadership buy-in – which underpins the collaborative environment [84-93].
POLICY CONTEXT (KNOWLEDGE BASE)
The cohort‑style mentorship aligns with UN learning coordination findings that emphasize face‑to‑face and peer learning methods (S34) and with AI‑for‑nonprofits programmes that deliberately embed mentorship to overcome barriers (S36). Cooperative AI governance models also highlight shared learning communities (S47).
Embedding responsible AI and safety mechanisms from the outset, using knowledge partners and guard‑rail tools.
Speakers: Erica Arya, Steven Suting, Manohar Sreekanth
Partnership with Digital Future Labs and Statle to integrate responsible AI from the start (Erica Arya)
Use of slur‑list and guardrail plugins to prevent inappropriate outputs (Steven Suting)
Mention of slur‑list and guardrail plugins during discussion (Manohar Sreekanth)
Erica notes that the cohort partnered with Digital Future Labs for responsible AI integration and Statle for AI safety, embedding these principles from day one [300-304]. Steven describes incorporating a slur-list and other guard-rail plugins to keep the chatbot’s language appropriate [324-334]. Manohar also references these guard-rail mechanisms while summarising the program’s safety design [324-334].
POLICY CONTEXT (KNOWLEDGE BASE)
This approach reflects AI governance guidance that calls for principle‑level policies combined with practical guardrails (S42) and the availability of concrete guardrail toolkits such as human‑in‑the‑loop risk assessment (S43, S44). It also matches capacity‑building recommendations that promote partnerships with knowledge experts (S38).
Advice to start from concrete organisational pain points and assess existing solutions before building custom AI applications.
Speakers: Pritam Sukumar, Steven Suting, Erica Arya
Start from concrete pain points, not from “AI for its own sake” (Pritam Sukumar)
Avoid knee‑jerk AI adoption; assess needs before building (Steven Suting)
Prefer open‑source or existing platforms that meet most needs before custom development (Erica Arya)
Pritam urges NGOs to identify operational pain points first and then consider AI, rather than chasing AI for novelty [400-402]. Steven cautions against jumping on AI trends to impress funders and recommends a needs assessment before development [405-408]. Erica advises evaluating open-source tools that satisfy 60-70% of requirements before embarking on custom builds [426-433].
POLICY CONTEXT (KNOWLEDGE BASE)
Best‑practice recommendations for NGOs stress needs‑driven assessments before custom AI development, as outlined in AI‑for‑nonprofits programmes targeting first‑step adopters (S36) and in evidence‑based policy design discussions (S51).
The importance of systematic evaluation and impact measurement throughout the program.
Speakers: Temina Madon, Manohar Sreekanth, Pritam Sukumar
Four‑level evaluation framework (user, monitoring, impact) to ensure responsible outcomes (Temina Madon)
Program structure includes evaluation of use case, leadership buy‑in and pilot learning (Manohar Sreekanth)
Feedback loops and monitoring of hallucinations to improve the AI solution (Pritam Sukumar)
Temina outlines a four-stage evaluation process (user evaluation, monitoring, impact assessment) to continuously learn and demonstrate societal impact [270-277]. Manohar describes the program’s evaluation steps: screening, resource commitment, leadership buy-in and pilot learning to generate insights for the cohort [84-93]. Pritam reports ongoing monitoring of LLM hallucinations and iterative prompt-engineering based on feedback [209-214].
POLICY CONTEXT (KNOWLEDGE BASE)
Systematic impact measurement is consistent with longitudinal evaluation designs recommended for technology grants (S46) and with the four‑level evaluation framework advocated in capacity‑building dialogues (S32).
Resource constraints for NGOs and the benefit of shared technical resources or pools.
Speakers: Temina Madon, Pritam Sukumar
Pooling technical resources for NGOs (Temina Madon)
Need for more AI engineers to address hallucination issues (Pritam Sukumar)
Temina explains that many NGOs cannot afford full-time AI engineers, so the Agency Fund creates a pool of ten technical staff and product managers to support them [123-129]. Pritam highlights that the lack of AI engineering capacity leads to challenges such as LLM hallucinations, indicating a need for additional AI engineers [225-227].
POLICY CONTEXT (KNOWLEDGE BASE)
Financial and technical constraints for civil society have been highlighted in multiple forums (S39) and the benefits of resource pooling are documented in NGO implementation manuals (S41). Cooperative AI infrastructure models further endorse shared pools (S47).
Similar Viewpoints
Both stress that responsible AI safeguards must be built into the solution design rather than added later, using external expertise and technical guard‑rails [300-304][324-334].
Speakers: Erica Arya, Steven Suting
Partnership with Digital Future Labs and Statle to integrate responsible AI from the start (Erica Arya)
Use of slur‑list and guardrail plugins to prevent inappropriate outputs (Steven Suting)
Both view the cohort model as essential for providing mentorship, sharing resources, and reducing isolation among NGOs [68-74][110-118].
Speakers: Temina Madon, Erica Arya
Cohort approach inspired by VC incubators; pooling technical resources for NGOs (Temina Madon)
Need for cohort model to provide hands‑on support and peer learning (Erica Arya)
Both caution against adopting AI without a clear problem definition, urging NGOs to focus on real operational pain points first [405-408][400-402].
Speakers: Steven Suting, Pritam Sukumar
Avoid knee‑jerk AI adoption; assess needs before building (Steven Suting)
Start from concrete pain points, not from “AI for its own sake” (Pritam Sukumar)
Both highlight the necessity of implementing conversation guardrails and safety mechanisms to ensure reliable chatbot interactions [241-245][324-334].
Speakers: Mainak Roy, Steven Suting
Bot failing to recognize initial “hi” and need for conversation guardrails (Mainak Roy)
Use of slur‑list and guardrail plugins to prevent inappropriate outputs (Steven Suting)
Unexpected Consensus
Both a funder (Temina) and a beneficiary NGO (Pritam) explicitly call for a shared pool of AI engineering talent to address capacity gaps.
Speakers: Temina Madon, Pritam Sukumar
Pooling technical resources for NGOs (Temina Madon)
Need for more AI engineers to address hallucination issues (Pritam Sukumar)
It is notable that a funder’s strategy of creating a shared technical staff pool aligns directly with an NGO’s expressed shortage of AI engineers, indicating a convergence of perspectives across funding and implementation sides [123-129][225-227].
POLICY CONTEXT (KNOWLEDGE BASE)
Calls for pooled AI engineering talent echo NGO guidance on resource pooling (S41) and broader AI policy discussions that stress shared talent pools to bridge skill shortages (S53, S47).
Overall Assessment

The panel shows strong convergence on the value of cohort‑based programs for mentorship, peer learning and collaborative solution building; on embedding responsible AI safeguards from the outset; on the need to ground AI projects in real organisational pain points and to leverage existing open‑source tools; and on systematic evaluation and shared technical resources to overcome capacity constraints.

High consensus across funders, program designers and implementing NGOs, suggesting that future AI for development initiatives should institutionalise cohort models, responsible‑AI partnerships, needs‑driven design and shared technical pools to maximise impact.

Differences
Different Viewpoints
How to provide technical capacity to NGOs – shared pool of technical staff versus hiring dedicated AI engineers (or building a "golden set")
Speakers: Temina Madon, Mainak Roy, Pritam Sukumar
Cohort approach inspired by VC incubators; pooling technical resources for NGOs (Temina Madon)
Need for more AI engineers and a “golden set” to ensure AI outputs are aligned (Mainak Roy)
Solution needs more AI engineers; current hallucinations indicate lack of expertise (Pritam Sukumar)
Temina proposes a shared pool of ten technical staff and product managers to support funded NGOs, arguing that this collective resource model is efficient for NGOs that cannot afford full-time AI engineers [123-129]. In contrast, Mainak stresses that scaling to thousands of users requires additional AI engineers and a curated “golden set” of data to improve model reliability [418-419], while Pritam reports ongoing hallucination problems and explicitly states the need for more AI engineers to fix them [225-227]. The speakers agree on the need for technical capacity but disagree on the optimal provisioning model.
POLICY CONTEXT (KNOWLEDGE BASE)
The tension between ecosystem‑wide capacity building and dedicated staffing mirrors debates in structural transformation forums over ecosystem mapping versus local capacity building (S32) and differing demand‑driven strategies for AI adoption in emerging markets (S49).
Approach to evaluation and impact measurement – four‑level evaluation framework versus a program design focused on mentorship and leadership buy‑in without explicit evaluation stages
Speakers: Temina Madon, Manohar Sreekanth
Four‑level evaluation framework (user, monitoring, impact) to ensure responsible outcomes (Temina Madon)
Program structure emphasizing open application, screening, mentorship and leadership buy‑in (Manohar Sreekanth)
Temina outlines a detailed four-stage evaluation process (user evaluation, monitoring, impact assessment) as essential for learning and demonstrating societal impact [270-277]. Manohar describes the cohort’s operational design (open application, screening, mentorship, leadership buy-in) but does not mention a comparable systematic evaluation framework [84-93]. Both aim for effective program outcomes, yet they differ on the importance and inclusion of formal evaluation mechanisms.
POLICY CONTEXT (KNOWLEDGE BASE)
The four‑level framework is documented in capacity‑building literature (S32), while mentorship‑centric designs without formal metrics are characteristic of early‑stage nonprofit AI programmes (S36).
Methods for embedding responsible AI and safety – partnership with knowledge partners versus in‑house guardrail plugins
Speakers: Erica Arya, Steven Suting
Partnership with Digital Future Labs and Statle to integrate responsible AI from the start (Erica Arya)
Use of slur‑list and guardrail plugins to prevent inappropriate outputs (Steven Suting)
Erica emphasizes collaborating with external knowledge partners (Digital Future Labs for responsible AI and Statle for AI safety) to embed ethical considerations from day one [300-304]. Steven focuses on technical safeguards such as slur-lists and guardrail plugins integrated into the chatbot to ensure appropriate behavior [324-334]. Both seek responsible AI but propose different implementation pathways: external partnership versus internal tooling.
POLICY CONTEXT (KNOWLEDGE BASE)
Policy discussions advocate hybrid models that combine external expertise with internal guardrails (S42) and reference existing guardrail toolkits for safe AI deployment (S43, S44).
Unexpected Differences
Resource provisioning model – shared pool versus hiring dedicated engineers
Speakers: Temina Madon, Mainak Roy, Pritam Sukumar
Cohort approach inspired by VC incubators; pooling technical resources for NGOs (Temina Madon)
Need for more AI engineers and a “golden set” to ensure AI outputs are aligned (Mainak Roy)
Solution needs more AI engineers; current hallucinations indicate lack of expertise (Pritam Sukumar)
Given that all speakers operate within the same cohort ecosystem, it is surprising that they propose contrasting solutions to the same capacity constraint: Temina’s shared‑resource pool versus Mainak’s call for additional dedicated engineers and a curated data set. This divergence was not anticipated from the initial description of the cohort’s collaborative intent.
POLICY CONTEXT (KNOWLEDGE BASE)
The shared‑pool versus dedicated‑staff debate is reflected in AI ecosystem mapping dialogues that weigh pooling against bespoke staffing (S32) and in cooperative infrastructure approaches that favor shared resources (S47).
Evaluation emphasis – structured four‑level framework versus implicit mentorship focus
Speakers: Temina Madon, Manohar Sreekanth
Four‑level evaluation framework (user, monitoring, impact) to ensure responsible outcomes (Temina Madon)
Program structure emphasizing open application, screening, mentorship and leadership buy‑in (Manohar Sreekanth)
While both aim for effective program outcomes, the lack of an explicit evaluation component in Manohar’s description contrasts sharply with Temina’s detailed evaluation stages, an unexpected gap given the program’s emphasis on learning and impact.
POLICY CONTEXT (KNOWLEDGE BASE)
The contrast aligns with the four‑level evaluation model presented in capacity‑building frameworks (S32) and the mentorship‑focused pilot designs that rely on informal impact tracking (S36).
Overall Assessment

The panel largely converged on the value of cohort‑based programs, open‑source solutions, and responsible AI. However, key disagreements emerged around how to provision technical capacity (shared pool vs hiring engineers), how to embed evaluation and impact measurement, and the preferred mechanisms for ensuring AI safety (external partnerships vs internal guardrails). These divergences reflect differing operational priorities and experiences among funders, technologists, and NGOs.

Moderate – while there is broad consensus on goals (effective, ethical AI for NGOs), the participants differ on implementation pathways. This suggests that future cohort designs will need to reconcile resource models, evaluation frameworks, and safety strategies to achieve cohesive outcomes.

Partial Agreements
Both agree that NGOs should not adopt AI merely to follow trends, but they differ in emphasis: Erica stresses evaluating existing open‑source solutions and customizing only where gaps exist [426-433], while Steven highlights the pressure from funders and the risk of duplicate solutions, urging a pause for needs assessment [405-408].
Speakers: Erica Arya, Steven Suting
Avoid knee‑jerk AI adoption; assess needs before building (Erica Arya) Avoid knee‑jerk AI adoption; assess needs before building (Steven Suting)
Both advocate grounding AI projects in real needs rather than novelty. Erica recommends using open‑source platforms that satisfy 60‑70 % of requirements before building custom solutions [426-433], whereas Pritam advises identifying operational pain points first and then evaluating AI suitability [400-402]. Their goals align, but the former focuses on technology selection while the latter emphasizes problem identification.
Speakers: Erica Arya, Pritam Sukumar
Prefer open‑source or existing platforms that meet most needs before custom development (Erica Arya) Start from concrete pain points, not from “AI for its own sake” (Pritam Sukumar)
Takeaways
Key takeaways
A cohort‑based AI program provides hands‑on mentorship, peer learning, and shared technical resources, making AI adoption feasible for NGOs with limited engineering capacity. Embedding responsible AI and safety considerations from the start (via partners Digital Future Labs and Statle) helps avoid later ethical and quality issues. Collaboration among NGOs revealed overlapping use cases (e.g., high‑risk pregnancy prediction) and enabled joint development, reducing duplicated effort. Practical AI solutions built during the cohort included a WhatsApp chatbot for teacher guidance, a personalized learning/behavior‑nudging platform for youth, and AI‑generated student‑report scripts for mentorship. Common implementation challenges were bot interaction glitches, LLM hallucinations, and difficulty translating human behavioural signals into AI responses. NGOs learned to start from concrete pain points rather than “AI for its own sake,” to leverage existing open‑source tools before building custom solutions, and to incorporate evaluation frameworks (user evaluation, monitoring, impact evaluation, societal outcomes) for continuous learning. The AI assistant (Elizabeth) successfully supported real‑time moderation and panelist introductions, demonstrating the utility of conversational agents in event facilitation.
Resolutions and action items
NGOs will continue iterating on their prototypes (e.g., fixing the ‘hi’ detection in the Simple Teacher Buddy and reducing LLM hallucinations in the student‑report script). Project Tech4Dev will maintain the mentorship pool (technical staff and product managers) for ongoing support to the cohort NGOs. Participating NGOs agreed to adopt the four‑level evaluation framework (user evaluation, monitoring, impact evaluation, societal outcomes) for future AI deployments. Organizations will prioritize use of open‑source platforms and existing tools, adding custom layers only where gaps remain. Documentation, blogs, and program materials will be shared publicly for broader community learning.
Unresolved issues
How to systematically eliminate LLM hallucinations and establish a reliable “golden dataset” for validation remains an open technical problem. The optimal method for capturing nuanced human signals (tone, confidence, etc.) in AI‑driven behavior‑nudging platforms is still under investigation. Scalability of the solutions (e.g., extending the chatbot to thousands of teachers) and the required additional AI engineering resources have not been fully resolved. Decision criteria for build‑vs‑buy (e.g., when to adopt a large‑language‑model service like Gemini/Claude versus an in‑house solution) were raised but not concluded.
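The “golden dataset” idea flagged above as an open problem can be pictured as a simple regression check run against curated reference answers. The sketch below is illustrative only; all names (`golden_set`, `generate_script`, `similarity`) are hypothetical, and a real pipeline would score outputs with an embedding model or rubric-based grading rather than lexical similarity.

```python
# Minimal sketch of a "golden dataset" regression check for LLM outputs.
# Names and data here are hypothetical; a production setup would use
# semantic similarity or human-authored rubrics instead of difflib.
from difflib import SequenceMatcher

golden_set = [  # curated prompt -> trusted reference answer pairs
    {"prompt": "Explain place value to a grade-3 class",
     "reference": "Use bundles of ten sticks to show tens and ones."},
]

def similarity(a: str, b: str) -> float:
    """Crude lexical similarity in [0, 1]; stand-in for semantic scoring."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def validate(generate_script, threshold: float = 0.6) -> list[dict]:
    """Run the model over the golden set and flag low-similarity outputs."""
    failures = []
    for case in golden_set:
        output = generate_script(case["prompt"])
        score = similarity(output, case["reference"])
        if score < threshold:
            failures.append({"prompt": case["prompt"], "score": score})
    return failures
```

Each deployment would re-run `validate` after prompt or model changes, treating any new failure as a hallucination regression to investigate.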
Suggested compromises
Pooling part‑time technical staff and product managers across NGOs instead of each NGO hiring full‑time AI engineers. Using a cohort model that combines funding, mentorship, and shared resources to lower the barrier for NGOs lacking internal capacity. Adapting the chatbot to treat any first teacher message as a greeting, mitigating the strict ‘hi’ requirement. Incorporating guard‑rail plugins and slur‑lists as lightweight safety layers rather than building extensive custom safety systems.
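The greeting workaround in the compromises above can be sketched in a few lines: rather than requiring a literal “hi”, the bot treats any first message from an unseen sender as the session opener. This is a minimal illustration, not the cohort’s actual code; the function and variable names (`sessions`, `handle_message`, `answer_question`) are invented for the example.

```python
# Sketch of the greeting workaround: any first message from an unseen
# teacher opens the session, regardless of wording. All names are
# illustrative, not taken from the Simple Teacher Buddy codebase.
sessions: dict[str, list[str]] = {}  # phone number -> message history

def handle_message(sender: str, text: str) -> str:
    history = sessions.setdefault(sender, [])
    if not history:  # first contact: greet instead of demanding "hi"
        history.append(text)
        return "Hello! I'm your teaching assistant. How can I help today?"
    history.append(text)
    return answer_question(text)  # hand off to the normal Q&A flow

def answer_question(text: str) -> str:
    """Placeholder for the chatbot's pedagogical-advice pipeline."""
    return f"Let me find evidence-based guidance on: {text}"
```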
Thought Provoking Comments
Cohort programs let us learn deeply from a small set of NGOs, feeding those learnings back into our platforms and enabling peer learning and shared resources.
Highlights the strategic value of cohort models for iterative product development and community building, moving beyond one‑off consulting.
Set the foundation for the discussion on why both Project Tech4Dev and Agency Fund chose a cohort approach; prompted Tamina to compare with startup accelerators.
Speaker: Erica Arya
We bring founders together early in their journey because the journey is lonely and fraught with failures; a cohort provides shared resources like a pool of 10 technical staff and product managers that NGOs can tap into.
Introduces the novel idea of resource pooling for NGOs, mirroring Silicon Valley accelerator practices, and points out the scarcity of product management in the social sector.
Shifted the conversation to funding models and the practical mechanisms (shared staff) that make the cohort work; influenced later discussion on evaluation frameworks.
Speaker: Tamina Madon
We chose WhatsApp as the channel because it’s already part of teachers’ daily workflow, reducing adoption friction, and built a chatbot to give evidence‑based pedagogical advice.
Demonstrates a pragmatic, context‑aware technology choice that leverages existing habits rather than forcing new tools.
Introduced a concrete use‑case that grounded the abstract discussion; later led to talk about bot interaction challenges and guardrails.
Speaker: Mainak Roy
When we try to emulate a human teacher, the problem becomes a behavior‑science challenge rather than just a software problem; we need to decode signals like tone and confidence.
Broadens the scope from technical implementation to interdisciplinary design, emphasizing the complexity of human‑centered AI.
Created a turning point that moved the dialogue from pure tech solutions to the need for behavioral insights and responsible AI considerations.
Speaker: Steven Suting
We think of evaluation in four levels – user evaluation, monitoring, impact evaluation, and finally proving societal outcomes – which is rarely seen in the private sector.
Offers a comprehensive framework for continuous learning and impact measurement, highlighting a key difference between social and for‑profit sectors.
Prompted participants to reflect on how they assess their solutions; reinforced the importance of embedding evaluation early in the program.
Speaker: Tamina Madon
We partnered with Digital Future Labs and Statle to embed responsible AI and AI safety principles from day one, rather than as an afterthought.
Shows proactive integration of ethics and safety into the development lifecycle, a practice still uncommon in many AI projects.
Steered the conversation toward ethical design, leading to mentions of guardrails, slur lists, and how teams incorporated them.
Speaker: Erica Arya
Stop looking for AI use‑cases first; start with existing pain points and see if AI is a good fit. AI should solve real inefficiencies, not be added for its novelty.
Counters the hype‑driven approach, urging NGOs to be needs‑driven and realistic about AI’s role.
Reoriented the panel’s mindset, influencing later remarks about avoiding “sexy” solutions and focusing on scalability and trust.
Speaker: Pritam Sukumar
Don’t reinvent the wheel; start with open‑source platforms that meet 60‑70 % of your needs and then customize only what’s missing.
Provides a practical, cost‑effective strategy for NGOs, emphasizing reuse over custom development.
Closed the discussion with actionable advice, reinforcing earlier points about resource efficiency and collaborative development.
Speaker: Erica Arya
Overall Assessment

These pivotal comments collectively shaped the panel from a high‑level overview of the AI cohort program to a nuanced exploration of its underlying philosophy, practical implementation, and ethical considerations. Erica’s and Tamina’s early insights established the cohort’s learning‑centric purpose, while Tamina’s resource‑pooling model and evaluation framework introduced concrete mechanisms. Mainak’s WhatsApp example grounded the talk in real‑world constraints, and Steven’s shift to behavior‑science highlighted interdisciplinary challenges. Subsequent remarks on responsible AI, the pitfalls of hype, and leveraging open‑source solutions deepened the conversation, steering participants toward a pragmatic, impact‑focused, and ethically aware approach to AI in the nonprofit sector.

Follow-up Questions
Did you evaluate Gemini or Claude models? What was the build vs. buy decision?
Understanding whether the NGOs chose existing large language models or built custom solutions helps inform resource allocation and technology strategy for future projects.
Speaker: Manohar Sreekanth
What were the on‑the‑ground challenges faced when implementing the WhatsApp chatbot?
Identifying practical deployment issues (e.g., greeting detection, question scope) is crucial for improving chatbot reliability and user adoption in remote education settings.
Speaker: Manohar Sreekanth (to Mainak Roy)
How many users are currently using the AI solution, and is it in pilot or rollout phase?
Clarifying adoption scale and program stage is needed to assess impact, plan scaling, and allocate further support.
Speaker: Manohar Sreekanth (to Pritam Sukumar)
What are the key learnings or surprising insights from the AI cohort program?
Capturing lessons learned helps other nonprofits avoid pitfalls and replicate successful practices in their own AI journeys.
Speaker: Manohar Sreekanth (to Pritam Sukumar)
Did any participants use guard‑rail plugins or slur‑list tools for AI safety?
Knowing whether safety mechanisms were adopted informs best‑practice guidelines for responsible AI deployment in the sector.
Speaker: Manohar Sreekanth
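A slur-list guardrail of the kind asked about here is, at its simplest, a deny-list filter applied to model output before it reaches users. The sketch below assumes placeholder tokens in place of a real curated list; production guardrails typically layer such filters with classifier-based safety checks.

```python
# Lightweight guardrail sketch: redact deny-listed words from model
# output before delivery. DENY_LIST holds placeholder tokens; a real
# deployment would load a curated, language-specific slur list.
import re

DENY_LIST = {"slur1", "slur2"}  # placeholders for a curated slur list

def guard(text: str) -> tuple[str, bool]:
    """Redact deny-listed words; return (clean_text, was_flagged)."""
    flagged = False

    def redact(match: re.Match) -> str:
        nonlocal flagged
        flagged = True
        return "[removed]"

    pattern = re.compile(
        r"\b(" + "|".join(map(re.escape, DENY_LIST)) + r")\b",
        re.IGNORECASE,
    )
    return pattern.sub(redact, text), flagged
```

The `was_flagged` bit lets the caller log incidents or escalate to a human reviewer rather than silently redacting.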
How can LLM hallucinations be mitigated in educational AI tools, especially when generating teacher‑student scripts?
Hallucinations undermine trust; research into prompt engineering, validation datasets, or post‑processing is needed to ensure reliable outputs.
Speaker: Pritam Sukumar
What is the comparative effectiveness of a cohort‑based model versus direct funding for NGO AI projects?
Evaluating which model yields better learning, impact, and scalability can guide funders and program designers in future initiatives.
Speaker: Tamina Madon, Erica Arya
What role should product managers play in NGOs developing AI solutions, and how can this gap be addressed?
Product management expertise is scarce in nonprofits; studying how to integrate this function could improve product design and impact.
Speaker: Tamina Madon
How can the four‑level evaluation framework (user evaluation, monitoring, impact evaluation, societal outcomes) be operationalized for AI interventions in the social sector?
A systematic framework is needed to measure outcomes, iterate, and demonstrate impact at scale.
Speaker: Tamina Madon
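One way to start operationalizing the four-level framework is to track evidence per level for each deployment and surface the levels still missing evidence. The structure below is a hypothetical sketch (field names like `EvaluationPlan` and `gaps` are invented), not a framework the panel prescribed.

```python
# Sketch: per-deployment evidence tracker for the four evaluation levels
# (user evaluation, monitoring, impact evaluation, societal outcomes).
# Class and method names are hypothetical.
from dataclasses import dataclass, field

LEVELS = ["user_evaluation", "monitoring",
          "impact_evaluation", "societal_outcomes"]

@dataclass
class EvaluationPlan:
    deployment: str
    evidence: dict = field(
        default_factory=lambda: {level: [] for level in LEVELS})

    def log(self, level: str, note: str) -> None:
        """Record a piece of evidence (survey, dashboard, study) for a level."""
        if level not in self.evidence:
            raise ValueError(f"unknown level: {level}")
        self.evidence[level].append(note)

    def gaps(self) -> list[str]:
        """Levels with no recorded evidence yet, in framework order."""
        return [level for level in LEVELS if not self.evidence[level]]
```

An NGO could review `gaps()` at each program milestone so later levels (impact, societal outcomes) are planned early rather than bolted on.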
What are the advantages and limitations of using WhatsApp as the delivery channel for AI chatbots in remote education contexts?
WhatsApp’s ubiquity is promising, but technical and usability constraints need systematic study to optimize effectiveness.
Speaker: Mainak Roy
How can a ‘golden set’ of reference data be created and validated to ensure AI output quality for education use cases?
A curated benchmark can reduce errors and improve consistency; research is needed on its design, maintenance, and impact.
Speaker: Mainak Roy
What are the measurable benefits of cross‑organization collaboration among NGOs working on similar AI use cases?
Understanding how sharing models, data, and lessons reduces duplication and accelerates innovation can inform ecosystem‑level strategies.
Speaker: Erica Arya, Steven Suting
How does AI adoption maturity differ between Indian NGOs and NGOs in the U.S. or other regions, and what implications does this have for program design?
Comparative analysis can reveal contextual barriers and inform tailored support mechanisms for different markets.
Speaker: Tamina Madon
How should responsible AI frameworks be adapted for different learner age groups (e.g., 25‑year‑olds vs. grade‑8 students)?
Age‑specific ethical considerations affect data handling, feedback, and behavior‑change strategies; research is needed to define appropriate safeguards.
Speaker: Steven Suting
What is the overall impact of AI tools on teacher workload, trust, and instructional quality in nonprofit education programs?
While time savings are reported, hallucinations erode trust; systematic impact evaluation is required to balance benefits and risks.
Speaker: Pritam Sukumar
How effective is the mentorship model within AI cohort programs at overcoming technical and organizational barriers for NGOs?
Assessing mentorship outcomes can reveal best practices and areas for improvement in capacity‑building initiatives.
Speaker: Manohar Sreekanth
What are the best practices for integrating open‑source components (e.g., Superset, other libraries) into AI platforms for NGOs?
Understanding modular integration strategies can reduce development effort and promote sustainable, maintainable solutions.
Speaker: Erica Arya

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

How AI Is Transforming India's Workforce for Global Competitiveness

How AI Is Transforming India's Workforce for Global Competitiveness

Session at a glance: Summary, keypoints, and speakers overview

Summary

The panel convened to examine how artificial intelligence is reshaping the workforce in India and the UK, focusing on both disruption and opportunity [2][8-9]. Organisers split the discussion into three parts: identifying the nature of disruption, exploring skill requirements, and considering policy and education responses [13-16].


Srikrishna argued that AI capability is rapidly expanding and already displacing large chunks of work, especially in software engineering, which he now sees as the most affected area compared with testing or infrastructure [23-28][34-37][39-42]. He warned that infrastructure-related roles are on a plateau, while coding is becoming increasingly abstracted and cost-free, turning code into a commodity [39-42][160-166].


For fresh graduates, Srikrishna highlighted enormous opportunities but stressed the need to acquire a new skill set that goes beyond traditional coding [45]. Ravi added that the critical competencies include system-level judgment of AI outputs, interdisciplinary fluency across engineering, risk and regulation, a continuous-learning mindset, and deep contextual awareness of India’s linguistic diversity [57-71].


The UK panelist Sue noted widespread anxiety about job loss but described a national AI Skills Partnership that aims to upskill over a million people and turn anxiety into agency through reskilling and conversion courses [96-104]. She also emphasized that effective upskilling must combine technical, governance and “human” skills, and that schools and curricula need to be aligned with AI adoption [90-95][112-115].


Both Indian and UK speakers agreed that there is no single silver-bullet solution; instead, an iterative, collaborative ecosystem of government, industry and academia is required [109-112][354-358]. Role redesign is already underway, with software squads shrinking from seven to ten members down to three and delivery cycles accelerating, but adoption remains slow because AI still lacks contextual understanding [238-242][246-252].


Ravi described Mastercard’s governance model-chief AI officer, privacy-by-design, and horizontal AI teams-as a template for embedding interdisciplinary expertise early in product design [191-201]. The panel warned of concentration risk if only elite institutions control data and compute, urging broader access, tier-2/3 university involvement, and inclusive AI education [324-328][332-339][365-373].


Participants called for a new, interoperable skills taxonomy and lifelong-learning infrastructure to keep pace with rapid AI change [354-359][321-323]. The discussion concluded that while AI will transform many jobs, the priority is to equip the workforce with adaptable, cross-disciplinary skills and inclusive policies to harness the technology responsibly [351].


Keypoints

Major discussion points


AI is reshaping the IT services landscape, with software engineering now the most disrupted function.


Srikrishna notes that the “direction of travel… there is disruption” and that the impact has shifted from testing to software engineering as the biggest area of change [23-30][31-37]. He also stresses that “opportunities for a young technically-savvy person is enormous” if they acquire the right new skills [45-46].


New AI-driven roles demand a blend of technical, judgmental and interdisciplinary capabilities.


Ravi outlines four core skill clusters: system-level judgment, interdisciplinary fluency, a continuous-learning mindset, and deep contextual awareness [57-71]. He later reinforces that governing AI at scale “requires interdisciplinary skill” and early integration of AI governance into product design [190-202].


Role redesign and robust AI governance are essential to realise value while maintaining oversight.


Srikrishna describes how a typical software squad is being reduced from 7-10 people to as few as three (product owner, developer, tester) and that “role redesign” is a prerequisite for AI value [238-246]. Ravi details Mastercard’s AI governance framework-including a chief AI officer, privacy-by-design, and cross-functional AI governance teams-to embed oversight from the start [191-202]. Sue adds that even with AI-generated code, “someone needs to check the code” and that governance and assurance roles will evolve [169-177].


National-level strategies (especially the UK’s) focus on coordinated upskilling, infrastructure and adoption pathways.


Sue explains the UK’s AI Skills Partnership, the goal to train over one million people, and the push to turn “anxiety into agency” through reskilling, conversion courses and industry-government collaboration [90-98][101-108][109-124][125-131]. She later highlights investments in data and compute infrastructure, AI growth zones, and a shift from building foundations to accelerating adoption [313-321].


Inclusion, equity and risk mitigation (e.g., concentration risk) are seen as critical safeguards.


Ravi warns of “concentration risk” if only a few institutions control data, compute and talent, urging broader access to tools, curricula for tier-2/3 institutions and safeguards against over-automation [326-340]. Sue calls for “interoperability of skills credentials” and a national taxonomy to ensure mobility and recognition of learning [354-359]. Srikrishna caps the discussion by stating that “inclusiveness has to be by design” and that academia should make AI resources freely available [365-373].


Overall purpose / goal of the discussion


The panel was convened to explore how AI is transforming the workforce, diagnose the resulting disruptions, identify the new skill sets and governance structures required, and share how governments, industry and academia can coordinate to up-skill workers, mitigate anxiety, and ensure inclusive, sustainable adoption of AI technologies.


Overall tone and its evolution


– The conversation began informative and exploratory, with panelists mapping the scope of AI disruption.


– As the dialogue progressed, the tone shifted to solution-oriented and collaborative, highlighting concrete skill frameworks, governance models, and policy initiatives.


– Throughout, there was a balanced mix of optimism (about opportunities for young talent and economic growth) and caution (about anxiety, job displacement, concentration risk), maintaining a professional and forward-looking atmosphere until the closing remarks.


Speakers

Sangeeta Gupta – Panel moderator (moderated the AI and workforce transformation discussion) [S1]


Srikrishna Ramakarthikeyan – Senior executive in the IT services sector (provides perspective on software engineering disruption) [S2]


Ravi Aurora – Mastercard executive focusing on AI governance and responsible AI (discusses Mastercard’s AI governance framework and chief AI & data governance roles) [S3]


Speaker – Generic placeholder; no specific individual details provided in the transcript.


Sue Daley OBE – Director, Tech and Innovation, Tech UK (recognised with an OBE) [S7]


Additional speakers:


President, Global Public Policy and Government Affairs, Mastercard – Leads Mastercard’s public policy and government affairs globally (title listed in the opening speaker line) [S4]


Vishnu R. Dusar – Co-Founder and Managing Director, Nucleus Software (title listed in the opening speaker line) [S4]


Sue Daly – Director, Tech and Innovation, Tech UK (title listed in the opening speaker line) [S4]


Full session report: Comprehensive analysis and detailed insights

The panel opened with moderator Sangeeta Gupta welcoming the participants – Vishnu R. Dusar (Nucleus Software), Srikrishna Ramakarthikeyan (Indian IT services), Ravi Aurora (Mastercard) and Sue Daley OBE (Tech UK) – and explicitly laid out a three-segment structure for the discussion: (1) the nature of AI-driven disruption, (2) emerging skill requirements, and (3) policy and education responses [1-8][13-16].


Nature of disruption – Srikrishna argued that AI capability is expanding rapidly and is reshaping software engineering more than testing or infrastructure [23-30][31-37]. He noted that coding costs are approaching zero, turning code into a low-cost commodity that can address problems previously considered too complex or expensive [39-42][160-168]. Adoption, however, will be gradual, with an estimated 1-2 % annual impact on employment, potentially rising to 2-3 % as organisations catch up with the technology [240-244]. He emphasized that AI’s value lies in enabling solutions that were impossible before, creating huge opportunities for technically-savvy graduates who acquire new problem-solving capabilities [45-46]. At the same time, he warned that a generation raised on AI tools may lack traditional coding fundamentals and will “think differently”, relying on “vibe-coding” approaches rather than deep algorithmic understanding [141-148].


Emerging skill taxonomy – Ravi outlined four key capability areas needed in regulated, high-stakes environments: (i) system-level judgement to detect model drift and assess outputs, (ii) interdisciplinary fluency across engineering, risk, regulation and user behaviour, (iii) a continuous-learning mindset to keep pace with evolving models, and (iv) deep contextual awareness of India’s multilingual and informal-sector realities [52-61]. He stressed that these capabilities must be embedded early in product design [84-90].


Role redesign and governance – Srikrishna described how a typical agile squad is shrinking from seven-ten members to as few as three (product owner, developer, tester), accelerating delivery cycles from two weeks to two days, and argued that without such redesign the AI value proposition cannot be realised [238-246]. Ravi illustrated Mastercard’s governance model: a chief AI and data-governance officer, a privacy-by-design approach, and a horizontal AI governance team that spans data science, product, legal, compliance and engineering [191-201]. Sue added that even when AI generates code, human verification remains essential, shifting many roles toward assurance and governance [169-177].


Education, upskilling and infrastructure – Sue detailed the UK AI Skills Partnership, which aims to train more than one million people, offers one-year conversion courses for non-AI graduates, and seeks to turn worker anxiety into agency [96-104][105-108]. TechUK’s TechSkills programme provides a “Gold Accreditation” degree recognised by employers, signalling a trusted pathway for graduates [112-118]. The UK is also investing in a national data library and establishing AI growth zones to supply compute resources for innovators [313-321]. She called for a national taxonomy of skills and interoperable credentials so that learning is portable across sectors [354-359].


Policy coordination – Sangeeta contrasted India’s fragmented, state-wise AI initiatives with the UK’s whole-of-government, coordinated approach, asking whether the UK model could inform India’s strategy [108-109]. Sue confirmed that the UK adopts an iterative, flexible policy framework rather than a single “silver-bullet” solution [109-112].


Risks and opportunities – Ravi warned of a concentration risk if data, compute and talent remain confined to a few institutions, which could marginalise tier-2/-3 universities and smaller firms, urging deliberate inclusion of these players [120-128]. Both speakers highlighted widespread worker anxiety and argued that structured reskilling, lifelong-learning pathways and human-in-the-loop governance can convert anxiety into agency [98-104][345-347]. Srikrishna and Sue stressed that inclusiveness must be “by design”, calling for free AI resources and open curricula to democratise access [365-373].


Closing remarks – The panel reiterated that AI will transform many jobs, but the decisive factor will be how quickly education, industry and government co-create inclusive, interdisciplinary pathways for the emerging talent pool. Across the three-segment discussion, there was strong consensus that interdisciplinary upskilling, early governance integration and coordinated policy are essential to harness AI’s potential while mitigating concentration, over-automation and exclusion risks [354-359][365-373].


Session transcript: Complete transcript of the session
Speaker

President, Global Public Policy and Government Affairs Mastercard, Vishnu R. Dusar, Co -Founder and MD, Nucleus Software, Sue Daly, Director, Tech and Innovation, Tech UK.

Sangeeta Gupta

Thank you so much, Pragya, and a very good morning to my wonderful panelists. We have a few audience in the room, but we have a lot more online. So I’m looking forward to, you know, yeah, we can get out. You are here, Ravi, next to me. And Vishnu is just on his way. He should be here shortly. I think the theme of our panel is AI and workforce transformation. And clearly, from a, you know, India perspective, the AI is obviously creating a number of opportunities. It’s also creating a lot of anxiety amongst the youth. And I think it’s important. It’s important to decode what does AI really mean and how do we navigate these shifts that are ahead of us.

So in terms of structuring the panel, I thought we’ll try and break it into. three different segments. The first segment is clearly about what is the disruption and how are we designing for it? So try and get perspectives from each of the panelists on how are you seeing this disruption? Are we shaping this disruption or is this disruption really shaping us? So Kish, if I can start with you maybe, right? From one of the sectors that’s most hotly debated is IT services and you’re a leading company in that space. How are you seeing this change for your employees? Do you see software coding now only being done through AI tools? So what is the job of the coder if you look at it?

But how real is this disruption and how are you staying ahead of the shifts that are there?

Srikrishna Ramakarthikeyan

So I think the direction of travel is indisputable. That there is disruption. There’s a lot of there’s an issue of technology capability and there’s an issue of adoption. And there’s always that technology capability leads adoption. Adoption is going to impact, is going to determine workforce displacement or disruption. But the capability, there’s no doubt that this capability that exists today, actually this capability that existed three months ago, six months ago, where there’s quite a large chunk of work that is done by the industry that could potentially be displaced or improved or in some way impacted by AI. What is it that is getting impacted is changing very rapidly. So you would ask me at the beginning of 24, right?

What services will get impacted? What services will get most impact? Out of say testing, actually I’ll put BP of India. I am saying in tech I would put testing first and I would have put software engineering last. Today I will flip that. I will say software engineering is the most. So the direction of travel I think…

Sangeeta Gupta

So you really think software engineering is bigger disruption than testing and infra management or other stuff, right?

Srikrishna Ramakarthikeyan

That is true. So I think whatever disruption we saw I thought would be there in infra. I think it is there but it is a plateau. I am not seeing leaps and bounds of change. What we saw as a potential change like a year ago and now is not so different. I think the massive difference is in software engineering.

Sangeeta Gupta

So you know if you are a young software professional… How do you see… What does this mean for me as that young fresher out of college right now?

Srikrishna Ramakarthikeyan

I’ll say opportunities for a young technically savvy person is enormous now there are things they need to think of and do differently for that opportunity to become real for them because the real value of AI is not in reducing headcount in blah functions whatever it is where it’s in BPO or some functional work that’s not the real value the real value is in being able to solve problems that you could not solve before and I think you need to arm yourself with a completely different set of skills to make that real but if you do that I think the opportunities are enormous for a young age

Sangeeta Gupta

Thanks Kish, I’ll come back to you Ravi if I can come to you, MasterCard is very strong obviously in financial services but you have a very strong data and technology play how are you seeing this workforce disruption and for a company like yours which has a very large GCC in India what are the different kind of skill sets that you’re thinking about today

Ravi Aurora

Sure, thank you very much and thanks to NASCOM great to be here on the panel with Sue and Sri Krishna so I think like I mean a lot of change right over the last two decades when I look at our industry I guess if you look at it like all the professionals in privacy, cyber security data protection, technology risk they’ve all been enablers of digital transformation right? They have, I mean, create what we enjoy today in terms of digital empowerment and the ability, let’s say, talking from a payment lens, you know, very seamless in terms of wherever in the world you are, right? All that is riding on trust, right? And there’s a lot that goes in, you know, to build that trust, right?

So now we are seeing as artificial intelligence, AI is being embedded into a kind of decision -making, public infrastructure, service delivery, right, and governance. So it’s no longer kind of a downstream compliance function as such. So I think that’s why we need, you know, the shift is in kind of the fintech disruption that came about before. I think what we are… We are seeing a bigger shift that AI is bringing in terms of the kind of skill sets, you know, that are required. So, you know, to your question on what kind of skills are required, right? I think the skills I would say is that the, what do you call, the capability for system level judgment is needed.

So what we mean by that is that are you able to, you know, take what outputs are coming? You know, from AI. And you need to have the capability to understand is the model drifting, you know, in high stakes and regulated industry like ours. It becomes essential because decisions scale very instantly and as do the systemic errors, right? And the impact of those errors if left unchecked. So I think that it’s important to have that system. level judgment. Then, interdisciplinary fluency is important because the AI challenges are not just technical, right? They are at the intersection of engineering, of regulation, of risk, you know, user behavior. So, if we have professionals who are across those domains, right, that’s important and to have that interdisciplinary approach rather than working in silos as such.

Then comes the need for a continuous learning mindset, because AI systems are evolving with data, and the workforce needs to evolve with them: the ability to learn from live environments, adapt models and refine decision-making. So: system-level judgment, interdisciplinary fluency, a continuous learning mindset, and last but not least, deep contextual awareness. In a country like ours, India, with multiple languages, dialects and informal systems, if an AI agent is interacting with a user, the question is: does it understand the context, the intent and the real-life realities, or is it just the language? Context is shaped by how the models are trained, which means engineers have to consciously design for it. So that contextual ability and awareness is very important.

Sangeeta Gupta

So the typical engineer, the coder as we knew it, obviously has to build a very differentiated set of skills, is what you're really talking about: interdisciplinary learning, understanding context, the ability to continuously learn. That in itself is becoming a skill. So clearly there's a lot of change needed at the college and school level, even in how you're learning, so that you are ready for this rapidly changing world. So if I can come to you: you represent techUK here; how are you seeing the AI disruption in the UK workforce? Is there anxiety?

Are there opportunities that you are seeing? And how are you as an organisation, and of course the UK government, supporting this transition?

Sue Daley OBE

Well, thank you. That's a question for a panel all in itself. It's a real pleasure to be here; thank you so much for the invitation to be part of the summit. Just to say to everybody, you've done an amazing job. And thank you for this really important panel discussion. It is absolutely a discussion that we're having in the UK. And what I've found really useful this week, if I can be slightly selfish for a moment, is to listen to the conversations that you guys are having here, and the other global people here at the summit, and to kind of compare notes.

Are we having the same conversations? Are we facing the same kinds of issues? What I've just heard from my fellow panellists are some of the conversations that are happening in the UK. Yes, there is change; yes, there is disruption. And to your point, what we're seeing is a lot of roles, not just in our industry and sector but across industries and sectors, moving away from admin tasks and routine cognitive tasks, which are being increasingly automated. That's freeing people up to do more problem-solving, to look at client advisory and governance, and to shift those skills towards AI governance. But also, I would say, client-facing work as well, which goes to your point around skills.

I'll come back to your broader question, but yes, it's technical, yes, it's governance, but it's also those people skills, those human skills, which sounds very weird to say, but you know what I mean: that ability to interact, those more social skills. Are we teaching those as well as the technical, the legal, the governance, the software skills? Are we also teaching people, and the young people coming through, how to interact with people if they're more client-facing? So absolutely, the disruption, we're feeling it in the UK, we're having that discussion; definitely the industry is questioning, what will my role be, where will I sit?

The UK government is focusing very squarely on this. As part of its AI Opportunities Action Plan, the government has created an AI skills partnership, bringing together the government bodies looking at how we upskill, how we retrain, how we get society ready for the next wave of AI that's coming, not just the one we have now but the one coming down the line, together with companies and bodies such as techUK and others, to look at how we do this in collaboration.

So: how do we reach the wider population, and I'm not just thinking of our industry here but wider society, with the training courses, the upskilling courses, the opportunities to learn and gain skills to thrive in an AI world? But also, how do we train our own industry and sector for the shift that is happening? I think that task force is looking to train over one million people in AI so that we can help the wider population be ready for working in this era. I think there is anxiety. There is concern, with some workers understandably worrying about displacement: if they're highly exposed to automation, what does that mean?

How do they shift? How do they move? But what we are looking at is, and this is a word I've heard a lot this week, how do you turn anxiety into agency? How do we encourage people to take the lead, to take what they've learned and, as you said, keep learning and upskilling continuously, because that is what you will need to thrive in this world? What we're looking at to help people do that is restructured training and reskilling programmes, and pathways from mid-career into new careers. One very interesting initiative the UK government introduced was a one-year conversion course for people coming out of university without an AI degree, so they can then work in the AI industry.

So I think there are lots of different initiatives, which perhaps we'll go into a little bit more, that the UK is doing which could be applicable here, and vice versa; we want to learn from how you're addressing this. There is anxiety, but the question is how you turn that into opportunity and agency.

Sangeeta Gupta

And you know, one of the issues in India that we keep raising with the government is that we have a very disaggregated focus right now. Within India there are multiple governments, multiple state governments, organisations like NASSCOM; we're all trying to do some part of the pie, but there is no, if I can use that word, whole-of-government or whole-of-country approach saying: if this is such a big disruption, this is how we will go about dealing with it. Do you see an integrated approach in the UK, where obviously every actor then has their own role to play?

Sue Daley OBE

I think it's coming. First of all, I don't think there's a silver bullet, one pure answer, because, as you said, things are moving rapidly: the moment you put in a task force or initiative, it may very quickly need to shift and change. So in all of this, and with AI generally, having an iterative, flexible approach that can adapt as the technology evolves and new developments emerge is really, really key. The AI skills partnership, which we've signed up to with the UK government, has become a bit of a cornerstone, a nucleus, of how we retrain and upskill the general population.

But then there's also the conversation about our schools and education curriculum: how is what young people are learning in schools joined up to the AI revolution? While there's some thinking there, I think it could be more joined up. And then, yes, how are we training the industry? How are we getting the freshers leaving universities, as you said, with the skills that we need as an industry? Part of techUK is an organisation called TechSkills; go and check them out, not right now but maybe afterwards. TechSkills works directly with employers, technology companies and universities, so we can be the bridge between the two, making sure industry employers can provide input into the courses universities are teaching. So when students come out of university, they have what's called a TechSkills Gold Accreditation Degree, which employers recognise and can say: yes, you've got what I need, come and work for me.

So there's no single answer to this; it's a number of initiatives that need to work together. At techUK, as elsewhere, we're trying really, really hard to join the dots. TechSkills addresses what employers need from universities and how we get universities and employers working more closely together. Then: what role can government play, what can government do that industry can't, and vice versa? It's really got to be a partnership and a collaboration, but there's no one single initiative that, in my view, will fix or solve this.

Sangeeta Gupta

I think that's probably a great way to think about it: there are just so many changes that there's no single silver bullet, like you said. You've really got to figure out how you tie the different threads together, but let maybe a thousand flowers bloom, because that's the nature of what we're dealing with, if you can bring it together and say, here's our coordinated approach. I definitely think in the UK we could join up these initiatives more, and maybe India, with your scale, can do that; you've definitely brought the world together at this summit, so I've no doubt you can. Wonderful. So, Kish, if I can come back to you, again from an IT services perspective: we've always been one of the largest employers of engineering talent in this country. Now, with the new skills that Ravi talked about, do you see the focus shifting largely to the more elite, top-tier institutions? A large volume of students who studied in tier 2 and tier 3 colleges across the country have had phenomenal careers in our industry. Are we closing out opportunities for them?

Srikrishna Ramakarthikeyan

I want to make a point on a previous question and then I'll address this. I agree there's no silver bullet. However, I'll say that I live in the US, and the conversation I hear there about AI policy is: should we regulate or not regulate? Who should regulate, the states or the central government? I'm not hearing what I heard here, which is a big focus on inclusiveness. While it may not have all of the answers, I think it's still a very material difference in the approach of how the government here is thinking about it.

And actually, I heard that from the UK minister here before, and I heard it from President Macron yesterday in the plenary session. So I think there's a big difference in some countries relative to at least what I'm hearing in the US: much more focus on how to make it work for everyone, how to make it inclusive. I think that's a huge difference that will lead to a very material difference in outcomes over time. Now, coming back to your question. First, do I know all the answers? No. But here are some pieces that I think are true.

First, I've seen that young AI-native talent is much better at many things than somebody even in their 30s whom you're trying to retrain. It's much like, do you use Instagram? I don't, actually, but there are kids who have grown up with it. It's the same difference as the digital native: I think you're going to see an AI-native generation. And we find it, actually. Last year we took a set of people we hire from the absolute top engineering schools, like the IITs, and had them train our management team on vibe coding in May, because vibe coding back then was brand new. And guess who were the best at it in the company? The people who had just come out of college. So we had them train us. Whenever we think of the pyramid, we have to bear in mind that sometimes the best talent is the youngest that is coming in.

The second thing I think is true: ultimately, the new opportunities created by AI will be far greater than the number of jobs it could directly reduce. Now, there's going to be a transition period, and I'm not sure exactly how it will play out, but I'm very confident that ultimately AI is going to create so many more things that will need applications and tech built for them. And I think the third is also true: for kids, the problem to solve is not tech, is not coding.

It's not creating data structures or whatever it is that kids were trying to solve; that's a problem solved by the tech, by AI. So now you've got to think about what problems you want to solve, which is something else, and that's where the big opportunity is.

Sangeeta Gupta

So, Kish, I'm going to hold you to that, where you said AI will create more jobs than it changes; we'll see how that plays out. But you know, in one of the conversations I was having with another IT services company, they said: this AI-native talent is great, but that talent will never have learned to work without AI. Does that mean some of your foundational and core skills will not be as solid as they were in the past, because this is the world you've grown up with and your dependence on these tools will be so high? Does it lead to a lack of some foundational skills?

Srikrishna Ramakarthikeyan

Listen, there was a time when coding you had to do in C++, right? The whole evolution of coding has been about abstracting away what you need to code for. You wouldn't have had IDEs, I don't know how many years ago, but who codes without an IDE now? Nobody, right?

And that's been true for a decade. I think the same question will become: who codes now? And I don't think anybody will code. That'll be a solved problem. So, is it going to be a disadvantage? Far from it. I think it's going to become a significant advantage. The cost of coding, the cost of code, is going to become zero. What that means is you can solve any number of problems with code that you couldn't solve for before, because it was too complex or too expensive to do so. So, absolutely not; I think it's going to be a big advantage.

Sue Daley OBE

Yeah, really fascinating. Just on the coding point, you're absolutely right. And I'm thinking as a woman in tech as well: we had a big focus in the UK on getting girls into coding. Brilliant. But actually, now, why? There's an opportunity there, but there's also a risk. So coding, AI for coding, great; but we will need somebody to check the code. So again, it's that shifting and moving of skills. And then my brain went to: okay, the people that were doing the code, could we reskill them into checking the code and going more into governance? But then my brain goes to: hang on, AI might be able to check the code quicker than a human can.

But then you get to the point where somebody needs to check that the AI has checked the code correctly. So if you're baking governance and assurance into AI, humans will need to be in the loop; so how can people in the coding world be shifted in their role, shifted to help more on the governance side? I did have another point, but my jet-lag brain means I've forgotten it, so I'll give way.
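The check-the-checker loop described here can be sketched as a review pipeline: automated gates screen AI-generated code first, and a human reviewer is pulled in only when a gate flags something. This is a minimal illustration; the gate names and checks are hypothetical stand-ins for real tooling such as test runs, linters and static analysis:

```python
# Human-in-the-loop review gate for AI-generated code (toy sketch).
from dataclasses import dataclass, field

@dataclass
class ReviewGate:
    name: str
    check: callable  # takes code text, returns list of findings (empty = pass)

@dataclass
class ReviewResult:
    findings: dict = field(default_factory=dict)
    needs_human: bool = False

def review(code: str, gates: list[ReviewGate]) -> ReviewResult:
    """Run every gate; any finding escalates to a human reviewer."""
    result = ReviewResult()
    for gate in gates:
        issues = gate.check(code)
        if issues:
            result.findings[gate.name] = issues
            result.needs_human = True  # humans stay in the loop on any flag
    return result

# toy gates: real pipelines would run tests, linters, SAST tools, etc.
gates = [
    ReviewGate("secrets", lambda c: ["hardcoded key"] if "API_KEY=" in c else []),
    ReviewGate("todo", lambda c: ["unfinished work"] if "TODO" in c else []),
]

clean = review("def add(a, b):\n    return a + b\n", gates)
flagged = review("API_KEY='abc'  # TODO rotate\n", gates)
print(clean.needs_human, flagged.needs_human)  # False True
```

The design choice matches the point made on the panel: automation narrows what humans review rather than removing them, so the reskilling target is writing and tuning the gates, not reading every line.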

Sangeeta Gupta

But if you’ve never coded in your life how do you know what to check for?

Sue Daley OBE

Oh, I remembered my point, which is related, in a way, to what the gentleman from Mastercard was saying about context. Context is really, really key, and something on my mind is that people who have worked in organisations over the last couple of years have done the junior roles: they've learnt the company, the sector, the industry; they've done the grunt work to learn the context and what's important. What concerns me slightly is, for people coming in and using AI, when do we give them time to learn the company, to learn the context? Are they getting that exposure? When I first started in a company, I started in the basement and worked my way up, but I knew my sector, my industry, that background and that context; I knew what I was checking and why. So if automation takes those junior roles away, how are people getting exposed to the context of what a fintech industry needs and looks like, if the opportunities that came through more junior roles are no longer there? I think there are huge opportunities here, but there's also some rethinking we need to do as an industry and sector: are we skilling people with the right things for what the industry needs going forward?

Sangeeta Gupta

Thank you. So, Ravi, if you want to take both questions: we have a million-plus engineers graduating every year; what are the jobs for them? You've talked about the skills they need, and today tech jobs are not just in the tech industry, they are in every sector; what do you see as the opportunity for them? And secondly, this whole question of what humans, or the engineer, will do if AI does all the coding.

Ravi Aurora

flows, how operational controls shape risk over time, and when to intervene. Then I think we have to make governance interdisciplinary and influential, which requires fluency across law, technology, ethics and operations. Like I mentioned before, privacy and AI governance cannot operate in silos. So future readiness requires a big structural change in procurement design and deployment. We also have to close the uneven digital capability across institutions; we talked about that. If central agencies and large enterprises can attract talent while smaller ones cannot, that will create governance gaps, especially where AI is expanding the most.

Those are risks we need the right solutions and the right thought process for, because it's about going beyond elite specialization towards broad-based AI and digital literacy. At Mastercard, we have spent several of the last years operationalizing responsible AI, not just as a policy exercise but as a workforce and capability challenge. We have a formally established AI governance framework, a chief AI and data governance officer, a chief privacy officer, and a privacy-by-design approach in everything.

We ensure that AI risks are addressed before systems are built and deployed, not afterwards. And we have an AI governance team that works horizontally across data science, product, legal, compliance and engineering, because we know how important that integration layer is. The product and engineering leaders, you could say, are the first-line stewards of AI risk; they are not just recipients of compliance decisions, they are stewards up front. But that happens when you get that integration right up front. So from a Mastercard perspective, what we have learned is that governing AI at scale is fundamentally a workforce challenge that requires interdisciplinary skill.

Early integration into product design is required, and we need governance professionals who can manage risk and not just enforce rules. So privacy by design and security by design are core principles, but how you bring those things together in this evolving landscape is what matters.

Sangeeta Gupta

And I think that's a fascinating part of this conversation: the whole focus on ethics, principles, trust, security, privacy by design. And as you think, Ravi, about going back to this large student workforce that we are building for tomorrow, how do we get them to imbibe many of these principles? Obviously, when they come into your organizations, there are structured programs to drive this thinking. But if we had to take this back to the whole college and university ecosystem, any recommendations on how to drive that?

Ravi Aurora

Absolutely. So clearly, from a corporation's perspective: when I opened the news this morning, I took a picture of it, the very first thing on the TV, the headlines, were about AI skills and the skills gap, with a lot of discussion based on what's happening this week as part of this summit. Clearly, business, academia and government all have a role to play in navigating this workforce transition. For corporations, it's not enough to just say you're offering internships to students.

I personally feel we should ask: how are our chief learning officers, or engineering leaders who are at the front line, working with people in academia to actually help think through and design courses based on real-world examples and situations? Certainly, when people come into internships, it helps them get that exposure and take it back into their learning environment. But the whole facet of curriculum design is changing: it should not be restricted only to computer science majors. AI, in its different forms, is required across a broad set of disciplines.

So it's not something we can leave only to computer science majors per se. In terms of priority, embedding AI governance and interdisciplinary collaboration into the curriculum is one of the very first layers we have to begin with, so that the people coming in, the engineers, are trained to think across the full life cycle of an AI system and not in a siloed way. That is what we talked about: bringing engineers, product, risk and policy together. Another priority, and we have to focus on this, is role redesign and not just reskilling.

Because AI is transforming tasks within jobs rather than eliminating roles entirely, we have to see how we redesign roles rather than only focusing on reskilling. And we have to build an inclusive and distributed talent pipeline. Here I go back to CII and other organizations we have worked with, where you go into the field and work, let us say, with MSMEs, with the last mile, understanding their challenges and pain points and bringing that into our product design and the output that's required.

Because that provides the context. So the ability to take our talent pipeline and expose them to the real world, helping them contextualize, is very, very important.

Sangeeta Gupta

Thank you. Kish, if I can follow up with you: Ravi spoke about two themes. One is role redesign; how are you seeing role redesign happen in a technology services context? And secondly, we hear so much about the changing role of the engineer, with "forward-deployed engineer" becoming the new buzzword in town. How are you seeing this happen in your organization?

Srikrishna Ramakarthikeyan

Thank you. On role redesign specifically, that is absolutely true. Going back to software engineering: a typical squad that builds software may have had seven, eight, ten people, some developers, some testers, a scrum master, the typical roles. In the extreme case we are seeing that go down to three people: one product owner, one developer, one tester. That's a substantial redesign of the role, and the time it takes is coming down from two weeks to two days. So yes, you won't see real value from AI unless you are redesigning the role. Now, we have been speaking a lot about capabilities; I think we should spend enough time on adoption.

And I think there is a pretty big gap there. Actually, I think that gap is good for the workforce, because no matter what the capabilities are, by the time they become real and adopted at scale into the workforce and our enterprise customers, it's several years. In aggregate, I think the impact on work, and hence on the workforce, is maybe in the low single-digit percentages per year at most: even 1 to 2 percent right now, maybe expanding to 2 to 3 next year. This is because of the speed of adoption and the multiple constraints on adoption. Because I don't think AI knows context, right? "Mad" could mean what the word mad means for one enterprise; it could mean the old word for Chennai in another enterprise.

So there are many reasons why adoption, I think, is going to be slow. And frankly, one of the reasons is role redesign, because it is not as simple as getting a coding tool or whatever data tool; it is an organizational redesign to make that happen.

Sangeeta Gupta

And are you engaged in enabling all your employees to use these tools, given some of the issues around governance risks that are being talked about?

Srikrishna Ramakarthikeyan

Yeah, 100%. I think it is a little bit silly to tell employees that they cannot use these tools. We are already in the second generation of retraining our employees on AI. The first generation was on Gen AI, and even as of January last year, when the whole concept of agentic AI came in, whatever you had learned up to that point became useless. So we are doing that second generation of training. Now, what we found is that earlier we used to mandate training; we wanted everybody to learn and were pushing employees to learn. Then suddenly we stopped and said: hey, it's up to you. The truth is, if you don't learn, you are going to be redundant.

It's not for us that you learn; it's for you. And suddenly we're finding that the number of people actually getting trained is more, not less, once you stop mandating it. So, yeah: are there privacy risks with Facebook? Do people use it? The answer to both is yes. I think you're just going to find a generation of people who think about these resources very differently.

Sangeeta Gupta

So, you know, yesterday at the Impact Summit the CEO of Anthropic spoke about, I think it was, 100x geniuses in a data center: that's the kind of intelligence at scale that will exist as these technologies really mature to deployment at scale. How do you see the role of humans shifting, and what is this human-AI collaboration that we are all talking about?

Srikrishna Ramakarthikeyan

See, the thing is this, and I tell my customers this: stop chasing the shiniest object. There is always going to be an advancement in technology every month, every two months, every three months; something better will come. And in the quest to keep chasing that, what you're actually doing is not realizing value from anything. For me, most enterprises can get significant value if they fully and systematically adopt capabilities that existed a year ago, certainly capabilities that existed six months ago. So what is the relevance of a data center full of geniuses for most enterprises? I think it's zero. What problems can it solve that enterprises… I think enterprise problems are not to do with IQ; they are far more complex than a linear IQ issue.

So yes, it may be true that AI can do a thousand things that humans can't, but it's not relevant. The real focus is not capability but how you help enterprises adopt, and that is the real answer to your earlier question: what do people do if machines do the coding? Actually, the problem you are trying to solve is never writing code; you are always solving for some other problem. That's the reskilling that engineers and young talent need to go through. For me now, AI knowledge is like English: it's foundational, it's fundamental. I need to be in the business of solving for something else.

And there, the point you have made several times about engineering being interdisciplinary is crucial. How many times do you go to a doctor and get frustrated: listen, I don't want an eye doctor, I don't want a nose doctor, I actually want a doctor. And that's true in engineering. Think about robotics: you don't want a mechanical engineer, a software engineer, an AI engineer or an electrical engineer; you want an engineer. And that is where our talent needs to go. Now, frankly, academia has a big job to do to help them get there, because our courses are not designed like this right now.

They're designed as electrical and whatever else. But for young talent who reorient themselves to say, hey, AI is not the skill, AI is foundational, and I'm going to use it to solve for something more meaningful: I think we'll be just fine on workforce.

Sangeeta Gupta

Yeah. So, if I can come to you: I think you've heard a lot about how learning has to change, whether it's critical thinking, problem solving, experiential, use-case based. But at the same time you need access to data, access to compute, access to research. How is the UK thinking about this, and are there examples that India can learn from?

Sue Daley OBE

Yeah, absolutely. So when we think about realising the economic and social opportunities of AI, it isn’t just about skills. Skills is part of it, but it’s about, to use that word again, getting the foundations right. In the UK, particularly last year, a lot of initiatives and a lot of investment went into getting the infrastructure right. That starts with data infrastructure: the UK government has announced a national data library initiative to try to bring our data sets together. I was about to say we have huge data sets, but you have massive data sets here. Still, for the data sets we do have, how are we using them, how are we bringing them together, not just for public services and public-sector use but potentially for industry use as well? So data infrastructure, absolutely. A lot of investment has also gone into compute infrastructure: the creation of AI growth zones, dedicated areas in the UK where perhaps we don’t have the compute resource right now and are building it, and an AI research resource, dedicated compute power and chips to allow AI researchers to do the work they’re doing at that fundamental research level. So there has been a lot of focus there. And if I reflect on 2025 in the UK, yes, the conversation was a lot about how we get the foundations and the infrastructure right. Where I want the conversation to shift now is to adoption. There is already adoption happening in the UK, whether it’s financial services, our healthcare system, transport, logistics. But, boy, there’s so much potential. I completely
agree, and at Tech UK we’re really looking at how we accelerate AI adoption at pace and speed, in a way that we don’t get it wrong from a governance, ethics, responsibility and regulation point of view, and that we get it right for people, while still moving quickly enough to realise the opportunity. That’s really something we’ll be advocating for more this year: what can government do to help, and what can we as industry, particularly the tech industry, do to help other sectors and industries understand how they can adopt as well? That’s the core mission of my work at Tech UK, and I’m really excited about the future. Skills comes into it, of course, but so does public trust and confidence. None of what we’re talking about here is going to fly if people don’t trust and have confidence in using AI, or in having AI used about them. So there are lots of initiatives happening: compute infrastructure, absolutely; access to data; making sure that researchers have what they need, industry has what it needs, SMEs have what they need. Skills is an integral part of that; it’s all linked, it’s all connected. But I completely agree that adoption is really the key. I was at a reception last night at the High Commission, and Rishi Sunak, the previous Prime Minister, was talking about which country will win the AI race. We’re talking about sovereignty; the previous panel said sovereignty is key for India, and it’s key for a lot of countries, and we’re looking at what data, tech and AI sovereignty means for the UK. But Rishi Sunak’s point was that the countries that will win the AI race are not the countries looking at sovereignty or the stack or infrastructure; they are the countries that can demonstrate adoption, that can win the race in adoption and
integrate AI across all their sectors, across their industry and their economy. And definitely in the UK we’re very much tying digital and AI adoption, deployment and diffusion into society and into our economy as a key driver of growth and productivity as well. So lots going on, but with that central core theme of how we get this right as well.

Sangeeta Gupta

I fully agree. I think getting deployment right is really the opportunity, or the challenge, for economies that are not competing on the LLMs, right? That’s what India has to get right, because, to Keech’s point, the enterprise doesn’t necessarily need the shiniest toy. AI is needed to solve India’s deep healthcare challenges; it’s needed to solve some of our agriculture-related issues. And I think that’s what the whole inclusion focus, what AI can do for you, really means.

Sue Daley OBE

I think sometimes we have to take a step back and just realize how transformational, how exciting this technology is. I mean, many of us have been talking about this for a number of years. But where we are in terms of compute infrastructure and compute power that we never had before, in terms of the digital data and the data sets that we’ve never really had before, I don’t know, this does feel like a step change. This does feel like a different moment in time. And it’s how do we grasp that moment in time, which I think is really important. How do we help young people and everybody working in the industry to understand what grasping this opportunity means for them as well?

Sangeeta Gupta

No, I think we’re reaching the end of our session, but I just want to get quick closing comments from all of you, right? Ravi, what would be your top three priorities for business, academia and government to successfully navigate this AI workforce transition? And what are some risks they should plan for?

Ravi Aurora

Great question. On priorities, I think I already mentioned this whole interdisciplinary collaboration and the whole aspect around redesign and so forth, right? On risks, I’d add this: we’ve been talking about how AI has democratized access, but there is also a concentration risk we have to be aware of. When a small set of institutions, companies or talent pools pull ahead disproportionately because they have access to better data, compute and research ecosystems, then we have to be very deliberate in how we design our systems.

This is where India has a position of strength. You talked about the million-plus engineers; we are coming from a position of strength because India has contributed to the global technology revolution, and the growth of our global capability centers reflects the depth of the talent pool that exists. As we go forward, we have to get that design aspect right: foundational digital and AI literacy in the school curriculum, equitable access to tools and infrastructure, hands-on exposure across geographies.

And then we also have to go beyond top-tier institutions to tier-two and tier-three institutions; otherwise we come back to a concentration risk again. Because we don’t need just people who can build AI. We need professionals who can build with AI, who can govern AI, and who know when to override AI. That’s important. And in terms of risk, we have to make sure we don’t go towards over-automation without adequate human oversight. Biases need to be taken into account, because this should work well for formal workers as well as informal workers, right?

Women entrepreneurs, vernacular users, because we talked about context and the contextual aspect of it, right? Otherwise, we risk exclusion at scale. And, to Sri’s point, we want that inclusion you talked about. We have a lot of people who have been in the industry for a long time. You know, and I already talked about the…

I’m sorry, I know we are ending the session, so whoever is ringing the bell, please, we’ll finish on time. Yeah, okay. So I’ll just conclude there: it is about this transformation that we need.

Sangeeta Gupta

I think you articulated it very well, right: the risk of concentration, the risk of exclusion, and obviously the risk of not doing it thoughtfully. Those are very well articulated. So if I can come to you, right: from a workforce-transition standpoint, what do you see as our big opportunities and risks?

Sue Daley OBE

Yeah, I’m glad you could hear that bell as well; I thought it was only in my head. So, on priorities. Very quickly, for businesses, touching on some of the points you were making: embed lifelong learning. We all need to continuously learn, and so do our organisations. Businesses should think not just about jobs and roles but about tasks: what are organisations actually looking for people to do? Organisations also need to think about the opportunities and the risks, and invest in human skills alongside technical skills and governance skills. For government, there’s something in the UK we think should be prioritised, and I don’t know if this will resonate here in India: interoperability of skills credentials. If we’re focusing on lifelong learning, and I learn a skill, take a course, earn a credential, how is that transferable? Can it be recognised elsewhere? Because people will need to shift and people will need to move. And also a national taxonomy of skills, and perhaps of the requirements and the fundamental, foundational skills that we’re talking about.

Are we all talking the same language? Are we all talking about the same skills? Some priorities there, but I’ll leave it there.

Sangeeta Gupta

So, a new skills taxonomy and interoperability of skills; I think that’s going to be very important in this environment. But technology is changing so fast, right? What was applicable last year may no longer be applicable this year. Keech, can we come back to you for the closing comments? How are you seeing this?

Srikrishna Ramakarthikeyan

I’ll maybe just say one thing, okay? Sorry. I think inclusiveness has to be by design.

Sangeeta Gupta

Okay, we’re just ending. We said that we’re ending; it’s just 24 seconds, right? Yeah, why don’t you just close that out, Keech?

Srikrishna Ramakarthikeyan

If you look at it, the internet is very inclusive. That’s because academia made something free. I think we need academia to do that for AI; that’s how it becomes more inclusive, and I think this has to be a huge priority.

Sangeeta Gupta

Thank you. Thank you.

Welcome to you all today for our session, Reimagining AI and STEM Education for India’s Next Generation. Celebrating the vision of Viksit Bharat and its grandeur, we are witnessing the AI revolution during the AI India Summit. With a young population, a vibrant digital ecosystem and strong policy momentum, we are uniquely positioned to harness AI not only for the economic future,

Related Resources: Knowledge base sources related to the discussion topics (36)
Factual Notes: Claims verified against the Diplo knowledge base (3)
Confirmed (high confidence)

“Moderator Sangeeta Gupta welcomed the participants and laid out a three‑segment structure for the discussion: (1) nature of AI‑driven disruption, (2) emerging skill requirements, and (3) policy and education responses.”

The knowledge base identifies Sangeeta Gupta as the panel moderator and notes that the discussion was explicitly broken into three segments covering disruption, design, and related perspectives [S1] and [S11].

Additional Context (medium confidence)

“Srikrishna argued that AI capability is expanding rapidly and is reshaping software engineering more than testing or infrastructure.”

A related source discusses how AI is rapidly reshaping software developer careers, indicating a broader impact on software engineering roles, which adds nuance to the claim [S8].

Additional Context (low confidence)

“Adoption will be gradual, with an estimated 1‑2 % annual impact on employment, potentially rising to 2‑3 % as organisations catch up with the technology.”

Research on labour markets shows that despite rapid AI adoption, overall employment impacts have remained modest and anxiety about job loss has not translated into large-scale displacement, providing additional perspective on the size of the effect [S106].

External Sources (118)
S1
How AI Is Transforming Indias Workforce for Global Competitivene — -Sangeeta Gupta- Panel moderator (role/title not specified in transcript) -Srikrishna Ramakarthikeyan- (Role/title not …
S2
How AI Is Transforming Indias Workforce for Global Competitivene — -Srikrishna Ramakarthikeyan- (Role/title not clearly specified, but appears to be from IT services sector based on discu…
S3
How AI Is Transforming Indias Workforce for Global Competitivene — – Ravi Aurora- Srikrishna Ramakarthikeyan- Sue Daley OBE – Ravi Aurora- Sue Daley OBE – Ravi Aurora- Sue Daley OBE- Sr…
S4
Responsible AI for Children Safe Playful and Empowering Learning — -Speaker 1: Role/title not specified – appears to be a student or child participant in educational videos/demonstrations…
S5
AI Impact Summit 2026: Global Ministerial Discussions on Inclusive AI Development — -Speaker 1- Role/title not specified (appears to be a moderator/participant) -Speaker 2- Role/title not specified (appe…
S6
Building the Workforce_ AI for Viksit Bharat 2047 — -Speaker 1- Role/Title: Not specified, Area of expertise: Not specified -Speaker 3- Role/Title: Not specified, Area of …
S7
How AI Is Transforming Indias Workforce for Global Competitivene — – Srikrishna Ramakarthikeyan- Sue Daley OBE
S8
What happens to software careers in the AI era — AI is rapidly reshaping what it means to work as a software developer, and the shift is already visible inside organisat…
S9
The mismatch between public fear of AI and its measured impact — Insoftware development, “AI-assisted coding” usually means autocomplete, boilerplate generation, or debugging assistance…
S10
Fireside Conversation: 02 — Economic Impact and Gradual Transformation When addressing AI’s economic impact, LeCun cites economists including “Phil…
S11
https://dig.watch/event/india-ai-impact-summit-2026/how-ai-is-transforming-indias-workforce-for-global-competitivene — And I think there is a pretty big gap. Actually, I think that gap is good for workforce. Because no matter what the capa…
S12
State of Play: AI Governance / DAVOS 2025 — Krishna emphasizes the need to drive down the cost of AI technology to make it more inclusive and accessible globally. H…
S13
Singapore takes global lead in AI skills adoption — Workers in Singapore have emerged as the world leaders in adopting AI skills, according toLinkedIn’s recent Future of Wo…
S14
Bridging the AI innovation gap — LJ Rich: to invite our opening keynote. It’s a pleasure to invite to the stage the director of the Telecommunications St…
S15
AI agents offer major value but trust and data gaps remain — AI agents coulddrive up to $450 billion in economic value by 2028, according to new research by Capgemini. The gains wou…
S16
How can Artificial Intelligence (AI) improve digital accessibility for persons with disabilities? — The omnipresence of Artificial Intelligence (AI) and its applications across different sectors necessitates considering …
S17
How Multilingual AI Bridges the Gap to Inclusive Access — “AI can only serve the public good if it serves all languages and all cultures.”[1]. “Today, linguistic exclusion remain…
S18
Generative AI: Steam Engine of the Fourth Industrial Revolution? — To ensure widespread innovation and access to AI, it is imperative to keep AI platforms open and avoid closed ecosystems…
S19
Digital Democracy Leveraging the Bhashini Stack in the Parliamen — This comment fundamentally reframes how we think about AI system design – moving from standardization that excludes outl…
S20
Day 0 Event #248 No One Left Behind Digital Inclusion As a Human Right in the Global Digital Age — Rather than retrofitting accessibility, inclusive design must be built into systems from the beginning. This requires re…
S21
Artificial intelligence (AI) – UN Security Council — Furthermore, there was a consensus on the necessity for enhanced data literacy and data management skills. As AI systems…
S22
AI (and) education: Convergences between Chinese and European pedagogical practices — Norman Sze: Thank you for introduction. It’s my honor to join this forum and share insight from perspective of professio…
S23
Inclusive AI Starts with People Not Just Algorithms — Future careers will combine multiple disciplines, and the power lies in asking the right questions rather than just know…
S24
High Level Session 3: AI &amp; the Future of Work — Nthati Moorosi: Thank you, Programme Director, the moderator, and thank you for affording me this opportunity to talk a …
S25
How will AI transform the UK’s job landscape? — The Institute for Public Policy Research (IPPR) released its report, ‘Transformed by AI’, signalling a potential structu…
S26
Interdisciplinary approaches — AI-related issues are being discussed in various international spaces. In addition to the EU, OECD, and UNESCO, organisa…
S27
Why science metters in global AI governance — These key comments collectively transformed what could have been a technical discussion about AI governance into a profo…
S28
AI-Driven Enforcement_ Better Governance through Effective Compliance &amp; Services — These key comments fundamentally shaped the symposium by establishing a framework for responsible, human-centric AI adop…
S29
Building the Next Wave of AI_ Responsible Frameworks &amp; Standards — The panel demonstrated a maturing field where practitioners are converging on core principles while offering complementa…
S30
How to make AI governance fit for purpose? — These key comments fundamentally elevated the discussion from a typical regulatory debate to a sophisticated exploration…
S31
Secure Finance Risk-Based AI Policy for the Banking Sector — The panel explored how AI governance frameworks must account for India’s linguistic diversity, demographic heterogeneity…
S32
The UK government unveils a new Wireless Infrastructure Strategy — The UK government has announced a newWireless Infrastructure Strategyto boost digital connectivity, with an ambition for…
S33
Contents — JD is a Chinese retailer with significant e-commerce logistics. The operation of infrastructure networks, logistics, sou…
S34
Part 3: ‘Readiness across the spectrum: Countries’ — ESCWA’s19 public policy recommendationsprovide guidance for governments navigating the metaverse landscape. These recomm…
S35
Ministerial Roundtable — There’s a stark contrast between countries that have achieved near-universal connectivity (like Azerbaijan) and those st…
S36
Table of Contents — 1. (I) The process of identifying, measuring, and controlling (i.e., mitigating) risks in information systems so as to r…
S37
Summary — The Principality of Liechtenstein is supporting, developing and shaping digitalisation for the benefit of the population…
S38
e-Commerce Policy 2.0 (2025-30) Expanding Markets. Empowering People. Enabling Trust. — Participation will be open to startups, EMIs, banks, and other innovators, including nonlicensed entities, sup…
S39
Diplomatic policy analysis — Overreliance on technology:While machine learning and analytics are powerful tools, they are not infallible. Overdepende…
S40
Driving Social Good with AI_ Evaluation and Open Source at Scale — Moderate disagreement with significant implications. The disagreements reflect deeper tensions between technical efficie…
S41
Open Forum #79 Regulation of Autonomous Weapon Systems: Navigating the Legal and Ethical Imperative — **Human Control and Oversight**: Despite different approaches, speakers across perspectives emphasized the importance of…
S42
How AI Is Transforming Indias Workforce for Global Competitivene — And I think there is a pretty big gap. Actually, I think that gap is good for workforce. Because no matter what the capa…
S43
Building Inclusive Societies with AI — So I have a last question to each of you for what I request is maybe just a minute or two, a quick one. So Arutati, as p…
S44
AI adoption reshapes UK scale-up hiring policy framework — AI adoption is prompting UK scale-ups torecalibrateworkforce policies. Survey data indicates that 33% of founders antici…
S45
Australia weighs risks and rewards of rapid AI adoption — AI is reshaping Australia’s labour market at a pace that has reignited anxiety aboutjob security and skills. Experts say…
S46
Generative AI: Steam Engine of the Fourth Industrial Revolution? — Additionally, the analysis notes that the need for skill development aligns with the Sustainable Development Goals (SDGs…
S47
Living with the genie: Responsible use of genAI in content creation — In conclusion, the summary reiterates that AI algorithms are significantly shaped by their input data, with predominantl…
S48
AI Impact Summit 2026: Global Ministerial Discussions on Inclusive AI Development — And this requires proactive and coherent policy responses. First, people must be at the center of AI strategy, as we hea…
S49
Projecting Digital economy rules on Global South’s AI regulations: what is needed to safeguard human rights? ( Data Privacy Brasil Research Association) — Additionally, the analysis notes a neutral argument that there is a regulatory “race to the bottom.” This perspective hi…
S50
Interdisciplinary approaches — AI-related issues are being discussed in various international spaces. In addition to the EU, OECD, and UNESCO, organisa…
S51
AI for Social Empowerment_ Driving Change and Inclusion — The required policy responses span multiple domains:
S52
Hard power of AI — In conclusion, the analysis provides insights into the dynamic relationship between technology, politics, and AI. It hig…
S53
How to make AI governance fit for purpose? — This philosophical insight resonated throughout the discussion, providing a framework for understanding why AI governanc…
S54
Sovereign AI for India – Building Indigenous Capabilities for National and Global Impact — The discussion maintained an optimistic and collaborative tone throughout, characterized by constructive problem-solving…
S55
Artificial intelligence (AI) – UN Security Council — Furthermore, there was a consensus on the necessity for enhanced data literacy and data management skills. As AI systems…
S56
AI (and) education: Convergences between Chinese and European pedagogical practices — Norman Sze: Thank you for introduction. It’s my honor to join this forum and share insight from perspective of professio…
S57
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — AI is not just a technology but a social technical system, a system of systems, and one discipline alone is not sufficie…
S58
Responsible AI for Children Safe Playful and Empowering Learning — Wonderful. So there’s two things that we need for empowerment. One is foundational skills. The child needs to have a bas…
S59
AI-Powered Chips and Skills Shaping Indias Next-Gen Workforce — The discussion reveals strong consensus on key strategic directions: comprehensive ecosystem development beyond chip man…
S60
Discussion Report: AI as Foundational Infrastructure – A Conversation Between Laurence Fink and Satya Nadella — So the mindset we as leaders should have is we need to think about changing the workflow with the technology. Then that …
S61
Can (generative) AI be compatible with Data Protection? | IGF 2023 #24 — Armando José Manzueta-Peña:Well, thank you, Luca, for the presentation. I’m more than thrilled to be present here and to…
S62
NextGen AI Skills Safety and Social Value – technical mastery aligned with ethical standards — Dr. Sarabjot emphasizes critical thinking, questioning AI outputs, and understanding AI limitations as the primary requi…
S63
How AI Is Transforming Indias Workforce for Global Competitivene — Are we having the same conversations? Are we facing the same kind of issues? I think what I’ve just heard from my fellow…
S64
What happens to software careers in the AI era — AI is rapidly reshaping what it means to work as a software developer, and the shift is already visible inside organisat…
S65
TCS boosts development with AI-driven engineering — Tata Consultancy Services (TCS) isharnessinggenerative AI to accelerate development in the rapidly growing field of engi…
S66
https://dig.watch/event/india-ai-impact-summit-2026/how-ai-is-transforming-indias-workforce-for-global-competitivene — That is true. So I think whatever disruption we saw I thought would be there in infra. I think it is there but it is a p…
S67
Lightning Talk #246 AI for Sustainable Development Public Private Sector Roles — Guo argues that universities, as engines of knowledge and innovation, have a responsibility to lead AI development in wa…
S68
Workshop 6: Perception of AI Tools in Business Operations: Building Trustworthy and Rights-Respecting Technologies — The discussion identified key AI governance challenges including bias, transparency, privacy, and oversight. Addressing …
S69
High-level AI Standards panel — Need to move from purely technical approach to multidisciplinary, socio-technical paradigm
S70
Can (generative) AI be compatible with Data Protection? | IGF 2023 #24 — However, one lingering challenge in AI regulation is finding the right balance between adaptability and regulatory predi…
S71
AI-Driven Enforcement_ Better Governance through Effective Compliance &amp; Services — These key comments fundamentally shaped the symposium by establishing a framework for responsible, human-centric AI adop…
S72
Pre 12: Resilience of IoT Ecosystems: Preparing for the Future — As AI becomes integrated into IoT systems, proper governance frameworks are essential to ensure ethical and trustworthy …
S73
Balancing innovation and oversight: AI’s future requires shared governance — At IGF 2024, day two in Riyadh, policymakers, tech experts, and corporate leaders discussed one of the most pressing dil…
S74
AI governance struggles to match rapid adoption — Accelerating AI adoptionis exposingclear weaknesses in corporate AI governance. Research shows that while most organisat…
S75
The UK government unveils a new Wireless Infrastructure Strategy — The UK government has announced a newWireless Infrastructure Strategyto boost digital connectivity, with an ambition for…
S76
Bridging the Digital Skills Gap: Strategies for Reskilling and Upskilling in a Changing World — Adapting global best practices to local contexts while maintaining international cooperation and knowledge sharing Addr…
S77
Contents — JD is a Chinese retailer with significant e-commerce logistics. The operation of infrastructure networks, logistics, sou…
S78
UK government invests £1.1 billion to upskill workforce in future technologies — The UK governmenthas unveileda £1.1 billion package to upskill thousands of individuals in future technologies such as A…
S79
United Kingdom — The UK Digital Strategy, published in 2022, outlines a comprehensive approach to strengthening digital foundations, prom…
S80
Adoption of the agenda and organization of work — Inclusion of safeguards such as human rights provisions is necessary for international cooperation and law enforcement …
S81
What is the nature of the internet? Different Approaches | IGF 2023 WS #445 — Bruna Martin-Santos:Thanks, Paula. Yeah, just to add some more thoughts to this, I think I agree with some of both our c…
S82
AI That Empowers Safety Growth and Social Inclusion in Action — Second, they want to close capacity gaps. Many developing countries need infrastructure, skills, and compute to particip…
S83
Ministry of Communications &amp; Information Technology — The following key principles guide our approach to information security and further maintain the confidentiality, integr…
S84
Digital ECOnOMy POliCy lEgal inStRuMEntS — 18. See for example Dark Reading, 2012 which provides examples of longterm attacks at the US Chamber of Commerce, Norte…
S85
Laying the foundations for AI governance — The tone was collaborative and constructive throughout, with panelists building on each other’s points rather than disag…
S86
The Dawn of Artificial General Intelligence? / DAVOS 2025 — The purpose of this panel discussion was to explore different perspectives on the development of artificial general inte…
S87
Comprehensive Report: AI’s Impact on the Future of Work – Davos 2026 Panel Discussion — The tone was notably optimistic and solution-oriented rather than alarmist. While acknowledging legitimate concerns abou…
S88
The Foundation of AI Democratizing Compute Data Infrastructure — The tone was collaborative and solution-oriented throughout, with speakers building on each other’s ideas rather than de…
S89
How AI Is Transforming Diplomacy and Conflict Management — The discussion maintained a consistently thoughtful and cautiously optimistic tone throughout. Participants demonstrated…
S90
Networking Session #132 Cyberpolicy Dialogues:Connecting research/policy communities — The tone of the discussion was collaborative and solution-oriented. It began in a more formal, presentation-style format…
S91
High-Level Dialogue: The role of parliaments in shaping our digital future — The discussion maintained a tone of cautious optimism throughout. Speakers acknowledged significant challenges and risks…
S92
Transforming Agriculture_ AI for Resilient and Inclusive Food Systems — The tone was consistently optimistic yet pragmatic throughout the conversation. Speakers maintained an encouraging outlo…
S93
Comprehensive Discussion Report: Governance Frameworks for Reducing Digital Divides in African and Francophone Contexts — The tone was pragmatic and solution-oriented, with speakers expressing both frustration with past failures and cautious …
S94
WS #103 Aligning strategies, protecting critical infrastructure — The tone was largely collaborative and solution-oriented. Speakers built on each other’s points and emphasized the need …
S95
AI-Powered Chips and Skills Shaping Indias Next-Gen Workforce — The discussion maintained a consistently optimistic and collaborative tone throughout. Speakers expressed enthusiasm abo…
S96
Host Country Open Stage — The tone was consistently professional, optimistic, and forward-looking throughout. Speakers maintained an informative, …
S97
Multigenerational Collaboration: Rethinking Work, Learning and Inclusion in the Digital Age — The discussion maintained a professional yet urgent tone throughout, with speakers expressing both optimism about collab…
S98
HETEROGENEOUS COMPUTE FOR DEMOCRATIZING ACCESS TO AI — The discussion maintained a professional, collaborative, and optimistic tone throughout. Panelists demonstrated mutual r…
S99
AI for Good Technology That Empowers People — Vishnu Ram OV, session moderator/host
S100
Internet standards and human rights | IGF 2023 WS #460 — Challenges faced at standard forums were discussed, and there was an emphasis on finding ways to overcome these challeng…
S101
The Global Power Shift India’s Rise in AI & Semiconductors — High level of consensus with complementary perspectives rather than conflicting views. The speakers come from different …
S102
https://dig.watch/event/india-ai-impact-summit-2026/fireside-conversation-01 — Dario, you were in India in October, and you’re back again now. You spend a lot of time, actually, with the developer co…
S103
What Proliferation of Artificial Intelligence Means for Information Integrity? — Hicks argues that the information environment is undergoing rapid transformation that we haven’t fully grasped, with soc…
S104
Reinventing Digital Inclusion / DAVOS 2025 — Robert argues that many AI applications, especially for government and local use cases, don’t require the most advanced …
S105
Emerging Markets: Resilience, Innovation, and the Future of Global Development — Because it’s more open source. The costs are lower. It allows you to build on top of that.
S106
Labour market remains stable despite rapid AI adoption — Surveys show persistent anxiety about AI-driven job losses. Nearly three years after ChatGPT’s launch, labour data indica…
S107
© 2019, United Nations — From the perspective of ‘creative destruction’ (Schumpeter, 1942), the introduction of new technologies leads to some …
S108
The Impact of Digitalisation and AI on Employment Quality – Challenges and Opportunities — During a session focused on the impact of digitalisation on employment, experts from the International Labour Organisati…
S109
IGF Leadership Panel Event — However, Vint Cerf provided a counterbalancing perspective, arguing that “despite challenges, we must maintain enthusias…
S110
AI for Safer Workplaces & Smarter Industries Transforming Risk into Real-Time Intelligence — Piyush Nangru articulated the transformation in educational terms, stating that “coding is no longer a skill. It’s table…
S111
Open Forum: A Primer on AI — In conclusion, AI holds great promise in reshaping industries and driving innovation. It has the potential to create new…
S112
AI to disrupt jobs, warns DeepMind CEO, as Gen Alpha faces new realities — AI will likely cause significant job disruption in the next five years, according to Demis Hassabis, CEO of Google DeepMin…
S113
Building Trustworthy AI Foundations and Practical Pathways — However, Thakkar warns that current AI systems suffer from underlying problems that companies are addressing with superf…
S114
Writing as thinking in the age of AI — In his article, Richard Gunderman argues that writing is not merely a way to present ideas but a core human activity thro…
S115
Experts urge broader values in AI development — Since the launch of ChatGPT in late 2022, the private sector has led AI innovation. Major players like Microsoft, Google,…
S116
Acknowledgements — The report identifies four key areas where support is needed:
S117
Rights and Permissions — Creating a skilled workforce for the future of work rests on the growing demand for advanced cognitive skills, sociobeha…
S118
Towards a Reskilling Revolution — In addition to more generalized strategic workforce planning, companies can upgrade their future workforce preparedness …
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
S
Srikrishna Ramakarthikeyan
12 arguments · 139 words per minute · 2207 words · 950 seconds
Argument 1
AI reshapes software engineering more than testing or infra (Srikrishna Ramakarthikeyan)
EXPLANATION
He argues that AI’s disruptive impact is greatest on software engineering, surpassing its effects on testing and infrastructure management. The shift reflects how AI tools are increasingly automating coding tasks.
EVIDENCE
He first noted that earlier he would have placed testing before software engineering in terms of AI impact, but now flips that view, stating software engineering is the most affected area [34-37]. He also observes that disruption in infrastructure is plateauing and not showing leaps and bounds of change, unlike software engineering [39-42].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
External analyses highlight that AI is fundamentally changing software developer roles and that AI-assisted coding mainly affects engineering tasks rather than testing or infrastructure, supporting the claim that software engineering is most impacted [S8], [S9].
MAJOR DISCUSSION POINT
Scope of AI disruption in IT services
Argument 2
Adoption will be gradual, yielding low single‑digit workforce impact per year (Srikrishna Ramakarthikeyan)
EXPLANATION
He predicts that AI adoption will proceed slowly, resulting in only modest workforce displacement of a few percent annually. The limited impact is due to the time needed for organizations to integrate AI capabilities.
EVIDENCE
He estimates the impact on workforces to be low single-digit percentages per year, perhaps 1-2% now and 2-3% next year, citing the slow pace of adoption and multiple constraints [238-245].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Commentary notes that AI-driven productivity gains are modest (around 0.6% annually) and that a gap exists before large-scale adoption, indicating a low single-digit impact each year [S10], [S1].
MAJOR DISCUSSION POINT
Pace of AI adoption and workforce impact
DISAGREED WITH
Ravi Aurora
Argument 3
AI drives coding cost toward zero, enabling solutions previously infeasible (Srikrishna Ramakarthikeyan)
EXPLANATION
He claims that AI will reduce the cost of writing code to near zero, allowing organizations to solve problems that were previously too complex or expensive. This democratizes software creation and expands the range of possible applications.
EVIDENCE
He states that the cost of coding will become zero, which will let anyone solve many problems that were previously unaffordable or too complex, effectively turning code into a free resource [164-168].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The speaker himself is quoted saying “the cost of coding is going to become zero,” directly corroborating the argument [S1].
MAJOR DISCUSSION POINT
Economic impact of AI on software development
DISAGREED WITH
Sue Daley OBE
Argument 4
Digital‑native talent holds a natural advantage in AI adoption (Srikrishna Ramakarthikeyan)
EXPLANATION
He observes that younger, digitally native workers adapt more quickly to AI tools than older employees, giving them a competitive edge. Their familiarity with digital platforms translates into faster AI uptake.
EVIDENCE
He notes that young AI-native talent outperforms older workers, comparing it to the difference between Instagram users and non-users, and cites hiring top engineering school graduates who excelled at new AI techniques [141-142].
MAJOR DISCUSSION POINT
Talent advantage of digital natives
DISAGREED WITH
Sangeeta Gupta
Argument 5
Squad sizes shrinking (e.g., from 7‑10 to 3) and delivery cycles accelerating (Srikrishna Ramakarthikeyan)
EXPLANATION
He explains that AI enables smaller, more efficient development teams, reducing typical squad sizes and cutting delivery timelines dramatically. This reflects a re‑engineering of agile processes.
EVIDENCE
He describes a typical software squad shrinking from 7-10 members to just three (product owner, developer, tester) and the delivery cycle dropping from two weeks to two days [238-239].
MAJOR DISCUSSION POINT
Organizational restructuring due to AI
Argument 6
Role redesign is critical to realize AI value; adoption gaps hinder impact (Srikrishna Ramakarthikeyan)
EXPLANATION
He stresses that without redesigning roles to incorporate AI, organizations will not capture its benefits. Adoption gaps further slow the realization of AI’s potential.
EVIDENCE
He emphasizes that AI value will not be seen unless roles are redesigned, noting that adoption gaps are a major barrier and that impact may stay in low single-digit percentages [238-247]. He also points out that slow adoption is partly due to the need for organizational redesign rather than just new tools [252-253].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Discussion stresses that redesigning roles, not merely reskilling, is essential for capturing AI benefits, and that adoption gaps keep impact low [S1].
MAJOR DISCUSSION POINT
Importance of role redesign
Argument 7
Inclusion by design; free academic AI resources to democratize access (Srikrishna Ramakarthikeyan)
EXPLANATION
He argues that AI should be made inclusive from the outset, leveraging free academic resources to broaden participation. This approach mirrors how the internet became inclusive through open academic contributions.
EVIDENCE
He states that inclusiveness must be built by design, citing the internet’s inclusivity due to free academic resources and calling for similar openness for AI [365-373].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Calls for open AI ecosystems and inclusive design echo the need for free academic resources to broaden participation [S18], [S20].
MAJOR DISCUSSION POINT
Inclusive AI development
Argument 8
Inclusiveness must be built into AI systems by design (Srikrishna Ramakarthikeyan)
EXPLANATION
He reiterates that AI systems need to be designed with inclusivity at their core, ensuring that diverse users benefit equally. This principle should guide policy and technical development.
EVIDENCE
He succinctly says, “inclusiveness has to be by design,” and expands by noting the internet’s inclusive nature stemming from free academic contributions, urging the same for AI [365][371-373].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The same inclusive-by-design principle is advocated in literature on open AI and universal design [S18], [S20].
MAJOR DISCUSSION POINT
Designing inclusive AI
Argument 9
AI literacy should be treated as a foundational skill comparable to language proficiency, essential for all professionals
EXPLANATION
He likens AI knowledge to English, suggesting that understanding AI will become a basic requirement for effective participation in the modern economy.
EVIDENCE
He states “AI knowledge is like English, it’s foundational, it’s fundamental” [291-292].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Data literacy is highlighted as essential for AI work, suggesting a broader foundational literacy analogous to AI literacy [S21].
MAJOR DISCUSSION POINT
AI as a basic literacy
AGREED WITH
Ravi Aurora
Argument 10
Future engineers need to be interdisciplinary generalists rather than siloed specialists, because AI problems span multiple domains
EXPLANATION
He argues that the traditional separation of engineering disciplines is no longer useful; instead, a unified engineering approach is required to address AI challenges.
EVIDENCE
He says “You don’t want a mechanical engineer, you don’t want a software engineer, you don’t want an AI engineer… you want an engineer” [300-304].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Future AI careers are described as requiring multiple disciplinary fluency, reinforcing the need for interdisciplinary engineers [S23].
MAJOR DISCUSSION POINT
Interdisciplinary engineering for AI
Argument 11
AI will generate more new job opportunities than it eliminates, because it enables solving problems previously infeasible, expanding the scope of work.
EXPLANATION
He argues that AI expands the range of problems that can be tackled, creating entirely new domains of activity and therefore more employment than the jobs it displaces. This net‑positive effect stems from AI lowering the cost of coding to near zero and unlocking solutions that were previously unaffordable.
EVIDENCE
He notes that opportunities for a young, technically savvy person are enormous and that AI’s real value lies in solving problems that could not be solved before, while also stating that the cost of coding will become zero, allowing anyone to address previously impossible challenges [45][166-168].
MAJOR DISCUSSION POINT
Net job creation from AI
Argument 12
AI policy should prioritize inclusiveness and broad participation rather than focusing solely on regulation, contrasting US approach with more inclusive models.
EXPLANATION
He argues that discussions in the United States centre on whether and how to regulate AI, while the United Kingdom and other regions emphasize making AI inclusive for all, suggesting that policy should be designed to ensure broad access and participation.
EVIDENCE
He notes that in the US the conversation centres on regulation, namely who should regulate and whether it should be the state or central government, whereas he observes a focus on inclusiveness in the UK and elsewhere, highlighting a material difference in approaches and emphasizing that inclusiveness will lead to better outcomes [120-134].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Policy discussions emphasize inclusive AI development over pure regulatory focus, aligning with the argument [S20], [S23].
MAJOR DISCUSSION POINT
Inclusive AI policy design
R
Ravi Aurora
14 arguments · 142 words per minute · 2127 words · 896 seconds
Argument 1
AI is embedded in decision‑making, shifting skill needs beyond downstream compliance (Ravi Aurora)
EXPLANATION
He points out that AI is now part of core decision‑making processes rather than a back‑office compliance function, requiring new skill sets across the organization.
EVIDENCE
He explains that AI is being embedded into decision-making, public infrastructure, service delivery and governance, moving it beyond a downstream compliance role [51-53].
MAJOR DISCUSSION POINT
AI’s expanded role in organizations
Argument 2
Need for system‑level judgment to monitor model drift and high‑stakes decisions (Ravi Aurora)
EXPLANATION
He emphasizes that professionals must be able to assess AI outputs, detect model drift, and intervene when decisions have high stakes, especially in regulated industries.
EVIDENCE
He describes the need for system-level judgment to understand AI outputs, detect model drift, and act appropriately in high-stakes, regulated contexts [57-64].
MAJOR DISCUSSION POINT
Critical skill: system‑level judgment
Argument 3
Interdisciplinary fluency across engineering, regulation, risk and user behavior (Ravi Aurora)
EXPLANATION
He argues that AI challenges sit at the intersection of multiple domains, so professionals need fluency across engineering, regulation, risk management, and user behavior.
EVIDENCE
He notes that AI challenges are not purely technical but span engineering, regulation, risk, and user behavior, requiring interdisciplinary fluency [64-66].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need for interdisciplinary skill sets in AI careers is underscored in discussions about future AI workforces [S23].
MAJOR DISCUSSION POINT
Interdisciplinary skill requirement
Argument 4
Continuous learning mindset to keep pace with evolving AI models (Ravi Aurora)
EXPLANATION
He stresses that because AI systems continuously evolve with data, workers must adopt a lifelong learning attitude to stay relevant.
EVIDENCE
He highlights the necessity of a continuous learning mindset as AI models evolve with data and the workforce must evolve alongside them [67-70].
MAJOR DISCUSSION POINT
Lifelong learning for AI
Argument 5
Deep contextual awareness for multilingual and diverse user contexts (Ravi Aurora)
EXPLANATION
He points out that AI must understand varied linguistic and cultural contexts in India, requiring engineers to embed contextual awareness into models.
EVIDENCE
He discusses the need for AI agents to grasp multiple languages, dialects, informal systems, and real-life contexts to avoid misinterpretation [71-74].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Research on multilingual AI stresses that serving all languages and cultures is essential for inclusive AI outcomes [S17].
MAJOR DISCUSSION POINT
Contextual awareness in AI
Argument 6
AI literacy must be foundational across all disciplines, not just computer science (Ravi Aurora)
EXPLANATION
He argues that AI education should extend beyond CS majors to all fields, ensuring a broad base of AI‑literate professionals.
EVIDENCE
He states that AI should not be limited to computer-science majors and must be incorporated across a broad set of disciplines, emphasizing curriculum redesign [217-219] and embedding AI governance early [220-222].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Calls for data literacy across the workforce imply that AI literacy should also be cross-disciplinary [S21].
MAJOR DISCUSSION POINT
Broad AI education
Argument 7
Early integration of governance with interdisciplinary teams is required (Ravi Aurora)
EXPLANATION
He describes Mastercard’s AI governance model where governance functions work horizontally across data, science, product, legal, compliance, and engineering, highlighting the need for early integration.
EVIDENCE
He outlines a formal AI governance framework with a chief AI & data governance officer, privacy-by-design, and a cross-functional AI governance team spanning data, science, product, legal, compliance, and engineering [191-200].
MAJOR DISCUSSION POINT
Governance integration
Argument 8
Shift from pure coding to AI governance, code verification and oversight (Sue Daley OBE)
Argument 9
Emphasis on redesigning roles rather than only reskilling existing staff (Ravi Aurora)
EXPLANATION
He argues that AI transforms tasks within jobs, so organizations should focus on redesigning roles to incorporate AI rather than merely reskilling employees.
EVIDENCE
He stresses the need to redesign roles, not just reskill, because AI changes tasks rather than eliminates whole roles, and highlights role redesign as a priority [222-226].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Analyses point out that role redesign, not just reskilling, is the primary lever for AI adoption impact [S1].
MAJOR DISCUSSION POINT
Role redesign vs reskilling
Argument 10
Corporations should co‑design curricula with academia and embed AI governance principles (Ravi Aurora)
EXPLANATION
He suggests that companies collaborate with universities to create courses that reflect real‑world AI challenges and embed governance principles from the start.
EVIDENCE
He describes how corporations can work with chief learning officers and academia to design courses based on real-world examples, ensuring AI governance is embedded early in curricula [215-222].
MAJOR DISCUSSION POINT
Industry‑academia curriculum co‑design
Argument 11
Concentration risk – need equitable access to data, compute and training across geographies (Ravi Aurora)
EXPLANATION
He warns that a few institutions may dominate AI development due to superior data and compute resources, creating concentration risk; equitable access is essential.
EVIDENCE
He highlights concentration risk where a small set of companies or talent pools pull ahead because of better data, compute, and research ecosystems, and calls for equitable access across geographies, including tier-2 and tier-3 institutions [326-339].
MAJOR DISCUSSION POINT
Concentration risk in AI ecosystem
Argument 12
Mastercard’s formal AI governance framework with chief AI & data governance officer, privacy‑by‑design (Ravi Aurora)
EXPLANATION
He outlines Mastercard’s comprehensive AI governance structure, featuring dedicated leadership and a privacy‑by‑design approach to manage AI risks.
EVIDENCE
He details a formal AI governance framework with a chief AI and data governance officer, a chief privacy officer, and a privacy-by-design approach integrated across product, engineering, legal, compliance, and data science teams [191-200][194-197].
MAJOR DISCUSSION POINT
Corporate AI governance model
Argument 13
Contextual awareness is vital to avoid bias and exclusion in AI outcomes (Ravi Aurora)
EXPLANATION
He reiterates that AI must understand local contexts, languages, and cultural nuances to prevent biased or exclusionary results.
EVIDENCE
He stresses the importance of contextual awareness for multilingual and diverse user contexts, noting that without it AI can produce biased outcomes [71-74].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Inclusive design literature and studies on bias highlight that contextual awareness is key to preventing exclusionary AI results [S16], [S17], [S20].
MAJOR DISCUSSION POINT
Avoiding bias through context
Argument 14
Effective AI governance requires dedicated risk‑management professionals who proactively oversee AI systems rather than merely enforce compliance rules.
EXPLANATION
He stresses that governing AI at scale is fundamentally a workforce challenge that needs people who can manage AI‑related risk across functions, not just auditors checking boxes. Proactive risk‑management roles are essential for responsible AI deployment.
EVIDENCE
He states that governing AI at scale is a workforce challenge requiring interdisciplinary skill and that governance professionals must manage risk and not just enforce rules, highlighting the need for a cross-functional AI governance team [201-202].
MAJOR DISCUSSION POINT
Proactive risk management in AI governance
S
Sue Daley OBE
9 arguments · 184 words per minute · 2940 words · 957 seconds
Argument 1
Youth anxiety can be turned into agency through upskilling and reskilling (Sue Daley OBE)
EXPLANATION
She suggests that the anxiety young workers feel about AI can be transformed into proactive agency by providing upskilling and reskilling opportunities.
EVIDENCE
She acknowledges existing anxiety and concern among workers about displacement, then describes turning that anxiety into agency through continuous learning, upskilling programs, and reskilling pathways, including a one-year conversion course for non-AI graduates [98-104][105-107].
MAJOR DISCUSSION POINT
Managing youth anxiety
Argument 2
Lifelong learning plus human/social (“soft”) skills are essential (Sue Daley OBE)
EXPLANATION
She emphasizes that alongside technical abilities, workers need human‑centric soft skills such as communication and client‑facing abilities to add value in an AI‑augmented workplace.
EVIDENCE
She notes that automation frees people for problem-solving and client advisory roles, stressing the need to teach human/social skills in addition to technical and governance competencies [94-95][95-96].
MAJOR DISCUSSION POINT
Importance of soft skills
Argument 3
UK AI Skills Partnership aims to train >1 million people, includes one‑year conversion courses (Sue Daley OBE)
EXPLANATION
She describes the UK government’s AI Skills Partnership, which targets training over a million individuals and offers a one‑year conversion programme for those without an AI degree.
EVIDENCE
She cites the AI Skills Partnership’s goal to train over one million people and mentions a one-year conversion course for university graduates lacking an AI background [95-107].
MAJOR DISCUSSION POINT
Large‑scale upskilling initiative
Argument 4
Whole‑of‑government, industry‑academia collaboration is needed for coordinated upskilling (Sue Daley OBE)
EXPLANATION
She argues that coordinated effort across government bodies, industry, and academia is essential to deliver effective AI upskilling and reskilling at scale.
EVIDENCE
She references the UK’s AI Opportunities Action Plan, the AI Skills Partnership bringing together government, TechUK, and other bodies to design upskilling programmes, and stresses the need for joint collaboration across sectors [89-96].
MAJOR DISCUSSION POINT
Coordinated upskilling effort
Argument 5
Interoperability of skill credentials and a national taxonomy are priorities (Sue Daley OBE)
EXPLANATION
She highlights the need for a common framework that makes skill credentials portable and recognizable across employers and borders, supported by a national taxonomy of skills.
EVIDENCE
She discusses the importance of interoperable skill credentials, a national taxonomy, and a common language for skills, noting initiatives like TechSkills Gold Accreditation that align university curricula with employer needs [110-118].
MAJOR DISCUSSION POINT
Standardising skill credentials
Argument 6
Effective AI governance requires interdisciplinary skills and human oversight (Sue Daley OBE)
EXPLANATION
She asserts that AI governance cannot rely solely on technical controls; it needs interdisciplinary expertise and continuous human oversight to ensure responsible AI use.
EVIDENCE
She emphasizes that AI challenges are interdisciplinary and that human skills are needed to interact with clients and oversee AI outcomes, linking governance with both technical and human competencies [92-95].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Future AI roles are described as interdisciplinary, requiring human oversight for responsible governance [S23].
MAJOR DISCUSSION POINT
Interdisciplinary governance
Argument 7
Human‑in‑the‑loop needed for code checking and decision validation to prevent over‑automation (Sue Daley OBE)
EXPLANATION
She warns that even with AI‑assisted coding, humans must verify code and AI decisions to avoid errors and over‑automation.
EVIDENCE
She points out that while AI can generate code, there remains a need for humans to check that code, and later to verify the AI’s own checks, highlighting the necessity of human oversight in governance [175-178].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Analyses of AI-assisted coding note that tools are not fully autonomous and still need human verification, supporting the human-in-the-loop requirement [S9].
MAJOR DISCUSSION POINT
Human oversight in AI coding
Argument 8
AI initiatives need an iterative, flexible approach because static policies quickly become obsolete as technology evolves
EXPLANATION
She argues that there is no single, permanent solution for AI governance; programmes must be designed to adapt rapidly to new developments and shifting contexts.
EVIDENCE
She says “I don’t think there’s a silver bullet… the moment you put in a task force or initiative, it may very quickly need to shift and need to change” [109-110].
MAJOR DISCUSSION POINT
Need for adaptable AI policy frameworks
Argument 9
AI upskilling programmes should include clear pathways for mid‑career professionals to transition into new AI‑related roles, not only entry‑level training.
EXPLANATION
She points out that beyond training fresh graduates, there is a need for structured reskilling routes that enable existing workers to move into AI‑focused positions, ensuring the whole workforce can adapt to AI‑driven change.
EVIDENCE
She mentions “pathways for mid-career into new careers” as part of the AI Skills Partnership initiatives, indicating a focus on reskilling workers already in the labour market [105-107].
MAJOR DISCUSSION POINT
Mid‑career reskilling pathways
S
Sangeeta Gupta
7 arguments · 137 words per minute · 1828 words · 796 seconds
Argument 1
India’s fragmented approach versus the UK’s integrated model (Sangeeta Gupta)
EXPLANATION
She contrasts India’s disaggregated, multi‑government efforts on AI workforce transition with the UK’s more coordinated whole‑of‑government strategy.
EVIDENCE
She notes that in India multiple governments and organisations like NASSCOM are working on parts of the AI upskilling pie without a whole-of-government approach, and asks whether the UK has an integrated model [108-109].
MAJOR DISCUSSION POINT
Comparative policy coordination
Argument 2
AI creates both opportunities and anxiety among Indian youth, requiring clear understanding and navigation of AI-driven workforce shifts
EXPLANATION
She points out that while AI opens new possibilities for the Indian workforce, it also generates significant concern among young people, making it essential to demystify AI and guide the transition.
EVIDENCE
She notes that AI is “obviously creating a number of opportunities” but also “creating a lot of anxiety amongst the youth” and stresses the need to “decode what does AI really mean and how do we navigate these shifts” [9-12].
MAJOR DISCUSSION POINT
Balancing AI opportunities with youth anxiety
Argument 3
Higher education and school curricula must be restructured to embed AI principles, interdisciplinary skills, and continuous learning
EXPLANATION
She argues that current college and school programmes are insufficient for the rapidly changing AI landscape and must be updated to equip graduates with the necessary technical and soft skills.
EVIDENCE
She states that “there’s a lot of change that is needed at a college level and school level on how, you know, even how you’re learning so that you are ready for this very, very changing world” [73-74].
MAJOR DISCUSSION POINT
Curriculum redesign for AI readiness
Argument 4
Upskilling efforts must go beyond elite institutions and include tier‑2 and tier‑3 colleges to avoid marginalising a large talent pool
EXPLANATION
She warns that focusing only on top‑tier universities would exclude many capable graduates from smaller colleges, and calls for inclusive training programmes that reach the broader student base.
EVIDENCE
She asks whether the new AI skill focus will be “largely on more elite top tier institutions and a large volume of students that were probably studying in tier 2, tier 3 colleges” and notes that “We are closing out opportunities for them” [117-118].
MAJOR DISCUSSION POINT
Inclusive upskilling across educational tiers
Argument 5
AI‑native talent may lack foundational coding and problem‑solving skills due to over‑reliance on AI tools
EXPLANATION
She raises the concern that a generation raised with AI assistance might never develop core technical competencies, potentially weakening the overall skill base.
EVIDENCE
She observes that “AI native talent is great, but that talent will have never learned to, you know, work without AI” and asks whether this leads to a “lack of some foundational skills” [145-148].
MAJOR DISCUSSION POINT
Risk of skill erosion in AI‑dependent workforce
Argument 6
Role redesign and the emergence of forward‑deployed engineers are key trends that organisations must adapt to
EXPLANATION
She highlights that AI is reshaping job structures, prompting a shift from traditional roles to new configurations such as forward‑deployed engineers, and asks how companies are responding.
EVIDENCE
She asks Kish about “role redesign” and the concept of “forward deployed engineers becoming like the new buzzword in town” [231-236].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Discussion of AI-driven transformation stresses role redesign as a central organizational response [S1].
MAJOR DISCUSSION POINT
Organisational role redesign in the AI era
Argument 7
AI education curricula must embed ethics, trust, security and privacy principles to ensure future professionals develop responsible AI practices.
EXPLANATION
She asks for recommendations on how universities can incorporate ethical foundations, trust, security and privacy into AI programmes, highlighting the necessity of responsible AI education for upcoming talent.
EVIDENCE
She explicitly requests guidance on embedding ethics, principles, trust, security and privacy into curricula when speaking with Ravi Aurora about education recommendations [204-209].
MAJOR DISCUSSION POINT
Embedding AI ethics and trust in education
S
Speaker
1 argument · 95 words per minute · 25 words · 15 seconds
Argument 1
Multi‑stakeholder collaboration is essential for AI workforce transformation
EXPLANATION
The opening remark brings together senior representatives from government policy, industry, and technology innovation, signalling that tackling AI’s impact on jobs and skills requires coordinated action across these sectors.
EVIDENCE
The speaker lists the President of Global Public Policy and Government Affairs at Mastercard, the Co-Founder and Managing Director of Nucleus Software, and the Director of Tech and Innovation at Tech UK, demonstrating a deliberately cross-sectoral panel assembled to discuss AI and workforce issues [1].
MAJOR DISCUSSION POINT
Need for coordinated multi‑sector effort on AI and workforce change
AGREED WITH
Sangeeta Gupta, Sue Daley OBE
Agreements
Agreement Points
Role redesign and interdisciplinary skill requirement are essential to realize AI value
Speakers: Srikrishna Ramakarthikeyan, Ravi Aurora, Sue Daley OBE
Role redesign is critical to realize AI value; adoption gaps hinder impact
Emphasis on redesigning roles rather than only reskilling existing staff
Effective AI governance requires interdisciplinary skills and human oversight
All three speakers stress that without redesigning roles and embedding interdisciplinary expertise, organisations will not capture AI benefits; adoption gaps and the need for human oversight are highlighted as barriers to value creation [238-247][222-226][92-95].
POLICY CONTEXT (KNOWLEDGE BASE)
International policy discussions emphasize interdisciplinary approaches to AI governance, such as OECD and UNESCO initiatives [S50] and calls for a new interdisciplinary field to address AI complexity [S57]; India’s AI workforce strategy also highlights role redesign and cross-skill development [S59].
A continuous/lifelong learning mindset is required to keep pace with AI evolution
Speakers: Ravi Aurora, Sue Daley OBE
Continuous learning mindset to keep pace with evolving AI models
Lifelong learning plus human/social (‘soft’) skills are essential
Both speakers emphasise that workers must adopt a lifelong learning approach to stay relevant as AI systems evolve rapidly, with continuous upskilling being crucial for future readiness [67-70][98-104].
POLICY CONTEXT (KNOWLEDGE BASE)
This aligns with SDG 4 on quality education and skill development for the digital economy [S46]; the AI Impact Summit 2026 stresses lifelong learning and social-protection policies for workers adapting to AI [S48]; and expert panels call for continuous upskilling and critical thinking in AI curricula [S62].
Youth anxiety about AI should be transformed into agency through upskilling
Speakers: Sangeeta Gupta, Sue Daley OBE
AI creates both opportunities and anxiety among Indian youth, requiring clear understanding and navigation of AI‑driven workforce shifts
Youth anxiety can be turned into agency through upskilling and reskilling
Both acknowledge that AI generates anxiety among young workers and propose targeted upskilling programmes to convert that anxiety into proactive agency and career opportunities [10-12][98-104].
POLICY CONTEXT (KNOWLEDGE BASE)
Australian experts note heightened job-security anxiety due to rapid AI adoption and stress empowerment through skills development [S45]; child-focused AI literacy frameworks also advocate building agency via foundational AI skills [S58]; inclusive AI policy discussions similarly call for turning anxiety into participation [S43].
Coordinated whole‑of‑government and multi‑stakeholder collaboration is needed for AI workforce transformation
Speakers: Sangeeta Gupta, Sue Daley OBE, Speaker
India’s fragmented approach versus the UK’s integrated model
Whole‑of‑government, industry‑academia collaboration is needed for coordinated upskilling
Multi‑stakeholder collaboration is essential for AI workforce transformation
All three highlight that effective AI workforce transition requires an integrated approach linking government, industry and academia, moving beyond fragmented efforts to a unified strategy [108-109][89-96][1].
POLICY CONTEXT (KNOWLEDGE BASE)
The AI Impact Summit calls for coherent whole-of-government strategies and multi-stakeholder investment in skills [S48]; interdisciplinary policy work is being pursued across the EU, OECD, UNESCO and other bodies [S50]; and policy roadmaps emphasize cross-sector collaboration for AI empowerment [S51].
Interdisciplinary fluency across technical, regulatory and user‑behavior domains is vital for AI implementation
Speakers: Ravi Aurora, Sue Daley OBE
Interdisciplinary fluency across engineering, regulation, risk and user behavior
Effective AI governance requires interdisciplinary skills and human oversight
Both speakers argue that AI challenges span multiple domains and therefore require professionals who can operate across engineering, risk, regulation and user-experience, coupled with human oversight to ensure responsible deployment [64-66][92-95].
POLICY CONTEXT (KNOWLEDGE BASE)
International AI governance discussions highlight the need for interdisciplinary fluency to address technical, legal and societal dimensions [S57]; policy research roadmaps call for a new interdisciplinary field to manage AI’s system-of-systems nature [S57]; skill frameworks stress combining technical, regulatory and user-experience knowledge [S62].
AI literacy is a foundational skill comparable to language proficiency and must be embedded across all disciplines
Speakers: Srikrishna Ramakarthikeyan, Ravi Aurora
AI literacy should be treated as a foundational skill comparable to language proficiency, essential for all professionals
AI literacy must be foundational across all disciplines, not just computer science
Both assert that AI knowledge will become as essential as basic language skills, requiring its integration into curricula and professional development across all fields, not limited to computer-science majors [291-292][217-219].
POLICY CONTEXT (KNOWLEDGE BASE)
UN Security Council consensus underscores data and AI literacy as essential competencies for all professionals [S55]; educational initiatives for children stress AI literacy as a basic right and skill [S58]; and SDG 4 emphasizes AI-related literacy as part of quality education [S46].
Similar Viewpoints
Both stress that AI systems must retain human oversight, with professionals required to verify AI‑generated code and decisions to avoid over‑automation risks [191-200][175-178].
Speakers: Ravi Aurora, Sue Daley OBE
Effective AI governance requires interdisciplinary skills and human oversight
Human‑in‑the‑loop needed for code checking and decision validation to prevent over‑automation
Both highlight that AI policy and programme design must be adaptable and iterative, as static solutions quickly become outdated in a fast‑moving technological landscape [109-110][238-247].
Speakers: Ravi Aurora, Sue Daley OBE
AI initiatives need an iterative, flexible approach because static policies quickly become obsolete as technology evolves
“I don’t think there’s a silver bullet… the moment you put in a task force or initiative, it may very quickly need to shift and need to change”
Unexpected Consensus
Both UK and US‑influenced speakers stress the need for flexible, iterative policy approaches despite differing regulatory philosophies
Speakers: Ravi Aurora, Sue Daley OBE
AI initiatives need an iterative, flexible approach because static policies quickly become obsolete as technology evolves
“I don’t think there’s a silver bullet… the moment you put in a task force or initiative, it may very quickly need to shift and need to change”
While Ravi discusses AI governance from a corporate perspective and Sue from a policy standpoint, both converge on the unexpected consensus that AI initiatives must be designed with flexibility and continuous revision, a point not explicitly raised by other participants [109-110][238-247].
POLICY CONTEXT (KNOWLEDGE BASE)
Analyses note the gap between rapid technological change and slower policy cycles, urging flexible, iterative frameworks [S52][S53]; concerns about a regulatory “race to the bottom” further motivate adaptable policy design [S49]; UK scale-up hiring surveys illustrate cautious, adaptive workforce policies in response to AI [S44].
Overall Assessment

The panel shows strong convergence on several key themes: the necessity of role redesign and interdisciplinary skill sets; the centrality of lifelong learning; the importance of coordinated multi‑stakeholder action; the view of AI literacy as a basic skill; and the need to address youth anxiety through upskilling. These shared positions indicate a high level of consensus on how to manage AI‑driven workforce transformation.

High consensus across speakers, suggesting that policy makers, industry leaders and educators are aligned on the strategic priorities for AI workforce transition, which should facilitate coordinated actions and accelerate effective implementation.

Differences
Different Viewpoints
AI‑native talent is an advantage versus a risk of eroding foundational skills
Speakers: Srikrishna Ramakarthikeyan, Sangeeta Gupta
Digital‑native talent holds a natural advantage in AI adoption (Srikrishna Ramakarthikeyan)
AI‑native talent may lack foundational coding and problem‑solving skills due to over‑reliance on AI tools (Sangeeta Gupta)
Srikrishna argues that younger, digitally native workers adapt faster to AI and give organisations a competitive edge, citing examples of Instagram-savvy hires and top-engineering-school graduates [141-142]. Sangeeta counters that a generation raised with AI tools may never learn to work without them, potentially weakening core coding and problem-solving abilities [145-148].
POLICY CONTEXT (KNOWLEDGE BASE)
Debates on AI education balance the benefits of AI-native talent with preserving core foundational skills, as highlighted in child-focused AI literacy and SDG-aligned skill development frameworks [S58][S46].
AI will reduce coding costs to near zero versus the continued need for human code verification and governance
Speakers: Srikrishna Ramakarthikeyan, Sue Daley OBE
AI drives coding cost toward zero, enabling solutions previously infeasible (Srikrishna Ramakarthikeyan)
Shift from pure coding to AI governance, code verification and oversight; humans must still check AI‑generated code (Sue Daley OBE)
Srikrishna predicts that the cost of coding will become zero, allowing anyone to solve complex problems and making coding a solved problem [164-168]. Sue argues that even with AI-generated code, humans are required to review and validate the output to avoid errors, emphasizing a human-in-the-loop governance model [175-178].
POLICY CONTEXT (KNOWLEDGE BASE)
Policy analyses warn against overreliance on AI without human oversight, emphasizing risks of bias and the necessity of verification [S39]; the tension between technical efficiency and human governance is discussed in efficiency-oversight debates [S40]; autonomous-systems discussions also stress maintaining human control in critical decisions [S41].
Speed of AI adoption and its workforce impact
Speakers: Srikrishna Ramakarthikeyan, Ravi Aurora
Adoption will be gradual, yielding low single‑digit workforce impact per year (Srikrishna Ramakarthikeyan)
AI is already embedded in core decision‑making and requires rapid role redesign and interdisciplinary integration (Ravi Aurora)
Srikrishna estimates AI will affect only 1-2 % of the workforce this year and 2-3 % next year, citing slow adoption and organizational redesign constraints [238-245]. Ravi stresses that AI is now part of decision-making across finance, risk and governance, urging swift integration, role redesign and continuous learning to keep pace [51-53][222-226].
POLICY CONTEXT (KNOWLEDGE BASE)
Reports document rapid AI diffusion and its labour-market effects, from India’s projected workforce shifts [S42] to UK founders anticipating job cuts [S44] and Australian anxiety over job security [S45]; these illustrate the accelerating impact on workers.
Unexpected Differences
Optimism about zero‑cost coding versus necessity of human oversight
Speakers: Srikrishna Ramakarthikeyan, Sue Daley OBE
AI will make coding cost zero, turning code into a free resource (Srikrishna Ramakarthikeyan)
Even with AI‑generated code, humans must verify and govern it to prevent errors (Sue Daley OBE)
Srikrishna’s vision of coding becoming a cost-free, democratized activity [164-168] clashes with Sue’s insistence that a human-in-the-loop remains essential for code validation and governance [175-178], revealing an unexpected tension between the promise of full automation and practical governance needs.
POLICY CONTEXT (KNOWLEDGE BASE)
Reflects the same policy concerns about overdependence on generative AI and the need for human governance noted in overreliance studies [S39] and the efficiency-oversight debate [S40].
Speed of AI impact versus perceived immediate strategic urgency
Speakers: Srikrishna Ramakarthikeyan, Ravi Aurora
AI adoption will be slow, yielding modest workforce impact (Srikrishna Ramakarthikeyan)
AI is already embedded in core decision‑making and requires rapid organizational change (Ravi Aurora)
While Srikrishna forecasts a gradual rollout with low-single-digit impact [238-245], Ravi portrays AI as already central to decision-making and urges swift role redesign and continuous learning [51-53][222-226]. The contrast between a measured rollout and an urgent transformation agenda was not anticipated given the shared focus on AI.
POLICY CONTEXT (KNOWLEDGE BASE)
Highlights the documented mismatch between fast-moving AI technologies and slower policy responses, a theme in AI governance literature calling for urgent yet thoughtful action [S52][S53].
Overall Assessment

The panel largely agrees on the need for upskilling, interdisciplinary skills, and inclusive AI policies. The main points of contention revolve around the implications of AI‑driven automation: whether AI will render coding essentially free and eliminate the need for human coders, versus the necessity of human oversight; and whether AI‑native talent represents a strategic advantage or a risk of eroding core technical foundations. A secondary tension exists over the expected speed of AI adoption, with some participants forecasting a gradual impact and others urging rapid transformation.

Moderate disagreement. The divergences are focused on future expectations and implementation details rather than fundamental goals, suggesting that consensus on overarching objectives (upskilling, inclusion, interdisciplinary collaboration) remains strong, but policy and practice pathways will require careful negotiation to balance optimism about automation with safeguards for skill integrity and governance.

Partial Agreements
All three agree that upskilling is essential, but differ on the primary mechanism: Ravi pushes for direct industry‑academia collaboration and role redesign [215-222]; Sue highlights a government‑led AI Skills Partnership with conversion courses and a national taxonomy for portable credentials [95-107][110-118]; Sangeeta calls for broader curriculum reform at school and college levels [73-74].
Speakers: Ravi Aurora, Sue Daley OBE, Sangeeta Gupta
Need for upskilling and reskilling the workforce for AI (Ravi Aurora, Sue Daley OBE, Sangeeta Gupta)
Industry‑academia co‑design of curricula and embedding AI governance early (Ravi Aurora)
National AI Skills Partnership targeting >1 million learners and credential interoperability (Sue Daley OBE)
Revamping school and college curricula to embed AI principles (Sangeeta Gupta)
All concur on the goal of inclusive AI, yet propose different pathways: Ravi warns of concentration risk and calls for equitable infrastructure across geographies and tier‑2/3 institutions [326-339]; Srikrishna stresses designing inclusiveness from the start, citing the internet’s open academic origins [365-373]; Sue focuses on a coordinated whole‑of‑government partnership and skill‑credential interoperability [89-96][109-110].
Speakers: Ravi Aurora, Srikrishna Ramakarthikeyan, Sue Daley OBE
Inclusive AI ecosystem is required (Ravi Aurora, Srikrishna Ramakarthikeyan, Sue Daley OBE)
Concentration risk must be mitigated through equitable access to data, compute and training (Ravi Aurora)
Inclusiveness should be built by design, leveraging free academic resources (Srikrishna Ramakarthikeyan)
Whole‑of‑government, industry‑academia partnership needed for coordinated upskilling (Sue Daley OBE)
Takeaways
Key takeaways
AI is reshaping software engineering more than testing or infrastructure, driving coding costs toward zero and enabling previously infeasible solutions.
Adoption of AI in the workforce will be gradual, likely resulting in low single‑digit percentage impacts on employment each year.
New skill requirements include system‑level judgment, interdisciplinary fluency (engineering, regulation, risk, user behavior), a continuous learning mindset, deep contextual awareness, and strong human/social (soft) skills.
AI governance must be embedded early in product design, with interdisciplinary teams and human‑in‑the‑loop oversight to manage model drift, bias, and high‑stakes decisions.
Role redesign (e.g., smaller agile squads, faster delivery cycles) is essential; merely reskilling staff is insufficient.
Education systems need to integrate AI literacy across all disciplines, co‑design curricula with industry, and provide pathways for conversion courses and lifelong learning.
A coordinated, whole‑of‑government and industry‑academia approach (as exemplified by the UK AI Skills Partnership) is critical for scaling upskilling and ensuring inclusivity.
Risks include concentration of talent and resources, over‑automation without human oversight, exclusion of non‑English or informal sector workers, and fragmented policy approaches (especially in India).
Inclusiveness must be built into AI systems by design, leveraging free academic resources and equitable access to data and compute.
Resolutions and action items
Commitment from panelists to promote interdisciplinary collaboration between industry, academia, and government for AI curriculum design (Ravi Aurora).
Recommendation to develop a national taxonomy of AI‑related skills and ensure interoperability of skill credentials (Sue Daley OBE).
Suggestion for corporations to involve frontline engineers in co‑creating training programs and real‑world use‑case curricula with universities (Ravi Aurora).
Call for governments to invest in data and compute infrastructure (AI growth zones, national data library) to support widespread AI adoption (Sue Daley OBE).
Encouragement for organizations to shift from mandatory training mandates to self‑directed learning, leveraging the natural motivation of digital‑native employees (Srikrishna Ramakarthikeyan).
Proposal to focus on role redesign rather than only reskilling, reducing squad sizes and accelerating delivery cycles (Srikrishna Ramakarthikeyan).
Unresolved issues
Exact magnitude and timeline of job displacement versus job creation across different sectors remain uncertain.
How to ensure that young professionals acquire deep foundational coding and system knowledge while relying heavily on AI tools.
Specific mechanisms for coordinating fragmented Indian state and central initiatives into a unified AI workforce strategy.
Details on how to provide contextual AI training for multilingual and informal‑sector workers at scale.
Methods to monitor and mitigate concentration risk where a few institutions dominate access to data, compute, and talent.
Concrete metrics or benchmarks for measuring the effectiveness of upskilling programs and AI governance frameworks.
Suggested compromises
Balancing automation of routine tasks with investment in human‑centric skills (soft skills, governance) to retain meaningful work (Sue Daley OBE).
Adopting an iterative, flexible policy approach rather than a single, fixed solution, allowing rapid adaptation as technology evolves (Sue Daley OBE).
Combining top‑tier talent pipelines with broader inclusion of tier‑2/3 institutions to avoid concentration while still leveraging elite expertise (Ravi Aurora, Srikrishna Ramakarthikeyan).
Encouraging voluntary upskilling rather than mandatory mandates, trusting employee motivation while still providing resources (Srikrishna Ramakarthikeyan).
Thought Provoking Comments
Software engineering is now the most disrupted area, even more than testing or infrastructure management.
He challenged the common assumption that testing would be the first casualty of AI, highlighting a rapid shift in where AI impact is felt.
Prompted the moderator to ask about implications for fresh graduates and led the panel to focus on upskilling software engineers rather than just QA staff.
Speaker: Srikrishna Ramakarthikeyan
The real value of AI is not in reducing headcount but in solving problems we couldn’t solve before.
This reframed AI from a threat of job loss to an opportunity for new problem‑solving capabilities.
Shifted the tone from anxiety to opportunity, encouraging other speakers (Ravi and Sue) to discuss skill sets needed to leverage AI rather than merely defending jobs.
Speaker: Srikrishna Ramakarthikeyan
We need system‑level judgment, interdisciplinary fluency, a continuous learning mindset, and deep contextual awareness to work with AI in high‑stakes, regulated environments.
He introduced a nuanced skill taxonomy that goes beyond technical coding ability, emphasizing judgment and context.
Guided the discussion toward the importance of governance, risk, and domain knowledge, influencing Sue’s remarks on AI governance and the need for interdisciplinary training.
Speaker: Ravi Aurora
Turn anxiety into agency – empower people to take the lead in upskilling, with initiatives like the UK AI Skills Partnership aiming to train over one million people.
She offered a concrete, large‑scale response to workforce anxiety, linking policy to personal empowerment.
Created a turning point where the conversation moved from describing problems to presenting actionable government‑backed solutions, prompting Sangeeta to compare India’s fragmented approach with the UK’s coordinated effort.
Speaker: Sue Daley OBE
In the US the debate is about regulation; in the UK (and elsewhere) the focus is on inclusiveness – making AI work for everyone.
He highlighted a strategic difference in national AI policy approaches, introducing the theme of inclusivity versus regulation.
Sparked a deeper dialogue on policy design, leading Sue to stress the need for iterative, flexible frameworks and Sangeeta to question India’s disaggregated governance.
Speaker: Srikrishna Ramakarthikeyan
The cost of coding will become zero; AI will make code cheap, allowing us to solve problems that were previously too complex or expensive.
He projected a radical shift in the economics of software development, challenging the notion that coding skills will remain a premium commodity.
Prompted Sue to discuss the future role of humans in checking AI‑generated code and raised concerns about losing foundational coding knowledge, deepening the debate on future job functions.
Speaker: Srikrishna Ramakarthikeyan
Automation of junior roles removes the pathway through which people learn context; we must consider how to teach context if those roles disappear.
She raised a subtle but critical point about the hidden value of entry‑level positions for building domain knowledge.
Shifted the conversation toward the importance of preserving experiential learning, influencing Ravi’s later emphasis on role redesign rather than pure reskilling.
Speaker: Sue Daley OBE
AI transforms tasks within jobs rather than eliminating entire roles; we should focus on role redesign and building inclusive, distributed talent pipelines.
He reframed the narrative from job loss to task evolution, emphasizing redesign over reskilling.
Steered the panel toward concrete strategies for organizations and governments, leading Srikrishna to discuss adoption speed and the need for systematic rollout.
Speaker: Ravi Aurora
There is a concentration risk: a few institutions or companies could pull ahead due to better data, compute, and talent, leaving others behind.
He introduced a macro‑level risk that goes beyond individual skill gaps, highlighting systemic inequality in AI development.
Prompted Sangeeta to ask about inclusive policies and Sue to mention the need for interoperable skill credentials and national taxonomy, expanding the discussion to ecosystem‑wide solutions.
Speaker: Ravi Aurora
Inclusiveness has to be by design; academia should make AI education free and open, just as the internet became inclusive.
He offered a clear, actionable principle for making AI benefits broadly accessible, tying back to earlier points on policy and education.
Served as a closing rallying call, reinforcing the earlier themes of inclusive policy and education, and aligning with Sue’s emphasis on universal access to AI infrastructure.
Speaker: Srikrishna Ramakarthikeyan
Overall Assessment

The discussion was shaped by a series of pivot points where speakers moved the conversation from fear of displacement to concrete opportunities and systemic solutions. Early insights about which job families are most affected (software engineering) and the reframing of AI’s value set the stage for deeper analysis of required skill sets. Ravi’s articulation of system‑level judgment and interdisciplinary fluency, followed by Sue’s policy‑level response (turning anxiety into agency), introduced a practical roadmap that shifted the tone from speculative to actionable. Srikrishna’s contrasts between regulatory focus and inclusivity, plus his bold claim that coding will become free, injected strategic and economic perspectives that broadened the debate. Concerns about loss of contextual learning and concentration risk added nuance, prompting calls for role redesign, inclusive education, and interoperable credentials. Collectively, these comments redirected the panel from describing disruption to proposing coordinated, inclusive, and interdisciplinary responses, highlighting the need for policy, industry, and academia to work together.

Follow-up Questions
Which specific services or functions within IT services are most likely to be impacted by AI, and how will that change over time?
Identifying the most vulnerable services helps firms prioritize reskilling and investment decisions.
Speaker: Srikrishna Ramakarthikeyan
How can we ensure that AI‑native talent retains foundational coding and problem‑solving skills despite heavy reliance on AI tools?
There is a risk that over‑dependence on AI erodes core technical competencies needed for future adaptability.
Speaker: Sangeeta Gupta
What concrete measures can turn AI‑related anxiety among workers into agency and proactive upskilling?
Converting fear into action is essential for a smooth workforce transition and for maintaining productivity.
Speaker: Sue Daley OBE
How can school and university curricula be better aligned with the AI revolution to provide relevant, interdisciplinary skills?
Curriculum alignment ensures graduates possess the skills demanded by AI‑driven industries.
Speaker: Sue Daley OBE
What effective models exist for role redesign in IT services, especially regarding the emergence of ‘forward‑deployed engineers’?
Understanding new role structures is critical for organizations to leverage AI while preserving employee value.
Speaker: Sangeeta Gupta
How can AI governance and interdisciplinary collaboration be embedded into higher‑education curricula across non‑technical disciplines?
Broad AI literacy beyond computer science is needed to create a workforce capable of responsible AI deployment.
Speaker: Ravi Aurora
What strategies can mitigate concentration risk where a few institutions or firms dominate AI talent, data, and compute resources?
Preventing concentration ensures equitable access to AI opportunities and avoids widening socioeconomic gaps.
Speaker: Ravi Aurora
How can a national skills‑credential interoperability framework be created to allow lifelong learning and mobility across sectors?
Interoperable credentials enable workers to upskill continuously and transition between roles and industries.
Speaker: Sue Daley OBE
What policies and investments are needed to provide equitable access to data, compute, and research infrastructure for SMEs and under‑served regions?
Access to foundational AI infrastructure is a prerequisite for widespread adoption and inclusive growth.
Speaker: Sue Daley OBE
How can AI systems be designed to avoid exclusion of informal workers, women entrepreneurs, and vernacular language users?
Ensuring AI works for diverse user groups prevents large‑scale exclusion and supports inclusive economic development.
Speaker: Ravi Aurora
What pathways exist for developers whose coding tasks are automated to transition into code‑review or governance roles, and how can organizations support this shift?
Providing clear career transition routes helps retain talent and maintains oversight of AI‑generated code.
Speaker: Sue Daley OBE
How can organizations preserve the contextual learning traditionally gained in junior roles when those roles are automated?
Contextual knowledge is vital for effective AI oversight and for making informed business decisions.
Speaker: Sue Daley OBE
What evidence‑based upskilling programs are most effective for mid‑career professionals transitioning to AI‑augmented roles?
Mid‑career reskilling is crucial to avoid large‑scale displacement and to retain experienced talent.
Speaker: Sue Daley OBE
How can AI skills partnerships be structured to remain iterative and flexible as technology evolves?
Flexibility ensures that initiatives stay relevant and can adapt to rapid AI advancements.
Speaker: Sue Daley OBE
What data is needed to accurately quantify AI‑driven workforce impact (e.g., displacement percentages) across industries?
Reliable metrics are essential for policymakers and businesses to design effective transition strategies.
Speaker: Srikrishna Ramakarthikeyan
How can AI be taught as a foundational literacy akin to English across all educational levels?
Embedding AI literacy early prepares the next generation for an AI‑centric economy.
Speaker: Srikrishna Ramakarthikeyan
What specific examples from the UK’s AI infrastructure and adoption initiatives could be adapted for India?
Learning from UK successes can accelerate India’s AI deployment while avoiding known pitfalls.
Speaker: Sangeeta Gupta
How can inclusiveness be built into AI development and education by design?
Design‑level inclusivity ensures that AI benefits are broadly shared and that barriers to entry are minimized.
Speaker: Srikrishna Ramakarthikeyan

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Building the Workforce: AI for Viksit Bharat 2047


Session at a glance: Summary, keypoints, and speakers overview

Summary

The India AI Impact Summit opened with Dr. Washima emphasizing that AI is the “next big thing after electricity” and calling for trust-based, collaborative ethical frameworks to guide fast-paced AI development for economic growth, social good, and human capital [4-10][11-13]. Chairperson Shubhavi S. Radha Chauhan reinforced the government’s human-centric AI vision, advocating small, sector-specific language models and a capacity-building agenda that equips officials with the skills to set data guardrails and evaluate outcomes [23-33][34-38].


India’s approach was illustrated by the Commission’s policy frameworks, operational guidelines, and the nation’s extensive digital public infrastructure, which together support a workforce of 5.8 million AI professionals and enable localized AI solutions such as tiny models for rural health and agriculture [73-78][92-98]. Panelist Anil Shivastava warned that AI cannot simply be layered onto legacy systems; it requires re-engineering of data silos, multilingual support, and new security considerations [122-138]. Guilherme Albusco Almeida highlighted Brazil-India collaboration opportunities in R&D, capacity-building platforms, and ethical-assessment frameworks, noting existing South-South partnerships [148-166]. Robin Scott identified major gaps: only 26 % of implementers understand their own ethical frameworks and many pilots lack evaluation plans, yet over 90 % remain optimistic about AI’s productivity gains [182-194].


Addressing environmental impact, Robin cited a new AI-climate course, Guilherme described “green AI” initiatives and AI-driven forest monitoring, and Anil noted Google’s pledge for carbon-neutral data centres by 2030 [201-203][204-213][215-218]. The summit culminated in the launch of a Digital Capacity Building Alliance, a global public-good model linking governments, industry, academia, and civil society to scale inclusive, ethical AI capacity building [247-270], reinforced by Minister Dr. Jitendra Singh’s keynote on dynamic governance, continuous learning, and the necessity of human-in-the-loop integrity [288-306]. The event closed with a call to translate the discussed frameworks into concrete actions for responsible AI deployment worldwide [387].


Keypoints


Major discussion points


Human-centric, ethical AI governance and the need for trust-based frameworks – The opening remarks stress “carve out trust-based collaborative ethical frameworks” for the AI-DPD (dynamic AI) age [10]; the Chairperson reiterates the PM’s “human-centric framework for ethical, accountable and inclusive AI governance” [23-24]; the summit theme itself is “AI for economic development, social good, safe and trusted AI, and human capital” [11-13]; later, Robin highlights that only 26% of implementers understand their own government’s ethical frameworks, exposing a major risk [182-188].


Building localized, sector-specific AI capabilities and competency pathways – Shubhavi argues that the future lies in “small language models, context-specific, sectoral, and decentralized” rather than monolithic models, requiring customized competency frameworks [30-33]; Anil explains that legacy IT systems must be re-engineered, data prepared, and multilingual support added to enable AI at the edge (e.g., ASHA workers) [126-135]; Ramadorai adds that the next billion AI users will interact with “tiny embedded AI in phones, tractors, classrooms, clinics and local government systems” [96-98].


International (Brazil-India) collaboration for AI capacity building – Guilherme describes existing Brazil-India exchanges, complementary R&D, and parallel capacity-building institutions, proposing a “South-South” partnership to scale knowledge for public servants [147-160]; Robin notes the joint effort with Brazil’s ENAP and Google.org to train a million public servants, emphasizing the strategic value of such collaborations [172-179]; the moderator’s question to Guilherme explicitly asks about deepening Brazil-India cooperation [143-144].


Technical and operational risks of integrating AI into legacy public systems – Anil points out that AI cannot simply be layered onto existing siloed systems; it requires re-engineering, data readiness, and attention to security and data-sovereignty vectors [126-138]; Robin’s survey data reveal a gap between pilots and evaluation plans (only 45% have evaluation plans despite 72% planning pilots) [188-190]; Ramadorai also warns that governance challenges extend beyond technology to ensure officials understand system limits and ethical use [99-101].


Environmental sustainability of AI deployment – The final panel question raises the climate footprint of AI; Robin mentions a dedicated “AI and climate” course developed with Stanford’s Doerr School of Sustainability [201-202]; Guilherme differentiates “AI for green” (energy-efficient GPUs) and using AI to support climate policy, citing Brazil’s AI-driven forest-monitoring system [204-213]; Anil adds Google’s commitment to carbon-neutral data centres by 2030 and the need for energy-efficient models [215-218].


Overall purpose / goal of the discussion


The event serves to launch and promote a Digital Capacity Building Alliance that will provide a global, non-proprietary framework for AI skill development, ethical standards, and public-service innovation. It aligns with the India AI Impact Summit’s theme of leveraging AI for economic development, social good, and safe, trusted deployment, and seeks to translate the Capacity Building Commission’s policy work into actionable, collaborative programs for India and partner nations [11-13][39-40][250-267].


Overall tone and its evolution


– The opening segment is formal and aspirational, emphasizing vision, responsibility, and collective purpose [4-13].


– As the panel proceeds, the tone becomes technical and problem-focused, with detailed discussion of legacy system challenges, data sovereignty, and competency design [126-138].


– Mid-session, the conversation shifts to a collaborative and optimistic mood, highlighting Brazil-India partnerships and shared training initiatives [147-179].


– Towards the end, the tone turns pragmatic and urgent, addressing gaps in ethical understanding, evaluation, and the environmental impact of AI [182-190][197-213].


– The closing remarks return to a celebratory and call-to-action tone, urging participants to seize the moment, uphold integrity, and implement the newly launched alliance [288-306][350-356].


Overall, the discussion moves from high-level vision to concrete challenges, then to partnership opportunities, and finally to firm commitments and a rallying call for collective action.


Speakers

Dr. Washima


Moderator – Moderates the session proceedings [S13]


Shubhavi S. Radha Chauhan – Chairperson of the Capacity Building Commission; expertise in public administration and capacity building [S7]


Guilherme Albusco Almeida – Senior Consultant, Institute of Management and Corporation in Public Services, Government of Brazil; expertise in government reform, digital transformation and AI ethics [S8]


Dr. Jitendra Singh – Honorable Minister of State for Personnel, Public Grievances and Pensions; expertise in administrative reforms and India’s science & innovation agenda [S9]


Anil Shivastava – Chief Architect for Google’s public-sector work; leads Public Policy Strategic AI Solution Engagements for Google Cloud in India; expertise in AI solutions, cloud computing and public-sector transformation (as described in the transcript)


Subramanian Ramadorai – Chairperson of Karmayogi Bharat and former MD & CEO of Tata Consultancy Services; expertise in technology engineering and the intersection of technology with government institutions [S18]


Robin Scott – Co-founder and CEO of Apolitical; expertise in AI capacity-building programmes for public servants worldwide [S19]


Audience – Various audience members (e.g., Professor Charu, Indian Institute of Public Administration – public administration; Yuv from Senegal) [S1][S2]


Speaker 1 – Unnamed speaker who presented the AI-enabled government blueprint after the minister’s launch (role not specified)


Speaker 3 – Unnamed speaker who asked a follow-up question near the end of the session (role not specified)


Additional speakers not listed in the provided speakers list


Mr. Frager – Mentioned by the moderator when introducing the panel


Mr. S. Amogarai – Referred to as Chairperson of Karmayogi Bharat (possible duplicate of Subramanian Ramadorai)


Mr. Schneider – Cited by the moderator in the opening remarks


Mr. Jeet Adani – Cited by the moderator in the opening remarks


Kirti Ardhan – Named by Dr. Jitendra Singh during his address


Other unnamed participants – Various individuals who spoke briefly or were referenced in the dialogue but are not part of the original speakers list.


Full session report – Comprehensive analysis and detailed insights

The summit opened with Dr Washima reminding the audience that “technology is a great leveler, and AI, they say, is the next big thing after electricity” and urging participants to “carve out trust-based collaborative ethical frameworks” for the fast-paced AI-DPD age so that public services can be delivered faster, better, safer and more equitably [4-5][6][8-10][11-13]. He emphasized that AI must augment, not replace, human judgment, a point later echoed by Dr Jitendra Singh [10][324-327].


Chairperson Shubhavi S. Radha Chauhan of the Capacity Building Commission highlighted the Prime Minister’s “Mani Vision” – a human-centric framework for ethical, accountable and inclusive AI governance [23-24] – and argued that the future will lie in “small language models, context-specific, sectoral, and decentralised” rather than massive monolithic systems [30-31]. To operationalise this vision, the Commission has produced holistic policy frameworks, operational guidelines, personalised learning pathways and dynamic governance models that together support a workforce of 5.8 million professionals [34-38][73-78] and enable localised solutions such as tiny models for rural health and agriculture [92-98].


The moderator introduced the panel, naming its moderator, Mr S. Ramadorai (rendered “S. Amogarai” in the transcript), Chairperson of Karmayogi Bharat, and the three distinguished panelists – Prof Guilherme Albusco Almeida (Brazil), Anil Shivastava (Google Cloud), and Robin Scott (co-founder of a global public-servant network) [42-50].


Subramanian Ramadorai opened the discussion by reflecting on past technological revolutions, noting that “the most important question … is not how far we can scale AI but how we can recognise it as a movement that elevates humanity” [66-68]. He positioned India’s approach as a “third way” – a partnership model that sits between the US-led market race and China’s state-led techno-nationalism [73-76]. He underscored India’s extensive digital public infrastructure (Aadhaar, UPI, digital locker, etc.) as a “trust architecture” that can support the next billion AI users who will interact with “tiny embedded AI in phones, tractors, classrooms, clinics and local government systems” [92-98].


When asked about the technical risks of layering AI onto legacy systems, Anil Shivastava warned that existing IT platforms are “centred on silos of data and business logic” and cannot simply have an AI layer added [122-124]. He called for a re-engineering of data pipelines, multilingual data preparation for edge-AI (e.g., ASHA workers), and a renewed focus on security and data-sovereignty before any AI-driven decision-making can be trusted [126-138].


Prof Guilherme Albusco Almeida responded by outlining Brazil-India collaboration opportunities. He cited ongoing exchanges, complementary R&D capacities and parallel capacity-building institutions, proposing a “South-South” partnership that would scale knowledge for civil-service AI training across both nations [147-152][155-166]. He also mentioned Brazil’s ethical-assessment framework for AI and the potential to co-develop sector-specific models and procurement guidelines.


Robin Scott presented survey data that revealed a stark governance gap: only 26% of public-sector AI implementers say they understand their own government’s ethical framework, meaning the majority are “freestyling” [182-188]. Moreover, while 72% plan pilots, merely 45% have an evaluation plan, highlighting a risk of unchecked deployments [188-190]. Despite these gaps, she noted that over 90% of public servants remain optimistic about AI’s productivity gains, estimating a $1.75 trillion upside if the technology is harnessed responsibly [191-194].


The final panel question turned to environmental sustainability. Robin announced a new “AI-and-climate” course co-created with Stanford’s Doerr School of Sustainability [201-203]. Guilherme differentiated between “green AI” (energy-efficient hardware) and “AI for green” (using AI to support climate policy), citing Brazil’s Rural Environmental Registry that employs AI to monitor deforestation and guide reforestation [204-213]. Anil added that Google has pledged to make all its data centres carbon-neutral by 2030 and is ready to partner with India to embed similar targets in Indian facilities [215-218].


Audience members then raised concerns about timelines, invoking the Doomsday Clock metaphor and asking whether India could achieve its AI-driven governance vision by 2047 or even earlier [227-244]. One participant, Professor Charu, called for a “generic, hyper-localisable international AI impact framework assessment tool” to bridge the current lack of procurement and ethical guidelines [246].


The moderator announced the launch of the Digital Capacity Building Alliance, a non-proprietary, demand-driven framework that combines global AI principles, digital-public-good standards, and the Mission Karmayogi model to fund and coordinate capacity-building activities across governments, industry, academia, civil society and start-ups [251-270].


Minister Dr Jitendra Singh delivered the keynote, stressing that governance, capacity building and AI are all “dynamic, continuous” processes that must be synchronised [292-298]. He praised India’s political dispensation for removing nearly 2,000 outdated rules in the past decade and for embracing “human-in-the-loop” integrity as non-negotiable [310-327]. He introduced the acronym M-A-N-A-A (M = Moral & ethical systems; A = Accountable governance; N = National sovereignty; A = Accessible & inclusive; A = Valid & legitimate) [370-376].


The session closed with the moderator thanking all participants, inviting a group photograph and urging attendees to “translate the discussed frameworks into concrete actions for responsible AI deployment worldwide” [387-389].


Across the discussion, there was strong consensus that ethical, human-centred AI governance is essential (Dr Washima, Robin Scott, Dr Singh) [10][182-188][324-327]; that AI can act as a catalyst for inclusive, citizen-centred development (Dr Washima, Dr Singh, Subramanian, Speaker 1) [4][66-68][258-262]; and that small, sector-specific models are preferable to monolithic systems (Shubhavi, Subramanian) [31-33][96-98]. Participants also agreed on the need for customised competency frameworks and personalised learning pathways (Shubhavi, Anil, Robin, Speaker 1, Subramanian) [33-35][126-135][201-203][258-262][104-105]. South-South collaboration, especially between Brazil and India, was highlighted as a viable “third way” to shape global AI norms (Guilherme, Subramanian) [147-152][155-166][73-76].


Notable disagreements emerged. Robin’s data exposed a gap between the aspirational “trust-based collaborative ethical frameworks” that Dr Washima called for [10] and the reality that only a quarter of implementers understand them [182-188]. On climate-focused AI, Anil presented a well-funded corporate pledge for carbon-neutral data centres [215-218] while Robin suggested that the AI-climate course “has too much money”, indicating uncertainty about funding adequacy [201-203]. Finally, Anil argued for extensive re-engineering of legacy systems [126-138] whereas Subramanian emphasised the opportunity of deploying lightweight edge models without a full overhaul [96-98].


The event concluded with the launch of the Digital Capacity Building Alliance and a commitment by Google to achieve carbon-neutral data centres by 2030 [215-218][263-270]. A pledge was also made to develop sector-specific, edge-optimised language models for rural contexts [96-98] and to continue large-scale AI training for public servants (target of one million, 400,000 already achieved) [172-179]. Unresolved issues include the detailed operational design of the Alliance’s funding mechanisms, the precise steps needed to align legacy-infrastructure modernisation with workforce capability development, the establishment of standardised evaluation processes for AI pilots, and the creation of a universally applicable, hyper-localisable AI impact assessment tool [246][188-190][104-105].


In sum, the India AI Impact Summit reaffirmed that AI should be deployed as a human-centred, trustworthy technology that drives inclusive socio-economic development while respecting ethical norms and environmental limits. By coupling sector-specific capacity-building pathways with global South-South partnerships and a clear commitment to human-in-the-loop integrity, the participants charted a roadmap that moves from aspirational vision to actionable, collaborative implementation [4][66-68][324-327][258-262].


Session transcript – Complete transcript of the session
Dr. Washima

and partnerships from the Capacity Building Commission to deliver welcome remarks. Good afternoon. Thank you, Mustafa. A very good afternoon to all of you, distinguished guests, panelists, fellow participants, colleagues from Karmayogi Bharat and Capacity Building Commission, and a warm welcome to everyone. Technology, they say, is a great leveler, and AI, they say, is the next big thing after electricity. We as individuals are part of society, and we play a dual role as individuals. And we are deeply impacted by these two to an extent that we cannot distinguish between these two anymore. As the popular Bollywood line says, Mayor Mary Panhai is actually me. And my AI, in certain context. This room carries the huge responsibility of making that distinction happen.

Responsibility is to carve out trust-based collaborative ethical frameworks so that the demands of the fast-paced dynamic AI-DPD age, which constantly creates push-up demands for faster, better, safer public services, are met by a well-informed design and delivery model. Today we gather here as a first step, aligned with the India AI Impact Summit theme, AI for economic development, social good, safe and trusted AI, and human capital. The need for collective discourse at the policy level is crucial if we are to harness equitable benefits, mitigate risks, and ensure an inclusive governance transformation. To carry this foundation forward, we have our distinguished panelists, we have our chairperson, and we look forward to the next session. Thank you for listening today.

Welcome, everyone.

Moderator

Thank you, Dr. Washima. I now invite our Chairperson of the Capacity Building Commission, Shubhavi S. Radha Chauhan, to deliver the opening address.

Shubhavi S. Radha Chauhan

Thank you. Thank you, Mr. Sir. Namaskar. It’s my privilege to extend a very warm welcome to all of you on behalf of Team Mission Karmayogi. And I must disclose that this speech that I’m going to read out is handcrafted. No AI in the process has been used. Yes, absolutely. Be compliant. Our Honorable PM yesterday outlined Mani Vision, a human-centric framework for ethical, accountable and inclusive AI governance. Mission Karmayogi has and shall continue to relentlessly craft and embed these wish capabilities that will translate this vision into reality. Every service today must evolve at a pace, hitherto untraceable. It must learn continuously, develop deep competencies and dynamically adapt to eternally emerging work and workspaces. Underprivileged competency and skill is a humanistic capacity, that non-negotiable layer of intellect, diligence and values that has to flavor every decision made and every service delivered by governments and its systems.

From the community health worker delivering nutrition to an expecting mother to the balancing worker strategizing access to specialized healthcare. It is the quality of this human layer that will ultimately define the quality of service we deliver to our citizens. The future of AI, more precisely the agentic AIs, will not be in massive monolithic models. It will be in small language models, context-specific, sectoral, and decentralized. This would entail creating the customized, sector-specific competency framework that can suitably deploy AI agents to arrive at decision points that solve local needs and problems in its context. Capacity building must therefore focus on enabling our officials to deconstruct complexities, impose appropriate guardrails on data and its use, before evaluation benchmarks.

before using the authentic insights to taking decisions. In the past year, the Commission has developed holistic policy frameworks that have been tested and institutionalized, established operational guidelines, especially those for identifying competency gaps, leading to personalized learning pathways for each one of our learners. Dynamic governance models have evolved for stakeholders, especially our training institutions, ensuring they remain agile and responsive to competency demands. Continuous learner feedback loops, rigorously analyzed, have become integral to refining and strengthening the system. We are at the community portal’s times today as a testament to this remarkable trajectory. It has developed teamwork capacities effectively, at scale, and across the human race, to achieve this diversity of India’s governance ecosystem. It is in the context of this evolving journey that we see today’s event as an opportunity to take the plea, grounded in deep faith, that Mission for New Delhi, as a public good, must inform every other government that is on a similar and seminal mission to deliver inclusive, ethical and impactful public services.

I sincerely hope that this deliberation here produces a cohesive and common pathway for all of us to enter upon as global partners. Thank you so much.

Moderator

Thank you so much, ma’am, for placing everything into context for the panel discussion through your address. Thank you, Mr. Frager, for the remaining proceedings of the evening. Now I take immense pleasure in inviting our panelists for today’s discussion, and also our moderator, Professor Amogarai. May I kindly request Professor Amogarai sir to join us on the dais. The last panel discussion will be moderated by Mr. S. Amogarai, Chairperson of Karmayogi Bharat and former M.D. and CEO of Tata Consultancy Services. Mr. Amogarai has a specialty at the intersection of technology, engineering and technology governance and institutions, and he has worked at key institutions across academia, industry and public policy institutions, including as advisor to the R.W.P.

Minister in the National Council on Skill Development. We welcome you, sir. On the panel, we are joined by Guilherme Albusco Almeida from Brazil, a senior consultant at the Institute of Management and Corporation in Public Services, Government of Brazil, working at the intersection of government reform and digital transformation. Anil Shivastava, Chief Architect for Google’s work in the public sector. He currently leads the Public Policy Strategic AI Solution Engagements of Google Cloud in India. And our final panelist for today, Robin Scott. She is the co-founder and CEO of Apolitical, a global online network of public servants. Thank you so much for joining us and taking time out for this session. With this, I hand it

Subramanian Ramadorai

Thank you. So, the mic’s there. Two minutes. Then I’ll say the second. No good answers. You got nothing to do. Before I begin, I want to extend a very warm welcome to the panelists. Thank you so much for agreeing to be a part of this. It will be a learning experience even for me, for sure. After spending over five decades in the technology industry, I’m probably the oldest here. It fills me with immense hope to sit alongside a group of young leaders who are shaping the next chapter of this global technology revolution. Thank you for being here and for looking forward to the exchange. If we look back at past technological revolutions, we rarely talk about the technologies themselves.

Instead, we talk about what they enable. Electricity is not celebrated because we built power plants. It is celebrated because we brought a revolutionary transformation into the world. It is a transformation to the quality of life. AI presences in the Sumedha moment. It gives us unprecedented power not to do things better but to do better things. We think how we explore, educate, govern, create, collaborate, heal and protect the people and the planet most importantly. But the most important question for this summit is not how far we can scale AI but how we can recognize it’s a movement in a direction that elevates humanity. Sometime ago I read an article titled Bridges as Humanity’s Greatest Legacy. It has spoken about the universalization instinct and how it has long leaned towards coexistence, cooperation and balance including.

shared progress. From Rupesh Mahatma Gandhi, India has consistently attempted to build robust and promote peace and harmony across the world. We are entering the era at a time when capitalism is increasingly intertwined with geopolitics and, of course, conflict. That reality demands deep reflection because the choices we make today will determine whether AI becomes the nuclear race of the 21st century or the space race that will take humanity to the moon. Globally, AI is framed as a binary race, market-led experimentation in the United States versus state-led techno-nationalism in China. However, India offers a third way, in partnership, of course. For over five decades, India’s IT industry has built trust, reliability, and delivery capabilities across the world.

We know we have 5.8 million professionals. This legacy gives India the strength to deploy technology safely and, of course, responsibly at every stage of the technology industry. For example, the Tata AI Saki Immersion Programme is empowering rural women artists to use AI as a tool for livelihood opportunity.

India’s AI journey is also interested in the digital public infrastructure, which includes Aadhaar, identity payments, UPI, documents, digital locker and consent framework, the data empowerment and protection architecture, or DEPA as it is called, at a scale. These are not really platforms, they are trust architectures. The next billion AI users may not interact with 3D and parameter models. They may interact with tiny embedded AI in phones, tractors, classrooms, clinics and local government systems. India’s rural opportunity lies in small language models that are absolutely domain specific and can run on edge devices, operate in rural cities, overseas environments, solve real problems. But I would not squadron of these children going through. of the exhibitions, exhibits. As artificial intelligence becomes embedded across public administration, the challenge for governance is not limited to technology or adoption.

It extends to ensuring that public officials understand how the systems function, where their limitations lie and how human judgment, responsibility and ethical consideration must guide this use. Mission Karmayogi has established a model demonstrating that systemic technology-enabled civil services reform is achievable across diverse administrative contexts. Building on this institutional foundation, the next horizon is to embed AI within workforce transformation frameworks while contributing to the shaping of global norms on responsible AI in governance. In this context, the Summit Master launched a blueprint for digital capacity building and labs that sets out the share of fair work for developing AI and everything public could for public officials’ capacity. Let us take this convention forward to see how the proposed alliance can be operationalized and diverse partners can work together to translate the blueprint into sustainable actions.

I’ll turn over to the questions. For each of the panelists, I’ve got a couple of questions, but I’ll just start with one or two to each of you. And they’re slightly different for each, and none of them have been generated by AI, again, a disclosure. So let me start with Anil Shivastava, Google Cloud. Google is at the forefront of building global digital infrastructure and in many ways adoption across enterprises and public services. The question is, governments often adopt new technologies in fragmented ways. What technical and operational risks arise when AI systems are layered onto legacy infrastructure without structural reform? And how can initiatives like Mission Karmayogi help align infrastructure modernization with workforce capability development? Before I ask you the next question, please.

Anil Shivastava

So, first of all, thank you. Thank you so much. I’m honored to be here at Bharat Mandapam as part of the panel of esteemed experts. And we are talking about capacity building imperative for Viksit Bharat 2047. You know, I think the question that, sir, as we said, is a very important question when we are in the journey of using AI in our day-to-day lives. And specifically, you know, governments, and especially Government of India, has a huge focus of using AI to improve the quality of life for the people of the world. And I think that’s a very important question. for making the lives of our citizens easier? And also the government, civil servants, you know, lives will be much, much easier than the work that they do today.

The kind of efforts that goes into Reni Server delivering citizen services, is there a way that we could actually leverage AI in Reni Server solving for that? Now to sir’s question, I think the key, it is a very important point that AI is not, you know, a layer that you could just put on existing systems. You know, the systems, the IT systems that were developed with the objective of solving specific problems. Please hear me out. Sure. Right. So, you know, the existing IT systems are very centric and they were built with the view to solve a specific problem for the kind of technology that we had at that time. With AI, we are sort of looking to change the way the humans interact with the AI systems.

The existing systems, they actually have silos of data, silos of business logic, whereas AI, as we sort of look at this as more holistically, you need to really have a contextual data for you to train models to make it useful for you. And so we need to really sort of look at reengineering some of our existing IT systems so that it can harness the potential of AI in solutions. And so that’s where we’re solving some of the problems. So that is one point of view. The other is to really sort of prepare data so that we can train models, whether it is the LLMs or the SLMs, whether it is at the edge that we would actually have AI in a small device, a mobile device.

ASHA worker could actually sort of go and can deliver services using AI in their own native language. So, you know, we need to ensure that we support the multilinguality, the Indic languages, to deliver on those services. Now, to build those systems, we really need to not only change the technology, the underlying technology, but also the process that needs to be re-engineered. So that is one aspect that we should think about. Also from a security perspective and data sovereignty perspective, we need to sort of re-look at the… the exposure that AI brings to our existing systems, the kind of vectors that we are, you know, the systems have been built today, we will need to have to re-look at it.

Some of the, you know, vectors or some of the issues that we have today needs to be resolved. So that would be my perspective, sir. Thank you.

Subramanian Ramadorai

Guy, he told me it’s very easy to remember his name because Guy is a new word also.

Guilherme Albusco Almeida

Yes.

Subramanian Ramadorai

How can countries like Brazil and India collaborate more closely in shaping the global conversation around care, trust, and alignment in AI? What do you think are the areas we can collaborate together which will have the greatest global impact as AI becomes more autonomous and more deeply embedded in society?

Guilherme Albusco Almeida

Well, that’s a fantastic question. I will try to bring some aspects of that, but I think we’ll keep answering that until at least 2047. But the point is, first, I think Brazil and India are really close and can collaborate a lot. I can testify that because it’s my fifth trip to India. We’ve been exchanging a lot technology-wise. When it relates to AI, I think we should consider different aspects. We just mentioned here data and data for training models, but I think there’s strong room for collaboration when you talk about R&D, right? Because there’s not only similarities but also complementarities. There are things that are complementary to one another. So I guess that there’s strong room for cooperation and collaboration.

But also in capacity building. I am a great fan of Mission Karmayogi and the Capacity Building Commission. We have similar organizations in Brazil, and we have been training civil servants as well through an online platform. I believe that a digital-infrastructure approach to capacity building is also a way to bring this to more people, to make it scalable so that we can actually build knowledge, build capacities and make change happen within government. Of course, we need to be careful about the risks of AI. In Brazil we have developed a framework for the ethical assessment of AI implementation, and we have also provided guides explaining not only how AI works, but what cautions you should take when using AI within the public service.

Of course, we should consider boundaries and safeguards in AI implementation, but we should not refrain from using it for the betterment of people's lives and to empower our population. And I think that training and capacity building is crucial for that. In Brazil, we have at least four different profiles for capacity building: one for senior leaders, one for IT managers, one for data curators, and one for general civil servants, in which we organize the knowledge you are supposed to develop in order to use AI properly and to build AI solutions. And going back to the Brazil-India connection, I guess both nations are well positioned to, I would say lead, but at least to conduct this conversation from a global perspective. We have great partnerships with Apolitical as well; I have been working a lot with them. A coalition of willing organizations building knowledge for AI in public service is something that could be built, and if we can bring a South-South flavour to that, we are better positioned to provide the transition we want for government and for the world.

Thank you.

Subramanian Ramadorai

Robin, you work with governments around the world. What are the biggest gaps you see in AI readiness within public institutions, and how can we shift the global conversation towards workforce reinvention?

Robin Scott

Thank you so much. That's a big question. It is such an honor to be here, and this seating arrangement is particularly meaningful to me, because we are not only honored to partner with the Capacity Building Commission and Mission Karmayogi, but we are longstanding partners of ENAP, the excellent Brazilian school of government. And Google.org has funded us to provide world-class training on AI, for free, to a million public servants, and we are 400,000 into that goal, including in India. So this particular configuration is very meaningful. I also want to say something about Brazil and India, which I think links the two nations: in our experience working with them, they both understand that capacity building is not something that should be pushed to the side.

It is not an afterthought; it is an engine of innovation. ENAP has an innovation unit within its school, and it is strategic. And especially with AI it is more strategic, because you don't get intelligent technology unless you have people intelligently supported to work alongside and in partnership with that technology. So I really appreciate the ambition and vision that both countries bring to capacity building. I'll point to just a couple of gaps. One is around ethical frameworks. You mentioned Brazil has one; most countries have one. According to our data, an 8,000-person global survey of people implementing AI in their governments, people whose job it is to roll out the technology, only 26% say they understand their own government's ethical frameworks.

So in other words, 74% are freestyling, and that builds a great deal of risk into the system. We also have a gap between talk, ambition and evaluation. When you talk to leaders, 72% say they have a pilot or will have one this year, but only 45% of them say they have a plan to evaluate the performance of that pilot. And there is no point piloting something without evaluation. There is a lot more to say, but I just want to end on a note of optimism. Well over 90% of public servants are very optimistic about the role that AI can play, and there is a $1.75 trillion productivity prize for getting this right, according to BCG. So we have the optimism, we have the energy, and these gaps are big, but they are not impossible to close.

Thank you.

Subramanian Ramadorai

Just one final question, if you can answer. As we expand AI-centric capacity building, scaling digital platforms, increasing compute and embedding AI into public systems, all of us agree we must also confront the environmental footprint of these technologies. How can governments and AI companies work together to ensure that AI-driven public infrastructure is also aligned with climate responsibility, energy efficiency and sustainable growth? Anyone? You may all agree with each other, of course.

Robin Scott

Well, I can just offer: we have developed a course on AI and climate, and on understanding the links, with the Stanford Doerr School of Sustainability. So we literally have a program to answer your question. I'll leave it at that.

Guilherme Albusco Almeida

Well, I think there are two separate ways in which it could be framed: AI for green, and green AI. Aiming for sustainability in the power you provide for the GPUs and so on is crucial. But we can also use AI to leverage the potential of climate policies. If you put together meteorological data, data from forests and so on, you can bring intelligence and knowledge, which can lead to better-informed decisions. In Brazil, we have a system in which we mapped all the forests on private lands, the Rural Environmental Registry, and we are using AI to detect whether forests have been cut down or not. So we can use different AI tools not only to fight deforestation but also to promote reforestation.

So there are several different potential tools, and they can be used; and if we can blend them together, we can achieve even more.
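The forest-monitoring idea described above, classifying whether mapped forest has been cut down between two satellite passes, can be illustrated with a minimal NDVI change-detection rule. This is an illustrative sketch only, not the Rural Environmental Registry's actual pipeline; the band values, thresholds and function names are assumptions.

```python
import numpy as np

def ndvi(red, nir):
    """Normalized Difference Vegetation Index: high over healthy vegetation."""
    return (nir - red) / (nir + red + 1e-9)  # epsilon avoids division by zero

def flag_deforestation(red_t0, nir_t0, red_t1, nir_t1, drop=0.3, forest=0.6):
    """Flag pixels that were forest at time t0 and lost significant vegetation by t1.

    drop:   minimum NDVI decrease to count as clearing (hypothetical threshold)
    forest: minimum NDVI at t0 to count the pixel as forest (hypothetical threshold)
    """
    v0 = ndvi(red_t0, nir_t0)
    v1 = ndvi(red_t1, nir_t1)
    return (v0 > forest) & ((v0 - v1) > drop)

# Two pixels: the first is cleared between dates, the second stays intact.
red_t0 = np.array([0.10, 0.10]); nir_t0 = np.array([0.80, 0.80])
red_t1 = np.array([0.40, 0.10]); nir_t1 = np.array([0.45, 0.80])
flags = flag_deforestation(red_t0, nir_t0, red_t1, nir_t1)
```

A real system would add cloud masking, seasonal baselines and a trained classifier, but the core signal, vegetation index loss over registered forest parcels, is what this rule captures.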

Anil Shivastava

I would say that Google has committed that by 2030 all our data centers will be carbon neutral, and we want to partner with the Government of India to ensure that all the data centers being built in the country have targets of this sort, so that they are carbon neutral. That is from an infrastructure perspective; from a technology perspective, we can always build models which are energy efficient. I would leave it at that, but I think this is a very important point for India as a country. Thank you.

Subramanian Ramadorai

Any one question from the audience? Go ahead. Oh, it doesn't work? Somebody give her the mic. But we should allow ladies first. I don't know why you… Go ahead, go ahead. Sorry.

Audience

I'll just take the metric of the Doomsday Clock, which was just reset on 27th January to 85 seconds to midnight. Now translate that into a thousand-mile downhill journey of an overloaded truck carrying weapons of mass destruction, apt to destroy our civilization. Today we are at 80 metres from the edge of the precipice. That is how dangerous it is. We are being so cannibalistic of the future of our children. My question is: is not Viksit Bharat by 2047 actually a procrastination of our responsibilities? We just don't want to own up to what is happening today.

Subramanian Ramadorai

I agree, yeah.

Audience

So it has to come as a concert of civilizations by 2026. My question is: is not Viksit Bharat by 2047 actually a procrastination of our responsibilities, which we do not want to own up to? Given what is happening today and what has happened to India, should it not come as a concert of civilizations by 2026 itself, right here, right now?

Subramanian Ramadorai

I think the preparation starts very early, and a lot of activities are already happening in the country. 2047 is a dream, the centenary year of our independence. What we need is a roadmap for what we want to achieve; that is what is important, rather than saying nothing is happening and suddenly something will happen in 2047.

Speaker 3

One more question here. Last question, please. Thank you.

Audience

I'm Professor Charu from the Indian Institute of Public Administration; we have met some of you. My question concerns the lack of consolidated international frameworks: for AI procurement guidelines, for AI ethical frameworks, and for competence frameworks. We do have the UNESCO competence framework for AI, but we need to hyper-localize it to context, which Robin and I were discussing, in terms of various countries, at another event. So maybe we need a more generic, collaborative international AI impact assessment framework, a tool which could look into capabilities not just at the digital-leadership level but across the whole organization or country. Thanks a lot.

Moderator

Thank you. Any comments? All right, thank you so much, Honourable Minister, and thank you to all the panelists and participants. We are now joined by Honourable Minister Dr. Jitendra Singh, Minister of State for Personnel, Public Grievances and Pensions, a visionary leader who has been at the forefront of administrative reforms and India's science and innovation agenda. Sir, it is a pleasure to have you with us. Thank you so much, Mr. Minister, for joining us and taking time out of your busy schedule. What was just witnessed in the form of a panel discussion is a small reflection of what the Capacity Building Commission and Karmayogi Bharat are hoping to achieve through what we seek to announce today.

May I now request you, Mr. Minister, to kindly launch the blueprint for the Digital Capacity Building Alliance by pressing the button.

Speaker 1

And to govern at this scale, at this pace, we need an ever-evolving system. So India built one. In 2020, our leader, Prime Minister Narendra Modi, launched Mission Karmayogi to build a future-ready, citizen-centred civil service. At the heart of it sits the Capacity Building Commission, a living capability with foundations of trust, empathy and inclusion. Part of this is the iGOT Karmayogi platform, India's national digital learning platform, accessible anytime, anywhere. And now, the next leap: AI-enabled governance, personalized learning paths, smart decision support, moving from reactive systems to an adaptive capacity model. All governments, especially in the Global South, face the same challenges: demand-driven needs, technological disruption, complex workflows, the move towards agile institutional capacity. India has a working, scalable model, a global public good.

Today, the Capacity Building Commission unveils a proposal to forge a Digital Capacity Building Alliance, an alliance that fuses global AI principles, digital public good standards, and the Mission Karmayogi model: a unique model for demand, design, delivery, and continued evolution. Its mission is to build a shared, non-proprietary foundation for capacity building across nations. The alliance aims to bring together governments, to drive policy design and standards; industry, to build digital commons and specialized solutions; academia, for networks of knowledge, research and innovation; civil society, to champion systemic equity and ethical accountability; DPG partners, to orchestrate ecosystems for scaled impact; and startups, to catalyze solutions and co-creation. Steered by the Capacity Building Commission and Karmayogi Bharat of the Government of India: a global public good for inclusive, ethical capacity building.

In the spirit of Sarvajan Hitaya, Sarvajan Sukhaya: welfare for all, happiness for all.

Moderator

Thank you so much, sir, for launching the blueprint. May I kindly request the Honourable Minister and all the dignitaries on the dais to stand for photographs. Thank you. May I now invite the Honourable Minister to the podium to deliver the keynote address for the afternoon. Thank you.

Shubhavi S. Radha Chauhan

Of course, you saw how it concluded, with Dr. Ramadorai giving the perspective of how AI is deployed for public services, what the challenges are by way of capacity building, and how valuable the learnings we have had under Mission Karmayogi will be in informing the journey that global partners, governments and stakeholders are going to take their next steps towards. And the summit has seen the Honourable Prime Minister and the MANAV vision that he explained yesterday, which brings in the need and importance of a human-centric approach: the hands of the human in every decision that is being made that will impact citizens in every which way.

So I think the world of capacity building is looking at the capacity and capability we need to put in the hands of every public servant, from the secretary at the policy level to the community-level frontline worker: how do they use and navigate it, and… building the intellect that is needed to make correct, ethical, value-based decisions when it is going to impact humankind in general. I think we have had a lot of fruitful insights from our partners, the panelists. And this is almost the culmination of the long months we have spent discussing these issues with our stakeholders, whether from industry, the partners, our service providers, academia, startups. We were able to bring all of them together today.

And the document that has been unveiled by the minister just now brings all of those learnings and insights together into a call to action for all of us working as experts in our fields: to come together, collaborate, strengthen each other's hands and responsibilities, and forge that pathway with the human centricity that is required in handling the capacities we have given to our baby, which is artificial and augmented intelligence. I think, sir, that is where your keynote would now give us the further guidance needed: at the political leadership level, how do they view the whole capacity building space? And like Robin said, this is an oft-sidelined sector of public governance, but now we have the opportunity to bring it centre stage.

I don’t think we should lose this opportunity to take the leap with utter faith. Thank you, sir. Thank you.

Dr. Jitendra Singh

Thank you for reminding me that you needed my guidance, and, more than me, for reminding all the women in the room that they have to listen to my guidance. But I know you have already done enough of that task. Thanks for your kind words and for encouraging me to stand before this gathering. Now, we are simultaneously talking about two or three things this afternoon. We are talking about governance, we are talking about capacity building, and we are also trying to bring in the artificial intelligence interface, which of course, whether we like it or not, is already a fact of life in every domain that we work in.

Now, governance is a dynamic process, like many other processes in every domain we are engaged in. So also is capacity building: it is dynamic, it is continuous, isn't it? And the times move on such a fast track that by the time you prepare yourself for tomorrow, when tomorrow happens you realize you had prepared only for yesterday. And then, on top of that, artificial intelligence, and its role in capacity building as well as in governance. When you talk in the context of India today, of a framework which invokes the best of all three, I think the most encouraging feature is that we have a government in place, a political dispensation in place, which is supportive of all these ideas and all these initiatives.

Till about maybe 15 years back, we would never have thought of a theme like this gaining priority in our country. We would never have thought of a gathering of this nature. And therefore, this is a dispensation under Prime Minister Modi which is ready not only for futuristic ideas and initiatives, but also for future-ready ideas. And while it sounds very fanciful and romantic to talk about artificial intelligence, even a grocer's shop that puts up a banner outside saying "my items are sold through AI" will, I am sure, at least attract some number of customers. That is the nature of the times. But at the same time, being realistic and pragmatic, and Mrs. Radha had also been Secretary, DoPT, before she took over her present assignment, we should also be free to shed some of the old baggage.

So while we are running towards what is new, we should also have the capacity, and be non-possessive enough, to let go of something we have held close to our chest for so many years. And I am proud to say that in the last one decade this government has done away with almost 2,000 rules. That sin has not been committed by me alone; Madam Radha is equally a party to it. And let us recall: many of those rules were designed for the times in which they were framed. Nobody had thought of artificial intelligence about 100 years back, and yet we had rules which had been haunting us for more than a century, such as getting our certificates and documents attested by a gazetted officer.

We have done away with that; we now have other means, and we also now have the means to trust each other. At the same time, we have opened up to learning new practices, and, while learning new practices, to building the capacity to learn. That is where the Capacity Building Commission comes in: when you learn new practices, you must also build the capacity to learn. With artificial intelligence, the basic mantra is to be a good learner, and if you have not yet learned, then to learn to be a good learner. The Capacity Building Commission is one instrument which helps you do that. And this idea, too, first came from Prime Minister Modi.

I think it was quite a novel idea for the government sector, because governments are usually used to working in status-quo mode. So here we have the flavour of the private sector as well, because we have now got over the barriers between the private and public sectors. Unless we learn to learn from each other and give up our scepticism, we will not actually be building capacity to the optimum; we will be building capacities limited by certain barriers, and that will not be unbridled learning. So the Capacity Building Commission was there, Mission Karmayogi was being talked about, the creation of iGOT: all these are testament to governance reforms being accomplished through the optimum use of technology.

Now, this alliance which is being launched: when we talk of digital public goods, fortunately or unfortunately, I am one of the few here from the school of science. So if you take out the first word, digital, and just concentrate on public good, you will realize that good governance is synonymous with public good. Public good, essentially, is at the core of good governance. It is just that we have now started using digital means, so it sounds fanciful; we add the digital epithet to it, so we say DPG. But if you ask a hardcore, old-fashioned student of science like me, I would say it is nothing new.

It is just the same: good governance, public good. And this was happening from day one, because one of the Prime Minister's earliest declarations was maximum governance, minimum government, which in other words means a government which is citizen-centric, which is accountable as far as possible, which is transparent, and whose ultimate thrust is to bring in ease of living. So all of this, DPG, was happening earlier; only the announcement is being made today. And I think the Capacity Building Commission deserves appreciation, because they were very independent and were very much instrumental in taking these frameworks forward.

And I must also congratulate Madam Radha and her team for this launch of the Digital Capacity Building Alliance. The guiding principle, ultimately, all of us would agree, must be to build at a scale which is optimally inclusive, to give as much inclusion as possible, and for others to learn and also to adopt; otherwise the very purpose of the Alliance would not be achieved. For that purpose artificial intelligence can be a powerful tool, but certainly not an end in itself. From our own experience, the optimum mix, something I would put before this audience, is AI plus HI: artificial intelligence plus human intelligence. And we learned that the hard way. Under CPGRAMS, we were feeling very proud that our disposal rate of grievances had risen to more than 95 per cent, sometimes almost 100 per cent. But then we looked back: grievance disposal 100 per cent, happiness disposal 0 per cent. That is the happiness index being discussed in the West. So we actually had to introduce a human desk there, because everything was happening through AI, and the reply would come before you expected it, so the person would end up feeling there was nobody there to connect with. So I think that hybrid model is what is ultimately going to be effective, in my own view.
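The AI-plus-HI hybrid the minister describes, where AI disposes of routine grievances but a human desk handles low-confidence cases and unhappy citizens, can be sketched as a simple triage rule. The class, field names and thresholds below are hypothetical illustrations, not CPGRAMS internals.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Grievance:
    text: str
    ai_confidence: float                      # model confidence in its drafted reply, 0..1
    citizen_satisfied: Optional[bool] = None  # post-disposal feedback, if any yet

def route(g: Grievance, threshold: float = 0.8) -> str:
    """Let AI dispose of routine cases; escalate uncertainty or unhappy feedback."""
    if g.ai_confidence < threshold:
        return "human_desk"   # model unsure: a person reviews before disposal
    if g.citizen_satisfied is False:
        return "human_desk"   # disposal is not happiness: follow up in person
    return "ai_reply"
```

The design point is that the escalation path exists on both ends of the loop: before disposal (low model confidence) and after it (negative citizen feedback), which is exactly the gap the "happiness index" anecdote exposes.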

As far as I am concerned, this morning I was listening to the IMF chief; she said India's progress in the AI initiative is phenomenal. So I think we have already received recognition from the global judges of global benchmarks. But the path has to be one suited to our conditions. I will give another example of the hybrid model. We have a voluntary clinic in my constituency with two doctors: one physically sitting there, the other an AI doctor, a lady. They take the history, do all the assessment, connect to one of the leading hospitals, three or four of them, connect to the super-specialist there, and the prescription is provided in just about 40 to 50 minutes. It is a very creative new health startup. Now, when the patient comes, the physical doctor talks to him and he feels more gratified than when the AI doctor talks to him, even though she speaks exactly the same dialect, or better: if he speaks Bhojpuri, she speaks better Bhojpuri. So that debate is over. And I am not talking without evidence, because in medical parlance we have something called the placebo effect, and Indians are very used to the placebo effect; if you don't get the placebo effect, you don't get well. And if you don't get well, what should I do?

So we have a whole new world to explore, and it may be the same in the government context. As they say, Indian data, Indian solutions, Indian systems are important; we need a vehicle indigenized to our needs. Maybe other nations may not require that, but we will always be driving it. And I think with this Alliance, the model that Radha has suggested, we will be able to engage more in evaluation, with the content standards and the evaluation framework that is required, than merely in preparing a draft chapter. But in the end, I will just pick up on what she said: the MANAV part of it, which the Prime Minister spoke of yesterday. Now, how many in this room have actually been able to decipher what was meant by that MANAV?

Not a single hand has gone up. So before you leave this room, if you have an iPad or a notebook, note it down: M stands for moral and ethical systems, A stands for accountable governance, N stands for national sovereignty, the second A stands for accessibility and inclusivity, and finally V stands for validity and legitimacy. Now, when we don't have the capacity to learn the terms, we at least have the capacity to learn the acronyms; and very soon artificial intelligence will take over even that capacity, so we need to stay present. The other day I was watching a large language model, and we were about to send for a translator for the language. I said, no need. But along with that, we have also buried that beautiful breed of English that we had in an earlier age. We used to see people who knew more than five or six languages, so many of them; now we are at risk of forgetting even our own language, because we have someone else to do it for us. And that is why, in the end, I say this: I have been into this AI business quite deeply for the last half a decade, trying it in different fields. The moral I draw, for myself, because each of us has to learn for ourselves, is that one has to be intelligent enough to use artificial intelligence.

Otherwise, don't get into this business. And, which I think has also become a tagline for some of the media persons present: artificial intelligence can substitute everything on this planet, but it cannot substitute integrity. Whatever you do: somebody was saying two days back that now you will be able to get rid of such malpractices. It is like this: a doctor sitting thousands of kilometres away can now do an ultrasound on a lady who is thousands of kilometres away. But I said no; if I am a doctor and not honest enough, I could still connive with those who want sex determination done around me.

So that is integrity. And I think that is the most important thing: if we are not able to use this with integrity, we run the risk of not putting to active use so many models that came to us but got frittered away, not for reasons attributable to them, but for reasons attributable to those of us who were assigned to handle them. Thank you very much.

Moderator

Thank you so much, sir, for that insightful address and for laying emphasis on the need for integrity, which is only possible through the idea of having a human in the loop, something which also finds mention in the blueprint we have just launched. With this we come to the conclusion of the event. I extend my heartfelt gratitude on behalf of the Capacity Building Commission and Karmayogi Bharat. Thank you so much, Honourable Minister, dignitaries on the dais, and all those present here. I now request the panelists to stay back for a quick minute for a photograph with the Honourable Minister and the dignitaries on the dais. I would also like to take this opportunity to invite you all to Mission Karmayogi. Thank you again, sir. Thank you so much. Thank you.

Related Resources: Knowledge base sources related to the discussion topics (44)
Factual Notes: Claims verified against the Diplo knowledge base (7)
Confirmed (high)

“AI must augment, not replace, human judgment”

The knowledge base stresses that AI should enhance rather than replace humanity and that human oversight remains essential, confirming the report’s statement [S71] and [S127] and [S128].

Correction (high)

“Prime Minister’s “Mani Vision” – a human‑centric framework for ethical, accountable and inclusive AI governance”

The official name of the framework is “MANAV Vision”, presented by Prime Minister Narendra Modi, not “Mani Vision” [S129].

Confirmed (medium)

“Chairperson Shubhavi S. Radha Chauhan of the Capacity Building Commission highlighted the Vision”

The existence of the Capacity Building Commission and its role in AI workforce development is documented in the knowledge base [S2].

Confirmed (medium)

“Prof Guilherme Albusco Almeida (Brazil) was a panelist”

Guilherme Albusco Almeida’s participation in the summit panel is recorded in the knowledge base [S16].

Additional Context (medium)

“India’s approach is a “third way” – a partnership model between the US‑led market race and China’s state‑led techno‑nationalism”

The knowledge base provides a definition of techno-nationalism, clarifying the contrast with the US market-driven model and supporting the report’s framing of India’s “third way” [S140].

Additional Context (low)

“India’s extensive digital public infrastructure (Aadhaar, UPI, DigiLocker, etc.) serves as a “trust architecture” for AI”

India’s digital public infrastructure initiatives, such as Mission Karmayogi and related capacity-building efforts, are described in the knowledge base, giving background to the report’s claim [S2].

Additional Context (low)

“Future AI will rely on “small language models, context‑specific, sectoral, and decentralised” rather than massive monolithic systems”

The knowledge base mentions a push for frugal, low-carbon-footprint AI and the need for more lightweight models, which adds nuance to the report’s statement [S108].

External Sources (140)
S1
WS #280 the DNS Trust Horizon Safeguarding Digital Identity — – **Audience** – Individual from Senegal named Yuv (role/title not specified)
S2
Building the Workforce_ AI for Viksit Bharat 2047 — -Audience- Role/Title: Professor Charu from Indian Institute of Public Administration (one identified audience member), …
S3
Nri Collaborative Session Navigating Global Cyber Threats Via Local Practices — – **Audience** – Dr. Nazar (specific role/title not clearly mentioned)
S4
Building the Workforce_ AI for Viksit Bharat 2047 — -Speaker 1- Role/Title: Not specified, Area of expertise: Not specified -Speaker 3- Role/Title: Not specified, Area of …
S5
S6
Advancing Scientific AI with Safety Ethics and Responsibility — – Speaker 1- Speaker 2- Speaker 3 – Speaker 1- Speaker 3- Moderator
S7
Building the Workforce_ AI for Viksit Bharat 2047 — -Shubhavi S. Radha Chauhan- Role/Title: Chairperson of the Capacity Building Commission, Area of expertise: Public admin…
S8
Building the Workforce_ AI for Viksit Bharat 2047 — Guilherme Albusco Almeida from Brazil, noting his fifth trip to India, identified strong collaboration opportunities in …
S9
Building the Workforce_ AI for Viksit Bharat 2047 — -Dr. Jitendra Singh- Role/Title: Honorable Minister, Minister of State for Personnel, Minister of State for Personal Gri…
S10
Keynote-Martin Schroeter — -Speaker 1: Role/Title: Not specified, Area of expertise: Not specified (appears to be an event moderator or host introd…
S11
Responsible AI for Children Safe Playful and Empowering Learning — -Speaker 1: Role/title not specified – appears to be a student or child participant in educational videos/demonstrations…
S12
Building Trusted AI at Scale Cities Startups &amp; Digital Sovereignty – Keynote Vijay Shekar Sharma Paytm — -Speaker 1: Role/Title: Not mentioned, Area of expertise: Not mentioned (appears to be an event host or moderator introd…
S13
Keynote-Olivier Blum — -Moderator: Role/Title: Conference Moderator; Area of Expertise: Not mentioned -Mr. Schneider: Role/Title: Not mentione…
S14
Keynote-Vinod Khosla — -Moderator: Role/Title: Moderator of the event; Area of Expertise: Not mentioned -Mr. Jeet Adani: Role/Title: Not menti…
S15
Day 0 Event #250 Building Trust and Combatting Fraud in the Internet Ecosystem — – **Frode Sørensen** – Role/Title: Online moderator, colleague of Johannes Vallesverd, Area of Expertise: Online session…
S16
https://dig.watch/event/india-ai-impact-summit-2026/building-the-workforce_-ai-for-viksit-bharat-2047 — Minister in the National Council on Scale Development We welcome you sir On the panel, we are joined by Guilherme Albusc…
S17
Building the Workforce_ AI for Viksit Bharat 2047 — – Anil Shivastava- Dr. Jitendra Singh – Anil Shivastava- Subramanian Ramadorai
S18
Building the Workforce_ AI for Viksit Bharat 2047 — -Subramanian Ramadorai- Role/Title: Chairperson of Karni Nagi Bharat and former M.D. and CEO of Tata Consultancy Service…
S19
https://dig.watch/event/india-ai-impact-summit-2026/how-ai-is-transforming-diplomacy-and-conflict-management — And the MOVE 37 initiative that we’re here to talk to you about today. is a part of that program. As you can imagine, in…
S20
How AI Is Transforming Diplomacy and Conflict Management — And the MOVE 37 initiative that we’re here to talk to you about today. is a part of that program. As you can imagine, in…
S21
Building the Workforce_ AI for Viksit Bharat 2047 — -Dr. Washima- Role/Title: Not specified, Area of expertise: Not specified
S22
PLAN NATIONAL DU NUMÉRIQUE HORIZON 2025 — | N° | NOMS | INSTITUTION | | 67 …
S23
Authors of this report — Trust builds on shared assumptions about material and immaterial values, about what is important and what is expendable….
S24
High-Level Session 3: Exploring Transparency and Explainability in AI: An Ethical Imperative — 1. Trust, safety, and accountability: His Excellency Dr. Abdullah bin Sharaf Alghamdi emphasised the need to focus on th…
S25
AI Impact Summit 2026: Global Ministerial Discussions on Inclusive AI Development — Excellency, thank you very much first and foremost I would like to thank India for hosting this excellent event Malaysia…
S26
How can Artificial Intelligence (AI) improve digital accessibility for persons with disabilities? — Furthermore, the synthesis highlights the positive role of multi-sectoral collaboration in driving disability inclusion….
S27
Closing Ceremony — This argument positions artificial intelligence as a transformative force rather than merely a technological tool. It su…
S28
https://dig.watch/event/india-ai-impact-summit-2026/the-foundation-of-ai-democratizing-compute-data-infrastructure — Given the volume of funds available, I would focus a lot more on capability development of people to be able, their abil…
S29
AI Meets Agriculture Building Food Security and Climate Resilien — The World Bank’s Johannes Zutt stressed the importance of collaborative ecosystems where government provides foundationa…
S30
Building Scalable AI Through Global South Partnerships — This comment elevated the discussion by providing a philosophical foundation for South-South cooperation based on shared…
S31
Climate change and Technology implementation | IGF 2023 WS #570 — João Vitor Andrade:Hi, everyone. I’d like to thank you all to be present here today. My name is João Vitor, I’m from Bra…
S32
Judiciary engagement — – Adel Maged- Maureen Fondo- Slyvia Chirawu- Audience Legal and Regulatory Framework Needs Legal and regulatory | Huma…
S33
WS #438 Digital Dilemmaai Ethical Foresight Vs Regulatory Roulette — Deloitte consultant: Good morning everyone. My name is Yasmin Alduri. I’m an AI governance consultant at Deloitte and I’…
S34
Main Topic 2 – Empowering communities: partnerships for access to services  — Arturas Piliponis:Thank you, nice examples. Ieva, anything or others to add? If not, I can share just building on what y…
S35
Veronica Cretu — Experiences from around the world have demonstrated that using information on the performance of service providers by bo…
S36
Green AI and the battle between progress and sustainability — AI is increasingly recognised for its transformative potential and growing environmental footprint across industries. Th…
S37
Smaller Footprint Bigger Impact Building Sustainable AI for the Future — Thank you so much. And I’ll be very quick because I can see the ticker. There are a couple of things. One is that we’re …
S38
Survival Tech Harnessing AI to Manage Global Climate Extremes — “We are introducing, you know, IP and other innovations to drive translation”[99]. “In some of our programs, we have put…
S39
Day 0 Event #173 Building Ethical AI: Policy Tool for Human Centric and Responsible AI Governance — Alaa Abdulaal concluded the session by emphasizing DCO’s commitment to a multi-stakeholder approach in addressing ethica…
S40
MahaAI Building Safe Secure &amp; Smart Governance — His solution advocated for “intelligent governance” built upon five core principles: human-centred design, transparency …
S41
AI &amp; Child Rights: Implementing UNICEF Policy Guidance | IGF 2023 WS #469 — Incidents such as the arrest of a young man near Windsor Castle, who was influenced by his AI assistant to harm the Quee…
S42
WS #205 Contextualising Fairness: AI Governance in Asia — 4. Community-based models: Chin mentioned the potential of community-based small models to serve specific needs. Milton…
S43
Open Forum #64 Local AI Policy Pathways for Sustainable Digital Economies — Abhishek Singh: One part is that, of course, the way the technology is evolving, there is IP-driven solutions and there …
S44
Building Population-Scale Digital Public Infrastructure for AI — The Brazilian experience illustrates the systemic nature of the challenge, requiring coordinated changes in legal framew…
S45
AI as critical infrastructure for continuity in public services — Data silos emerged as a primary barrier, with organizations struggling to integrate data across different systems and de…
S46
WS #123 Responsible AI in Security Governance Risks and Innovation — This comment elevated the technical discussion to a more sophisticated understanding of systemic governance challenges. …
S47
Pre 10: Regulation of Autonomous Weapon Systems: Navigating the Legal and Ethical Imperative — Issues particularly evident in joint or cross-force environments where systems must function across organizational, nati…
S48
Open Forum #58 Collaborating for Trustworthy AI an Oecd Toolkit and Spotlight on AI in Government — ### Scaling from Pilots to Systems Government AI carries higher risks than private sector applications, including ethic…
S49
Navigating the Double-Edged Sword: ICT’s and AI’s Impact on Energy Consumption, GHG Emissions, and Environmental Sustainability — An expert panel convened to examine the complex relationship between Information and Communication Technologies (ICTs) a…
S50
Networking Session #50 AI and Environment: Sustainable Development | IGF 2023 — Artificial Intelligence (AI) technologies have the potential to significantly contribute to creating greener cities and …
S51
Building the Workforce_ AI for Viksit Bharat 2047 — Capacity building should be treated as an engine of innovation rather than an afterthought, with personalized learning p…
S52
Agenda item 6 — Ghana:Mr. Chair, thank you for giving me the floor. I would like to join others before me in wishing us all a happy Wome…
S53
Opening of the session — Kazakhstan: Thank you, Chair, for giving the floor. Mr. Chair, distinguished delegates, as it’s our first time taking th…
S54
Networking Session #50 AI and Environment: Sustainable Development | IGF 2023 — In addition to supporting climate action, AI is expected to play a significant role in digitally managed energy systems….
S55
High-Level Session 3: Exploring Transparency and Explainability in AI: An Ethical Imperative — Li discusses the potential of AI-driven models in climate prediction and resource mobilization. He highlights the import…
S56
Open Forum #27 Make Your AI Greener a Workshop on Sustainable AI Solutions — Development | Human rights | Sustainable development Funding and Policy Mechanisms Mark Gachara emphasized that climat…
S57
Artificial Intelligence Strategy of the German Federal Government — The Federal Government will continue and expand the successfully launched funding initiative AI Flagship Projects for th…
S58
Closing remarks – Charting the path forward — Bouverot emphasizes that AI governance must address environmental concerns by incorporating sustainability measures. Thi…
S59
WS #466 AI at a Crossroads Between Sovereignty and Sustainability — Environmental Impact and Climate Justice Moltzau argues that given the current climate crisis and multiple global chall…
S60
UNESCO links AI development with climate responsibility — UNESCO hasrenewed calls for stronger international cooperationto ensure AI supports rather than undermines climate goals…
S61
Green AI and the battle between progress and sustainability — AI is increasingly recognised for its transformative potential and growing environmental footprint across industries. Th…
S62
DC-DNSI: Beyond Borders – NIS2’s Impact on Global South — Guangyu Qiao-Franco: So my contribution is co-hosted with Mr. Mahmoud Javadi of Free University Brussels, who is also pr…
S63
AI Impact Summit 2026: Global Ministerial Discussions on Inclusive AI Development — Ante este panorama, los países del sur global debemos priorizar estrategias y normativas para un uso ético y responsable…
S64
Can (generative) AI be compatible with Data Protection? | IGF 2023 #24 — Armando José Manzueta-Peña:Well, thank you, Luca, for the presentation. I’m more than thrilled to be present here and to…
S65
Engineering Accountable AI Agents in a Global Arms Race: A Panel Discussion Report — Economic and Labor Market Impact Examples of relieving employees from 4-hour internet searches and policy drafting, add…
S66
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — Development | Legal and regulatory Evidence-Based Policymaking and Research Integration Part of the roadmap emphasizes…
S67
Reinventing Digital Inclusion / DAVOS 2025 — Importance of local leadership and tailored solutions Paula Ingabire discusses Rwanda’s focus on identifying AI use cas…
S68
Strengthening Corporate Accountability on Inclusive, Trustworthy, and Rights-based Approach to Ethical Digital Transformation — Moderate disagreement with significant implications. While speakers generally agreed on the importance of digital inclus…
S69
Open Forum #66 the Ecosystem for Digital Cooperation in Development — This comment cuts to the heart of development challenges by highlighting the implementation gap between policy and pract…
S70
Dynamic Coalition Collaborative Session — The discussion frequently referenced the Global Digital Compact as an example of well-intentioned policy that lacks clea…
S71
Enhancing rather than replacing humanity with AI — AI development is not some unstoppable force beyond our control. It’s shaped by developers, institutions, policymakers, …
S72
How AI Drives Innovation and Economic Growth — The speakers show broad agreement on AI’s transformative potential for development but significant disagreements on impl…
S73
Ethics in the Age of AI — In conclusion, the conversation with Michael B. Jordan shed light on various aspects of contemporary issues surrounding …
S74
From Innovation to Impact_ Bringing AI to the Public — Whilst maintaining an optimistic outlook, the discussion acknowledges important limitations and risks. Sharma emphasises…
S75
Inclusive AI For A Better World, Through Cross-Cultural And Multi-Generational Dialogue — Larissa Zutter stands out as a senior AI policy advisor, closely studying the socio-economic implications of artificial …
S76
The Role of Government and Innovators in Citizen-Centric AI — “we are developing an ecosystem which is really brilliant, self‑reliant, sufficient in terms of good company producing o…
S77
Open Forum #64 Local AI Policy Pathways for Sustainable Digital Economies — Achieving inclusive AI requires addressing inequalities across three fundamental areas: access to computing infrastructu…
S78
Is AI a catalyst for development? — The Economist argues that AI has the potential to revolutionise developing countries by transforming their economies and…
S79
Day 0 Event #173 Building Ethical AI: Policy Tool for Human Centric and Responsible AI Governance — Chris Martin: Hiya, how are you doing? Check, check. Is that better? Cool. Again, hello. Welcome. My name is Chri…
S80
How AI Is Transforming Diplomacy and Conflict Management — So that’s a pretty big gap to close and we see gaps like this all the time. One of the biggest gaps is leaders not using…
S81
WS #110 AI Innovation Responsible Development Ethical Imperatives — Ethical Concerns and Risk Mitigation Human rights principles | Development Zhang emphasizes that human-centric princip…
S82
MahaAI Building Safe Secure &amp; Smart Governance — His solution advocated for “intelligent governance” built upon five core principles: human-centred design, transparency …
S83
S84
Building the Workforce_ AI for Viksit Bharat 2047 — From the community health worker delivering nutrition to an expecting mother to the balancing worker strategizing access…
S85
https://dig.watch/event/india-ai-impact-summit-2026/building-the-workforce_-ai-for-viksit-bharat-2047 — From the community health worker delivering nutrition to an expecting mother to the balancing worker strategizing access…
S86
Open Forum #64 Local AI Policy Pathways for Sustainable Digital Economies — Abhishek Singh: One part is that, of course, the way the technology is evolving, there is IP-driven solutions and there …
S87
[Parliamentary Session 3] Researching at the frontier: Insights from the private sector in developing large-scale AI systems — Ammari highlighted META’s open-source approach to large language models, explaining, “META has adopted an open source me…
S88
Building Population-Scale Digital Public Infrastructure for AI — Excellent point. Excellent point, Trevor. And I think you brought out the inherent stress in the phrase diffusion pathwa…
S89
Signature Panel: Building Cyber Resilience for Sustainable Development by Bridging the Global Capacity Gap — Brazil:Thank you, Robin. Distinguished Delegates, it’s an honor to be here today at the Global Roundtable on Building Ca…
S90
Open Forum #58 Collaborating for Trustworthy AI an Oecd Toolkit and Spotlight on AI in Government — Five identified risks: ethical risk, operational risk, exclusion risk, public resistance, and widened gaps between publi…
S91
Pre 10: Regulation of Autonomous Weapon Systems: Navigating the Legal and Ethical Imperative — Issues particularly evident in joint or cross-force environments where systems must function across organizational, nati…
S92
WS #123 Responsible AI in Security Governance Risks and Innovation — This comment elevated the technical discussion to a more sophisticated understanding of systemic governance challenges. …
S93
AI as critical infrastructure for continuity in public services — This statistic provides concrete evidence of the implementation gap between AI pilots and production systems. It challen…
S94
Navigating the Double-Edged Sword: ICT’s and AI’s Impact on Energy Consumption, GHG Emissions, and Environmental Sustainability — An expert panel convened to examine the complex relationship between Information and Communication Technologies (ICTs) a…
S95
Networking Session #50 AI and Environment: Sustainable Development | IGF 2023 — Artificial intelligence (AI) is improving the ways we live, work and solve problems. It can also help us fight climate c…
S96
Smaller Footprint Bigger Impact Building Sustainable AI for the Future — AI’s energy demands. Threaten to outpace green energy progress. Model providers face a stark reality. AI’s energy needs …
S97
Green AI and the battle between progress and sustainability — AI is increasingly recognised for its transformative potential and growing environmental footprint across industries. Th…
S98
Summit Opening Session — The tone throughout is consistently formal, diplomatic, and collaborative. Speakers maintain an optimistic and forward-l…
S99
Keynote-António Guterres — The tone is formal, diplomatic, and aspirational throughout, maintaining a consistent message of urgency mixed with opti…
S100
Opening remarks — In conclusion, the speaker hopes for a constructive meeting, reminding attendees of the global imperative for openness, …
S101
Opening — The overall tone was formal yet optimistic. Speakers acknowledged the serious challenges posed by rapid technological ch…
S102
Parliamentary Roundtable Safeguarding Democracy in the Digital Age Legislative Priorities and Policy Pathways — The discussion maintained a serious but collaborative tone throughout. It began with formal opening remarks emphasizing …
S103
Day 0 Event #188 Top Business and Technology Trends in Government for 2024 — A significant point emphasized in the presentation was the challenge posed by legacy systems:
S104
Prosperity Through Data Infrastructure — However, there are arguments suggesting that legacy systems present challenges in the journey of digitalisation. One vie…
S105
WS #279 AI: Guardian for Critical Infrastructure in Developing World — Hafiz Muhammad Farooq: First of all, thank you very much for inviting me today for this great panel discussion. I’m H…
S106
Agenda item 6: other matters/OEWG 2025 — The overall tone was constructive and diplomatic, with most delegations expressing willingness to compromise and find co…
S107
Open Forum #29 Advancing Digital Inclusion Through Segmented Monitoring — The discussion maintained a collaborative and constructive tone throughout, with panelists building on each other’s insi…
S108
Scaling Trusted AI_ How France and India Are Building Industrial &amp; Innovation Bridges — The discussion maintained a consistently optimistic and collaborative tone throughout, characterized by mutual respect b…
S109
Any other business /Adoption of the report/ Closure of the session — In conclusion, the delegate’s remarks highlighted the enduring spirit of solidarity and collaboration, while also convey…
S110
Dynamic Coalition Collaborative Session — The discussion began with an optimistic, collaborative tone as panelists shared their expertise and perspectives. Howeve…
S111
How Multilingual AI Bridges the Gap to Inclusive Access — The tone was consistently collaborative, optimistic, and mission-driven throughout the conversation. Speakers demonstrat…
S112
Partnering on American AI Exports Powering the Future India AI Impact Summit 2026 — The tone is consistently optimistic, collaborative, and forward-looking throughout the discussion. Speakers emphasize “l…
S113
New Technologies and the Impact on Human Rights — The discussion maintained a collaborative and constructive tone throughout, despite addressing complex and sometimes con…
S114
AI and Human Connection: Navigating Trust and Reality in a Fragmented World — The tone began optimistically with audience engagement but became increasingly concerned and urgent as panelists reveale…
S115
Comprehensive Summary: AI Governance and Societal Transformation – A Keynote Discussion — The tone begins confrontational and personal as Hunter-Torricke distances himself from his tech industry past, then shif…
S116
AI and Digital Developments Forecast for 2026 — The tone begins as analytical and educational but becomes increasingly cautionary and urgent throughout the conversation…
S117
WSIS Action Line C10: Ethics in AI: Shaping a Human-Centred Future in the Digital Age — The discussion maintained a collaborative and constructive tone throughout, with speakers building upon each other’s ins…
S118
Closing Ceremony — The discussion maintains a consistently positive and collaborative tone throughout, characterized by gratitude, celebrat…
S119
Closing Session  — The tone throughout the discussion was consistently formal, collaborative, and optimistic. It maintained a celebratory y…
S120
Closing remarks — The tone is consistently celebratory, optimistic, and forward-looking throughout the discussion. It maintains an enthusi…
S121
Open Mic &amp; Closing Ceremony — Hajia Sani: Hmm. Another round of applause, please. Another round of applause. Thank you so much. He just offered that t…
S122
Parliamentary Closing Closing Remarks and Key Messages From the Parliamentary Track — The discussion maintained a collaborative and constructive tone throughout, characterized by diplomatic language and mut…
S123
Bridging the AI innovation gap — LJ Rich: to invite our opening keynote. It’s a pleasure to invite to the stage the director of the Telecommunications St…
S124
AI Policy Summit Opening Remarks: Discussion Report — “The only way you could see that he was communicating with us is that there was a little bit of a tear coming out of his…
S125
Leaders TalkX: WSIS towards the Summit of the Future/GDC and beyond — In the address, the speaker opens by acknowledging and expressing gratitude to prominent individuals and organisations, …
S126
(Plenary segment) Summit of the Future – General Assembly, 4th plenary meeting, 79th session — Sovereign Order of Malta: Mr. Speaker, Heads of State and Government, Excellencies, ladies and gentlemen. The Sovereig…
S127
WS #184 AI in Warfare – Role of AI in upholding International Law — Mohamed Sheikh-Ali emphasizes the necessity of human oversight and control in AI-powered weapons systems. He argues that…
S128
WS #219 Generative AI Llms in Content Moderation Rights Risks — All speakers agree that despite technological advances, human oversight and involvement in content moderation remains cr…
S129
India unveils MANAV Vision as new global pathway for ethical AI — Narendra Modipresentedthe new MANAV Vision during the India AI Impact Summit 2026 in New Delhi, setting out a human-cent…
S130
GLOBAL COMMISSION ON THE FUTURE OF WORK — A strong lifelong learning system, combined with universal social protection, enables workers to assume their responsibi…
S131
Policies and platforms in support of learning: towards more coherence, coordination and convergence — 30. The learning policies of the United Nations system originate in the conditions set out in Article 101 of the Charter…
S132
WSIS+20 High-Level Dialogue: WSIS Legacy in Motion: Honoring the Past, Shaping the Future — Policy frameworks should take a holistic approach across economic, technical, socio-cultural and governance factors
S133
E U R O P E A N E C O N O M I C A R E A — 29 Several EU social partners in different economic sectors have also made joint pledges under the Alliance for Apprenti…
S134
Agenda item 5: discussions on substantive issues contained inparagraph 1 of General Assembly resolution 75/240 (continued)/ part 4 — Islamic Republic of Iran: Thank you, Mr. Chair. We would like to express our sincere appreciation to you, your team, and…
S135
Panel discussion: International law, cyber-norms, CBMs, capacity building,institutional dialogue — Dr Katherine Getao:I do apologize, distinguished delegates, if I was not clear. My fourth one was capacity building, whi…
S136
Opening Ceremony — This comment introduced a critical counternarrative to tech industry talking points and provided a concrete framework fo…
S137
Scaling AI Beyond Pilots: A World Economic Forum Panel Discussion — It find a lot of startups across the world. These startups, they want the injection of funds, but the most important thi…
S138
Keynote-HE Emmanuel Macron — The speech concluded with a powerful reaffirmation of the central thesis: that the future of AI will be built by those w…
S139
Comprehensive Report: “Converging with Technology to Win” Panel Discussion — The discussion began by comparing two major technology ecosystem models: the U.S. approach, driven by university-industr…
S140
Digital Technologies in Emerging Countries Edited by Francis Fukuyama and Marietje Schaake — Techno-nationalism can be broken down into two subcomponents: political-informational and politicaleconomic. Political-i…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Dr. Washima
2 arguments · 122 words per minute · 251 words · 122 seconds
Argument 1
Trust‑based collaborative ethical frameworks (Dr. Washima)
EXPLANATION
Dr. Washima emphasizes that the responsibility of the gathering is to develop trust‑based collaborative ethical frameworks that can guide AI deployment in fast‑paced public services. These frameworks are intended to ensure safety, fairness, and public trust in AI‑driven governance.
EVIDENCE
She states, “Responsibility is to carve out trust-based collaborative ethical frameworks so that the demands of fast-paced dynamic AI-DPD age … is met by a well-informed design and delivery model” [10].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Trust-based frameworks are grounded in shared values and norms as described in [S23], and the emphasis on trust, safety, and accountability in AI governance is highlighted in [S24].
MAJOR DISCUSSION POINT
Trust‑based collaborative ethical frameworks (Dr. Washima)
AGREED WITH
Robin Scott, Dr. Jitendra Singh
DISAGREED WITH
Robin Scott
Argument 2
AI as a transformative tool for improving quality of life, public services, and inclusive growth (Dr. Washima)
EXPLANATION
Dr. Washima likens AI to electricity, describing it as a great leveler that can transform quality of life, governance, and societal progress. She argues that AI should be harnessed to elevate humanity rather than merely scaling technology.
EVIDENCE
She remarks, “Technology, they say, is a great leveler, and AI, they say, is the next big thing after electricity” [4] and later notes that AI gives “unprecedented power not to do things better but to do better things” for education, governance, health, and the planet [66-68].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
AI is portrayed as a transformative force shaping economies and societies in [S27], and its characterization as a ‘great leveler’ comparable to electricity is echoed in [S2].
MAJOR DISCUSSION POINT
AI as a transformative tool for improving quality of life, public services, and inclusive growth (Dr. Washima)
AGREED WITH
Dr. Jitendra Singh, Subramanian Ramadorai, Speaker 1
Shubhavi S. Radha Chauhan
2 arguments · 115 words per minute · 886 words · 460 seconds
Argument 1
Human‑centric “Mani Vision” and sector‑specific small models (Shubhavi S. Radha Chauhan)
EXPLANATION
She highlights the Prime Minister’s “Mani Vision” (apparently the MANAV Vision unveiled at the summit [S129]), a human‑centric framework for ethical, accountable, and inclusive AI governance, and stresses that future AI will rely on small, context‑specific language models rather than massive monolithic ones.
EVIDENCE
She notes, “Our Honorable PM yesterday outlined Mani Vision, a human-centric framework for ethical, accountable and inclusive AI governance” [23] and adds, “It will be in small language models, context-specific, sectoral, and decentralized” [31].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The Mani Vision framework and the shift toward small, context-specific language models are explicitly mentioned in [S2] and reinforced by calls for niche models in [S28].
MAJOR DISCUSSION POINT
Human‑centric “Mani Vision” and sector‑specific small models (Shubhavi S. Radha Chauhan)
AGREED WITH
Subramanian Ramadorai
Argument 2
Customized competency frameworks and personalized learning pathways (Shubhavi S. Radha Chauhan)
EXPLANATION
She describes the development of sector‑specific competency frameworks and personalized learning pathways for public officials, built on holistic policy frameworks that identify competency gaps and tailor learning.
EVIDENCE
She explains, “Capacity building must therefore focus on enabling our officials to deconstruct complexities, impose appropriate guardrails on data and its use…” [33-34] and adds that the Commission has “established operational guidelines, especially those for identifying competency gaps, leading to personalized learning pathways for each one of our learners” [35].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Sector-specific competency frameworks and personalized learning pathways are outlined in [S2] and further detailed in [S16].
MAJOR DISCUSSION POINT
Customized competency frameworks and personalized learning pathways (Shubhavi S. Radha Chauhan)
AGREED WITH
Anil Shivastava, Robin Scott, Speaker 1, Subramanian Ramadorai
Guilherme Albusco Almeida
3 arguments · 121 words per minute · 613 words · 302 seconds
Argument 1
South‑South cooperation on civil‑service AI training; four‑tiered capacity‑building profiles (Guilherme Albusco Almeida)
EXPLANATION
Guilherme outlines Brazil’s civil‑service AI training programme, which uses four distinct capacity‑building profiles (senior leaders, IT managers, data curators, general civil servants) and stresses collaboration with India as a South‑South partnership.
EVIDENCE
He states, “In Brazil, we have at least four different profiles for capacity building, one for senior leaders, one for IT managers, one for data curators, and the other for general civil servants…” [161-166] and notes the partnership with India and similar organisations in Brazil [155-158].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
South-South partnership models for AI capacity building are discussed in [S30], and Guilherme’s Brazil-India collaboration is referenced in [S2].
MAJOR DISCUSSION POINT
South‑South cooperation on civil‑service AI training; four‑tiered capacity‑building profiles (Guilherme Albusco Almeida)
AGREED WITH
Subramanian Ramadorai
Argument 2
Brazil‑India collaboration on R&D, capacity building, and ethical assessment frameworks; South‑South knowledge sharing (Guilherme Albusco Almeida)
EXPLANATION
He emphasizes the complementary strengths of Brazil and India in AI R&D, capacity building, and ethical assessment, proposing joint efforts and the use of South‑South networks such as Apolitical to scale knowledge sharing.
EVIDENCE
He mentions, “We should consider R&D… there are strong room for cooperation and collaboration” [151-154] and adds, “We have developed a framework for ethical assessment of AI implementation” [161-163] while highlighting partnerships with Apolitical for global knowledge sharing [165].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Joint R&D and ethical assessment initiatives between Brazil and India are highlighted in [S30] and [S2].
MAJOR DISCUSSION POINT
Brazil‑India collaboration on R&D, capacity building, and ethical assessment frameworks; South‑South knowledge sharing (Guilherme Albusco Almeida)
AGREED WITH
Subramanian Ramadorai
Argument 3
Use of AI for climate monitoring, deforestation detection, and reforestation planning (Guilherme Albusco Almeida)
EXPLANATION
Guilherme describes Brazil’s Rural Environmental Registry that uses AI to map private‑land forests, detect illegal logging, and support reforestation, illustrating AI’s role in environmental stewardship.
EVIDENCE
He explains, “In Brazil, we have a system in which we mapped all the forests in private lands. It’s the Rural Environmental Registry. And we’re using AI to understand whether forests have been cut down or not” [209-212].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The environmental impact of AI and its application to climate monitoring are examined in [S36] and [S38], with sustainability considerations noted in [S37].
MAJOR DISCUSSION POINT
Use of AI for climate monitoring, deforestation detection, and reforestation planning (Guilherme Albusco Almeida)
AGREED WITH
Robin Scott, Anil Shivastava
Robin Scott
4 arguments · 150 words per minute · 452 words · 180 seconds
Argument 1
Widespread lack of understanding of ethical frameworks among implementers (Robin Scott)
EXPLANATION
Robin reports that a global survey shows only 26 % of AI implementers understand their own government’s ethical frameworks, leaving the majority to operate without clear guidance, which raises risk.
EVIDENCE
She cites, “only 26 % say they understand their own government’s ethical frameworks. So in other words, 75 % are freestyling, and that builds a great deal of risk into the system” [182-188].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Surveys revealing limited awareness of ethical frameworks and the need for clearer guidance are discussed in [S33] and [S32].
MAJOR DISCUSSION POINT
Widespread lack of understanding of ethical frameworks among implementers (Robin Scott)
AGREED WITH
Dr. Washima, Dr. Jitendra Singh
DISAGREED WITH
Dr. Washima
Argument 2
Survey‑based gaps (ethical awareness, pilot evaluation) and high optimism among public servants (Robin Scott)
EXPLANATION
Robin presents data from an 8,000‑person survey indicating low awareness of ethical frameworks and weak evaluation plans for AI pilots, yet over 90 % of public servants remain optimistic about AI’s potential.
EVIDENCE
She notes, “According to our data, this is an 8,000-person global survey… only 26 % understand ethical frameworks, 75 % freestyling… 72 % say they have a pilot, but only 45 % have a plan to evaluate performance… well over 90 % of public servants are very optimistic” [182-194].
MAJOR DISCUSSION POINT
Survey‑based gaps (ethical awareness, pilot evaluation) and high optimism among public servants (Robin Scott)
Argument 3
Development of an AI‑and‑climate course and promotion of “green AI” principles (Robin Scott)
EXPLANATION
Robin mentions that a dedicated course on AI and climate, developed with the Stanford Doerr School of Sustainability, has been created to educate public servants on the intersection of AI and environmental sustainability.
EVIDENCE
She says, “We have developed a course on AI and climate and understanding the links with the Stanford Doerr School of Sustainability” [201-203].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The push for ‘green AI’ and climate-focused AI education aligns with findings in [S36] and [S38].
MAJOR DISCUSSION POINT
Development of an AI‑and‑climate course and promotion of “green AI” principles (Robin Scott)
AGREED WITH
Guilherme Albusco Almeida, Anil Shivastava
DISAGREED WITH
Anil Shivastava
Argument 4
Global survey highlighting readiness gaps and the need for common evaluation standards (Robin Scott)
EXPLANATION
Robin reiterates findings from the global survey that reveal gaps in ethical framework comprehension and pilot evaluation, arguing that standardized evaluation metrics are essential for responsible AI rollout.
EVIDENCE
She references the same survey data, emphasizing that only 26 % understand ethical frameworks and that many pilots lack evaluation plans, underscoring the need for common standards [182-190].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Calls for standardized ethical evaluation metrics are echoed in [S33] and the regulatory gaps noted in [S32].
MAJOR DISCUSSION POINT
Global survey highlighting readiness gaps and the need for common evaluation standards (Robin Scott)
Dr. Jitendra Singh
3 arguments · 144 words per minute · 2236 words · 927 seconds
Argument 1
Integrity and human‑in‑the‑loop as non‑negotiable (Dr. Jitendra Singh)
EXPLANATION
Dr. Singh stresses that AI systems must operate with integrity and always keep a human in the decision‑making loop; this is presented as a non‑negotiable principle for trustworthy AI governance.
EVIDENCE
He declares, “Artificial intelligence can substitute everything on this planet but it cannot substitute integrity… the basic mantra is to learn to be a good learner and human-in-the-loop is essential” [324-327].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The non-negotiable nature of integrity and human oversight is reinforced by trust and accountability themes in [S23] and [S24].
MAJOR DISCUSSION POINT
Integrity and human‑in‑the‑loop as non‑negotiable (Dr. Jitendra Singh)
AGREED WITH
Dr. Washima, Robin Scott
Argument 2
Hybrid models that combine AI tools with human judgment; AI as an augmenting, not substituting, technology (Dr. Jitendra Singh)
EXPLANATION
He describes AI as an augmenting technology that should work alongside human expertise, forming hybrid models where human judgment guides AI outputs, thereby preserving integrity and accountability.
EVIDENCE
He states, “AI is a powerful tool… we must learn to be a good learner… hybrid model… AI as augmenting, not substituting” [323-327].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Hybrid human-AI models are advocated in responsible AI governance discussions in [S24].
MAJOR DISCUSSION POINT
Hybrid models that combine AI tools with human judgment; AI as an augmenting, not substituting, technology (Dr. Jitendra Singh)
Argument 3
AI as a catalyst for citizen‑centred, accountable governance while preserving human values (Dr. Jitendra Singh)
EXPLANATION
Dr. Singh portrays AI as a catalyst that can enable citizen‑centred, accountable governance, provided that human values, integrity, and ethical oversight remain central to its deployment.
EVIDENCE
He remarks, “AI is a catalyst for citizen-centred, accountable governance while preserving human values” and elaborates on the need for integrity, accountability, and human oversight throughout his address [291-298] and [300-307].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
AI’s role in inclusive, citizen-focused governance is highlighted in the AI Impact Summit report [S25] and its transformative potential in [S27].
MAJOR DISCUSSION POINT
AI as a catalyst for citizen‑centred, accountable governance while preserving human values (Dr. Jitendra Singh)
AGREED WITH
Dr. Washima, Subramanian Ramadorai, Speaker 1
Anil Shivastava
3 arguments · 145 words per minute · 671 words · 275 seconds
Argument 1
Need to re‑engineer legacy processes, ensure multilingual support, and address data‑sovereignty (Anil Shivastava)
EXPLANATION
Anil explains that existing IT systems are siloed and built for past technologies, requiring re‑engineering of data, processes, multilingual capabilities, and security to enable effective AI integration.
EVIDENCE
He notes, “Existing IT systems are very centric… they have silos of data… AI needs contextual data… we need to re-engineer… also need to ensure multilingual support… and address data-sovereignty and security vectors” [126-138].
MAJOR DISCUSSION POINT
Need to re‑engineer legacy processes, ensure multilingual support, and address data‑sovereignty (Anil Shivastava)
AGREED WITH
Shubhavi S. Radha Chauhan, Robin Scott, Speaker 1, Subramanian Ramadorai
DISAGREED WITH
Subramanian Ramadorai
Argument 2
Legacy IT systems are siloed; AI requires re‑engineering of data, processes, and security controls (Anil Shivastava)
EXPLANATION
He reiterates that legacy systems contain siloed data and business logic, which must be restructured to provide the contextual, secure data streams AI models need.
EVIDENCE
He describes, “The existing IT systems are very centric… they have silos of data, silos of business logic, whereas AI needs contextual data… we also need to re-look at exposure and security vectors” [126-138].
MAJOR DISCUSSION POINT
Legacy IT systems are siloed; AI requires re‑engineering of data, processes, and security controls (Anil Shivastava)
Argument 3
Google’s commitment to carbon‑neutral data centres by 2030 and partnership with India for sustainable infrastructure (Anil Shivastava)
EXPLANATION
Anil highlights Google’s pledge to make all its data centres carbon neutral by 2030 and expresses willingness to partner with the Indian government to set similar sustainability targets.
EVIDENCE
He says, “Google, first of all, has committed that by 2030, all our data centers will be carbon neutral… we want to partner with the government of India to ensure that all the data centres we are building in the country have targets to ensure carbon neutrality” [215-218].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Sustainability goals for AI infrastructure, including carbon-neutral data centres, are discussed in [S36] and [S37].
MAJOR DISCUSSION POINT
Google’s commitment to carbon‑neutral data centres by 2030 and partnership with India for sustainable infrastructure (Anil Shivastava)
AGREED WITH
Robin Scott, Guilherme Albusco Almeida
DISAGREED WITH
Robin Scott
Subramanian Ramadorai
3 arguments · 140 words per minute · 1223 words · 520 seconds
Argument 1
Blueprint for a Digital Capacity Building Alliance and global funding model (Subramanian Ramadorai)
EXPLANATION
Subramanian calls for operationalising the proposed alliance, referencing a blueprint for digital capacity‑building labs that outlines shared work for AI development and funding across nations.
EVIDENCE
He says, “Let us take this convention forward to see how the proposed alliance can be operationalized… the Summit Master launched a blueprint for digital capacity building and labs that sets out the share of fair work for developing AI” [104-105].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The concept of a global digital capacity-building alliance aligns with South-South partnership frameworks in [S30] and the inclusive AI agenda of the 2026 summit in [S25].
MAJOR DISCUSSION POINT
Blueprint for a Digital Capacity Building Alliance and global funding model (Subramanian Ramadorai)
AGREED WITH
Shubhavi S. Radha Chauhan, Anil Shivastava, Robin Scott, Speaker 1
Argument 2
Emphasis on edge AI and small, domain‑specific language models for rural and local contexts (Subramanian Ramadorai)
EXPLANATION
He points out that the next billion AI users will interact with tiny embedded AI on devices, and that India’s rural opportunity lies in small, domain‑specific language models that can run on edge hardware.
EVIDENCE
He notes, “The next billion AI users may not interact with 3D and parameter models. They may interact with tiny embedded AI in phones, tractors, classrooms, clinics… India’s rural opportunity lies in small language models that are absolutely domain specific and can run on edge devices” [96-98].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Calls for small, domain-specific models for edge deployment are made in [S28] and reinforced by the focus on small models in [S2].
MAJOR DISCUSSION POINT
Emphasis on edge AI and small, domain‑specific language models for rural and local contexts (Subramanian Ramadorai)
AGREED WITH
Shubhavi S. Radha Chauhan
DISAGREED WITH
Anil Shivastava
Argument 3
India’s “third way” partnership model positioning itself between US‑led and China‑led AI trajectories (Subramanian Ramadorai)
EXPLANATION
He frames India’s AI strategy as a “third way”, offering a partnership‑based approach that is distinct from the market‑driven US model and the state‑led Chinese model.
EVIDENCE
He states, “Globally, AI is framed as a binary race… However, it might lend India offers a third way, in partnership, of course” [74-76].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The ‘third way’ partnership approach is reflected in the Global South collaboration narrative of [S30] and the inclusive AI development goals of [S25].
MAJOR DISCUSSION POINT
India’s “third way” partnership model positioning itself between US‑led and China‑led AI trajectories (Subramanian Ramadorai)
AGREED WITH
Guilherme Albusco Almeida
Speaker 1
2 arguments · 114 words per minute · 284 words · 149 seconds
Argument 1
Launch of the Digital Capacity Building Alliance as a public‑good platform (Speaker 1)
EXPLANATION
Speaker 1 announces the unveiling of a Digital Capacity Building Alliance that integrates global AI principles and digital public‑good standards through a model spanning demand, design, delivery, and continued evolution, positioning it as a public‑good resource for capacity building.
EVIDENCE
He declares, “Today, the Capacity Building Commission unveiled a proposal to forge Digital Capacity Building Allowance… a unique model for demand, design, delivery, and continued evolution… a global public good for inclusive, ethical capacity building” [263-270].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The public-good capacity-building platform mirrors alliance concepts described in [S30] and the AI summit’s emphasis on shared resources in [S25].
MAJOR DISCUSSION POINT
Launch of the Digital Capacity Building Alliance as a public‑good platform (Speaker 1)
Argument 2
AI‑enabled governments with personalized decision support, moving from reactive to adaptive capacity models (Speaker 1)
EXPLANATION
Speaker 1 describes AI‑enabled governments that provide personalized learning paths and smart decision support, shifting public service delivery from reactive systems to adaptive capacity models.
EVIDENCE
He says, “AI-enabled governments, personalized learning paths, smart decision support, from reactive systems to adaptive capacity model” [258-262].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Personalized learning pathways and adaptive decision support are discussed in [S28] and [S16].
MAJOR DISCUSSION POINT
AI‑enabled governments with personalized decision support, moving from reactive to adaptive capacity models (Speaker 1)
Moderator
1 argument · 96 words per minute · 632 words · 393 seconds
Argument 1
Moderator’s framing of questions that link technical risks to capacity‑building needs (Moderator)
EXPLANATION
The moderator asks about technical and operational risks of layering AI onto legacy infrastructure and how initiatives can align infrastructure modernization with workforce capability development.
EVIDENCE
He asks, “What technical and operational risks arise when any systems are layered onto the legacy infrastructure? without structural reform, how can initiatives like Mission Community help align infrastructure modernization with workforce capability development?” [110-112].
MAJOR DISCUSSION POINT
Moderator’s framing of questions that link technical risks to capacity‑building needs (Moderator)
Speaker 3
1 argument · 77 words per minute · 9 words · 6 seconds
Argument 1
Speaker 3’s prompt for audience interaction to surface additional concerns (Speaker 3)
EXPLANATION
Speaker 3 invites the audience to ask another question, encouraging further engagement and surfacing of concerns.
EVIDENCE
He says, “one more question here last question please thank you” [245].
MAJOR DISCUSSION POINT
Speaker 3’s prompt for audience interaction to surface additional concerns (Speaker 3)
Audience
1 argument · 144 words per minute · 364 words · 151 seconds
Argument 1
Call for a generic, hyper‑localizable international AI impact assessment framework (Audience)
EXPLANATION
An audience member proposes developing a collaborative, generic AI impact assessment tool that can be hyper‑localized to different national contexts.
EVIDENCE
The participant says, “maybe we need to have something a more generic collaborative international AI impact framework assessment tool which could look into capabilities not just at the digital leadership level but across the whole organization or country” [246].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need for adaptable AI impact assessment tools is highlighted in the regulatory discussion of [S32] and the ethical foresight challenges in [S33].
MAJOR DISCUSSION POINT
Call for a generic, hyper‑localizable international AI impact assessment framework (Audience)
Agreements
Agreement Points
Need for trustworthy ethical frameworks and integrity in AI governance
Speakers: Dr. Washima, Robin Scott, Dr. Jitendra Singh
Trust‑based collaborative ethical frameworks (Dr. Washima)
Widespread lack of understanding of ethical frameworks among implementers (Robin Scott)
Integrity and human‑in‑the‑loop as non‑negotiable (Dr. Jitendra Singh)
All three speakers stress that AI systems must be anchored in trustworthy, collaborative ethical frameworks and that integrity with a human-in-the-loop is non-negotiable for trustworthy AI deployment. Dr. Washima calls for trust-based collaborative frameworks [10]; Robin points out that only 26 % of implementers understand their government’s ethical frameworks, leaving most to “freestyle” [182-188]; Dr. Singh emphasizes integrity and the necessity of keeping a human in the decision-making loop [324-327].
POLICY CONTEXT (KNOWLEDGE BASE)
This aligns with calls for transparency and explainability as ethical imperatives highlighted at the IGF high-level session on AI ethics [S55] and reflects the broader consensus that AI must be governed with integrity, as emphasized in discussions on enhancing rather than replacing humanity [S71].
AI as a transformative catalyst for inclusive, citizen‑centred development
Speakers: Dr. Washima, Dr. Jitendra Singh, Subramanian Ramadorai, Speaker 1
AI as a transformative tool for improving quality of life, public services, and inclusive growth (Dr. Washima)
AI as a catalyst for citizen‑centred, accountable governance while preserving human values (Dr. Jitendra Singh)
AI for economic development, social good, safe and trusted AI, and human capital (Subramanian Ramadorai)
AI‑enabled governments, personalized learning paths, smart decision support, from reactive systems to adaptive capacity model (Speaker 1)
The speakers converge on the view that AI should be leveraged as a societal lever rather than a mere technology. Dr. Washima likens AI to a “great leveler” that can improve quality of life [4][66-68]; Dr. Singh describes AI as a catalyst for citizen-centred, accountable governance while preserving human values [291-298]; Subramanian notes the summit theme of AI for economic development and social good [11]; Speaker 1 outlines AI-enabled governments delivering personalized decision support and shifting from reactive to adaptive capacity models [258-262].
POLICY CONTEXT (KNOWLEDGE BASE)
The view mirrors the optimism expressed at the AI Impact Summit 2026 that AI can be a catalyst for inclusive development in the Global South [S78] and the emphasis on citizen-centred public-service transformation noted in Rwanda’s digital inclusion strategy [S67].
Preference for small, sector‑specific or edge AI models over monolithic systems
Speakers: Shubhavi S. Radha Chauhan, Subramanian Ramadorai
Human‑centric “Mani Vision” and sector‑specific small models (Shubhavi S. Radha Chauhan)
Emphasis on edge AI and small, domain‑specific language models for rural and local contexts (Subramanian Ramadorai)
Both speakers argue that the future of AI lies in lightweight, context-specific models rather than massive monolithic ones. Shubhavi states that AI will move to “small language models, context-specific, sectoral, and decentralized” [31]; Subramanian highlights the opportunity of “tiny embedded AI” on phones, tractors, classrooms and domain-specific models for rural users [96-98].
Capacity building through customized competency frameworks and personalized learning pathways
Speakers: Shubhavi S. Radha Chauhan, Anil Shivastava, Robin Scott, Speaker 1, Subramanian Ramadorai
Customized competency frameworks and personalized learning pathways (Shubhavi S. Radha Chauhan)
Need to re‑engineer legacy processes, ensure multilingual support, and address data‑sovereignty (Anil Shivastava)
Development of an AI‑and‑climate course and promotion of “green AI” principles (Robin Scott)
AI‑enabled governments, personalized learning paths, smart decision support, from reactive systems to adaptive capacity model (Speaker 1)
Blueprint for a Digital Capacity Building Alliance and global funding model (Subramanian Ramadorai)
A shared emphasis emerges on building human capacity with tailored frameworks and learning routes. Shubhavi describes sector-specific competency frameworks and personalized pathways for officials [33-35]; Anil stresses re-engineering legacy systems, multilingual support and data-sovereignty as prerequisites for effective AI uptake [126-138]; Robin notes the creation of an AI-and-climate training course as part of broader capacity-building efforts [201-203]; Speaker 1 highlights personalized learning paths as a core feature of AI-enabled governments [258-262]; Subramanian references a blueprint for a digital capacity-building alliance that will scale such pathways [104-105].
POLICY CONTEXT (KNOWLEDGE BASE)
Capacity building is framed as an engine of innovation in the ‘AI for Viksit Bharat 2047’ roadmap, which advocates personalized learning pathways and continuous feedback loops [S51]; similar recommendations appear in the AI Policy Research Roadmap that stresses raising awareness and competency across public-sector actors [S66].
South‑South collaboration between Brazil and India on AI R&D, capacity building and ethical frameworks
Speakers: Guilherme Albusco Almeida, Subramanian Ramadorai
South‑South cooperation on civil‑service AI training; four‑tiered capacity‑building profiles (Guilherme Albusco Almeida)
Brazil‑India collaboration on R&D, capacity building, and ethical assessment frameworks; South‑South knowledge sharing (Guilherme Albusco Almeida)
India’s “third way” partnership model positioning itself between US‑led and China‑led AI trajectories (Subramanian Ramadorai)
Both speakers advocate for a collaborative South-South approach. Guilherme outlines Brazil’s four-tiered civil-service AI training model and calls for deeper Brazil-India R&D and ethical-assessment cooperation, noting partnerships with Apolitical [151-158][165]; Subramanian frames India’s “third way” partnership as a collaborative alternative to the US-China binary, positioning India as a partner for other nations [74-76].
POLICY CONTEXT (KNOWLEDGE BASE)
South-South cooperation is highlighted as a priority in the AI Impact Summit 2026, calling for open, non-discriminatory norms and joint research initiatives among Global South countries [S63], providing a policy backdrop for bilateral Brazil-India collaboration.
Integration of environmental sustainability into AI initiatives
Speakers: Robin Scott, Guilherme Albusco Almeida, Anil Shivastava
Development of an AI‑and‑climate course and promotion of “green AI” principles (Robin Scott)
Use of AI for climate monitoring, deforestation detection, and reforestation planning (Guilherme Albusco Almeida)
Google’s commitment to carbon‑neutral data centres by 2030 and partnership with India for sustainable infrastructure (Anil Shivastava)
All three speakers link AI work to climate and sustainability goals. Robin reports a dedicated AI-and-climate course developed with the Stanford Doerr School of Sustainability [201-203]; Guilherme describes Brazil’s AI-driven Rural Environmental Registry that monitors forests and supports reforestation [209-212]; Anil highlights Google’s pledge to make all data centres carbon-neutral by 2030 and its willingness to partner with India on sustainable infrastructure [215-218].
POLICY CONTEXT (KNOWLEDGE BASE)
Multiple policy statements link AI to climate action, including the IGF networking session on AI and Environment that notes AI’s role in sustainable energy systems [S54], UNESCO’s call for AI to support climate goals [S60], and Germany’s AI Flagship Projects for the Environment that fund climate-focused AI solutions [S57]; together they underscore the need to embed sustainability in AI programmes.
Similar Viewpoints
All three highlight that technical and operational challenges of layering AI onto legacy systems must be addressed through targeted capacity‑building, governance reforms and structured alliances. The moderator explicitly asks about technical risks and capacity‑building alignment [110-112]; Anil stresses re‑engineering legacy IT, multilingual support and security as capacity‑building imperatives [126-138]; Subramanian presents a blueprint for a digital capacity‑building alliance that would tackle such risks [104-105].
Speakers: Moderator, Anil Shivastava, Subramanian Ramadorai
Moderator’s framing of questions that link technical risks to capacity‑building needs (Moderator)
Need to re‑engineer legacy processes, ensure multilingual support, and address data‑sovereignty (Anil Shivastava)
Blueprint for a Digital Capacity Building Alliance and global funding model (Subramanian Ramadorai)
Both call for systematic evaluation mechanisms for AI pilots. Robin points out that many pilots lack evaluation plans and stresses the need for common standards [188-190]; Subramanian’s alliance blueprint includes provisions for evaluation frameworks to ensure responsible AI rollout [104-105].
Speakers: Robin Scott, Subramanian Ramadorai
Global survey highlighting readiness gaps and the need for common evaluation standards (Robin Scott)
Blueprint for a Digital Capacity Building Alliance and global funding model (Subramanian Ramadorai)
Unexpected Consensus
Call for a generic, hyper‑localizable international AI impact assessment framework
Speakers: Audience, Dr. Washima, Robin Scott, Shubhavi S. Radha Chauhan
Call for a generic, hyper‑localizable international AI impact assessment framework (Audience)
Trust‑based collaborative ethical frameworks (Dr. Washima)
Widespread lack of understanding of ethical frameworks among implementers (Robin Scott)
Customized competency frameworks and personalized learning pathways (Shubhavi S. Radha Chauhan)
An audience member’s suggestion to create a generic, hyper‑localizable AI impact assessment tool mirrors the ethical‑framework focus of Dr. Washima, the survey‑driven gaps highlighted by Robin, and Shubhavi’s push for customized competency frameworks, showing an unexpected convergence between a non‑speaker input and multiple speakers’ positions.
POLICY CONTEXT (KNOWLEDGE BASE)
The need for a clear, implementable assessment tool echoes critiques of the Global Digital Compact for lacking concrete mechanisms [S70] and the broader call for evidence-based approaches to bridge policy-practice gaps in digital cooperation [S69].
Alignment of AI development with climate responsibility
Speakers: Robin Scott, Guilherme Albusco Almeida, Anil Shivastava
Development of an AI‑and‑climate course and promotion of “green AI” principles (Robin Scott)
Use of AI for climate monitoring, deforestation detection, and reforestation planning (Guilherme Albusco Almeida)
Google’s commitment to carbon‑neutral data centres by 2030 and partnership with India for sustainable infrastructure (Anil Shivastava)
Although each speaker approaches climate from a different angle—education (Robin), operational AI for forest monitoring (Guilherme), and sustainable data‑centre infrastructure (Anil)—they all converge on the principle that AI initiatives must be environmentally responsible, an alignment that was not explicitly pre‑planned.
POLICY CONTEXT (KNOWLEDGE BASE)
UNESCO’s renewed appeal for AI to align with climate responsibility [S60] and Bouverot’s remarks on ensuring AI progress matches environmental protection goals [S58] provide authoritative framing for this alignment.
Overall Assessment

The discussion shows strong convergence on several fronts: the necessity of trustworthy ethical frameworks and human‑in‑the‑loop integrity; AI as a catalyst for inclusive, citizen‑centred development; a shift toward small, edge‑oriented models; extensive capacity‑building through customized competency frameworks and personalized learning; South‑South partnership models, especially Brazil‑India cooperation; and embedding climate‑sustainability into AI programmes.

High consensus – most speakers echo each other’s core positions, indicating a solid shared foundation that can facilitate coordinated policy actions, joint programmes and the launch of the Digital Capacity Building Alliance.

Differences
Different Viewpoints
Gap between the aspiration for trust‑based collaborative ethical frameworks and the current low awareness of existing ethical frameworks among implementers
Speakers: Robin Scott, Dr. Washima
Widespread lack of understanding of ethical frameworks among implementers (Robin Scott)
Trust‑based collaborative ethical frameworks (Dr. Washima)
Robin reports that only 26 % of AI implementers understand their own government’s ethical frameworks, indicating a large implementation gap [182-188]. Dr. Washima, however, calls for the creation of trust-based collaborative ethical frameworks to guide AI deployment, implying that such frameworks are not yet established or widely understood [10]. The tension here is less a direct disagreement than a gap between the current state of awareness and the envisioned governance model.
POLICY CONTEXT (KNOWLEDGE BASE)
The AI Policy Research Roadmap identifies a persistent awareness gap among practitioners [S66], while the Open Forum on Digital Cooperation highlighted implementation gaps between policy intent and on-the-ground understanding [S69].
Resource allocation for climate‑focused AI initiatives
Speakers: Anil Shivastava, Robin Scott
Google’s commitment to carbon‑neutral data centres by 2030 and partnership with India for sustainable infrastructure (Anil Shivastava)
Development of an AI‑and‑climate course and promotion of “green AI” principles (Robin Scott)
Anil highlights a concrete corporate commitment and a partnership model to achieve carbon-neutral data centres by 2030, suggesting that sufficient resources are being mobilised [215-218]. Robin mentions a newly developed AI-climate course, but her aside that it “has too much money” points to unresolved questions about how resources for such initiatives are allocated [201-203]. The two positions differ on whether the necessary financial resources are already in place or remain a constraint.
POLICY CONTEXT (KNOWLEDGE BASE)
Funding mechanisms such as Germany’s AI Flagship for climate and the ‘Make Your AI Greener’ workshop stress dedicated resources for climate-oriented AI projects [S57][S56], and Bouverot’s closing remarks call for financing that matches environmental ambitions [S58].
Approach to enabling AI in public services – extensive re‑engineering of legacy systems versus deployment of lightweight edge models
Speakers: Anil Shivastava, Subramanian Ramadorai
Need to re‑engineer legacy processes, ensure multilingual support, and address data‑sovereignty (Anil Shivastava)
Emphasis on edge AI and small, domain‑specific language models for rural and local contexts (Subramanian Ramadorai)
Anil argues that existing IT systems are siloed and must be re-engineered, with attention to multilingual data and security, before AI can be effectively integrated [126-138]. Subramanian stresses that the next billion AI users will interact with tiny embedded AI on edge devices and that small, domain-specific models are the primary opportunity for rural India, without explicitly calling for large-scale re-engineering [96-98]. The disagreement lies in the depth of systemic change required to enable AI.
Unexpected Differences
Optimism about AI’s transformative potential versus caution about its inability to replace human integrity
Speakers: Robin Scott, Dr. Jitendra Singh
Widespread lack of understanding of ethical frameworks among implementers (Robin Scott)
Integrity and human‑in‑the‑loop as non‑negotiable (Dr. Jitendra Singh)
Robin expresses strong optimism, noting that over 90 % of public servants are very optimistic about AI’s role despite current gaps [190-194]. Dr. Singh, while supportive, warns that AI cannot substitute integrity and must always retain a human-in-the-loop, suggesting a more cautious stance on AI autonomy [324-327]. The contrast between high optimism and a firm warning about non-negotiable human oversight was not anticipated given the overall collaborative tone of the session.
POLICY CONTEXT (KNOWLEDGE BASE)
Debates on enhancing rather than replacing humanity capture this tension, with experts arguing AI should augment human values and not supplant ethical judgment [S71].
Different perceptions of funding adequacy for climate‑focused AI initiatives
Speakers: Anil Shivastava, Robin Scott
Google’s commitment to carbon‑neutral data centres by 2030 and partnership with India for sustainable infrastructure (Anil Shivastava)
Development of an AI‑and‑climate course and promotion of “green AI” principles (Robin Scott)
Anil presents a concrete corporate pledge and partnership indicating sufficient funding for sustainable AI infrastructure [215-218]. Robin’s remark that the AI-climate course “has too much money” sits uneasily with this picture, leaving open questions about how funding for such programs is allocated [201-203]. The divergence in perceived funding sufficiency was unexpected given the shared emphasis on sustainability.
Overall Assessment

The discussion showed broad consensus on the importance of capacity building, ethical governance, and South‑South collaboration for AI in the public sector. Disagreements were mainly technical and implementation‑focused, such as the current awareness of ethical frameworks, the depth of system re‑engineering required, and the allocation of resources for climate‑related AI initiatives. These divergences reflect differing institutional perspectives (government vs. private sector vs. academia) rather than fundamental ideological conflict.

Level of disagreement: Moderate – while all participants share the same overarching goals (ethical, inclusive, and sustainable AI deployment), they differ on how to achieve them. The disagreements are likely to shape policy priorities, with potential implications for the speed of implementation, the design of capacity‑building programs, and the financing of green AI projects.

Partial Agreements
All speakers concur that capacity building is essential for responsible AI deployment and that tailored learning pathways, competency frameworks, and collaborative platforms are needed. They differ on the specific mechanisms (e.g., sector‑specific small models vs. re‑engineering legacy systems vs. South‑South training structures), but share the overarching goal of strengthening public‑sector AI capabilities [10][33-34][126-138][155-166][182-194][263-270].
Speakers: Dr. Washima, Shubhavi S. Radha Chauhan, Anil Shivastava, Guilherme Albusco Almeida, Robin Scott, Subramanian Ramadorai, Speaker 1
Trust‑based collaborative ethical frameworks (Dr. Washima); Customized competency frameworks and personalized learning pathways (Shubhavi S. Radha Chauhan); Need to re‑engineer legacy processes, ensure multilingual support, and address data‑sovereignty (Anil Shivastava); South‑South cooperation on civil‑service AI training and four‑tiered capacity‑building profiles (Guilherme Albusco Almeida); Widespread lack of understanding of ethical frameworks among implementers (Robin Scott); Emphasis on edge AI and small, domain‑specific language models for rural and local contexts (Subramanian Ramadorai); Launch of the Digital Capacity Building Alliance as a public‑good platform (Speaker 1)
All three stress that AI systems must operate under strong ethical oversight and human involvement. Dr. Singh explicitly states that AI cannot replace integrity and must keep a human in the loop [324-327]; Robin highlights the current deficiency in ethical‑framework awareness [182-188]; Dr. Washima calls for trust‑based collaborative frameworks to ensure safety and fairness [10]. They agree on the necessity of ethical governance, differing only in the description of the current gap versus the desired structure.
Speakers: Dr. Jitendra Singh, Robin Scott, Dr. Washima
Integrity and human‑in‑the‑loop as non‑negotiable (Dr. Jitendra Singh); Widespread lack of understanding of ethical frameworks among implementers (Robin Scott); Trust‑based collaborative ethical frameworks (Dr. Washima)
Both advocate for a collaborative, partnership‑based approach that avoids the binary US/China AI race. Guilherme emphasizes Brazil‑India cooperation and South‑South knowledge sharing [155-166]; Subramanian describes India’s third‑way model as a partnership alternative to the two dominant models [74-76]. They share the goal of multilateral cooperation, differing only in the framing of the partnership.
Speakers: Guilherme Albusco Almeida, Subramanian Ramadorai
South‑South cooperation on civil‑service AI training and four‑tiered capacity‑building profiles (Guilherme Albusco Almeida); India’s “third way” partnership model positioning itself between US‑led and China‑led AI trajectories (Subramanian Ramadorai)
Takeaways
Key takeaways
Ethical, human‑centric AI governance is essential; trust‑based collaborative frameworks and a “Mani Vision” were emphasized.
Capacity building for public‑sector AI must use customized competency frameworks, personalized learning pathways, and continuous feedback loops.
Legacy IT systems are siloed; successful AI integration requires re‑engineering data, processes, security, and multilingual support, with a focus on edge AI and small, sector‑specific language models.
International South‑South cooperation (e.g., India‑Brazil) is seen as a viable “third way” to shape global AI norms, share R&D, and co‑develop training programmes.
Environmental sustainability must accompany AI deployment; green AI principles, carbon‑neutral data centres, and AI for climate monitoring were highlighted.
AI is positioned as a catalyst for socio‑economic development and improved public service delivery, but human‑in‑the‑loop integrity remains non‑negotiable.
Resolutions and action items
Launch of the Digital Capacity Building Alliance (public‑good platform) and its accompanying Blueprint for a Digital Capacity Building Alliance.
Commitment by Google to achieve carbon‑neutral data centres by 2030 and to partner with the Indian government on sustainable AI infrastructure.
Agreement to develop sector‑specific, small language models that can run on edge devices for rural and local contexts.
Plan to embed AI within workforce transformation frameworks and to contribute to shaping global responsible‑AI norms.
Proposal to create a generic, hyper‑localizable international AI impact assessment framework/tool (raised by the audience).
Continuation of large‑scale AI training for public servants (e.g., a 1 million target through the Apolitical partnership, with 400,000 already trained).
Unresolved issues
Concrete operational details for the Digital Capacity Building Alliance and its funding mechanisms remain undefined.
Specific technical‑operational steps to align legacy infrastructure modernization with workforce capability development were not fully addressed.
Standardized processes for evaluating AI pilots and measuring outcomes are still lacking.
A universally accepted international AI procurement and ethical‑framework guideline has not been established.
Detailed actions for ensuring climate‑responsible AI beyond high‑level commitments were not specified.
Audience concerns about the timeline for achieving AI‑driven governance goals (e.g., 2047 vs. 2026) were not resolved.
Suggested compromises
Adoption of hybrid models that combine AI tools with human judgment, ensuring integrity while leveraging technology.
Preference for small, domain‑specific AI models over massive monolithic models to balance capability, resource use, and sustainability.
India’s “third way” partnership model, positioned as a middle path between US market‑led AI development and China’s state‑led techno‑nationalism.
Thought Provoking Comments
Technology is a great leveler, and AI is the next big thing after electricity. We must carve out trust‑based collaborative ethical frameworks so that the fast‑paced AI‑DPD age delivers safer public services.
Sets the philosophical foundation of the whole summit, framing AI not just as a technology but as a societal equalizer that requires trust and ethics.
Established the central theme of trust and ethics, prompting subsequent speakers to address governance, capacity building, and the need for ethical frameworks throughout the discussion.
Speaker: Dr. Washima
The future of AI will not be in massive monolithic models. It will be in small language models, context‑specific, sectoral, and decentralized.
Introduces a paradigm shift from large, generic AI models to localized, domain‑specific models, highlighting scalability and relevance for diverse Indian contexts.
Steered the conversation toward edge AI and multilingual capabilities, leading Anil Shivastava to discuss re‑engineering legacy systems for multilingual AI and Guilherme Almeida to emphasize Brazil‑India collaboration on small, sector‑specific models.
Speaker: Shubhavi S. Radha Chauhan
The most important question for this summit is not how far we can scale AI but how we can recognize it as a movement that elevates humanity. India can offer a ‘third way’—a partnership model between the US‑led market race and China’s techno‑nationalism.
Broadens the debate from technical scaling to geopolitical positioning and ethical purpose, positioning India as a potential bridge in global AI governance.
Reframed the dialogue from pure technology to strategic policy, influencing later remarks about India’s 5.8 million IT professionals, the need for inclusive capacity building, and Robin Scott’s focus on global norms.
Speaker: Subramanian Ramadorai
AI is not a layer you can just put on existing systems. Existing IT systems have data and business‑logic silos; to harness AI we must re‑engineer those systems, ensure multilingual data, and address security and data‑sovereignty vectors.
Provides a concrete technical critique of naïve AI integration, highlighting practical challenges that many policymakers overlook.
Prompted deeper discussion on the necessity of redesigning legacy infrastructure, reinforced Shubhavi’s point on sector‑specific models, and set the stage for Robin’s data on governance gaps.
Speaker: Anil Shivastava
Only 26 % of public‑sector AI implementers say they understand their own government’s ethical framework; 75 % are essentially freestyling. Moreover, while 72 % plan pilots, only 45 % have an evaluation plan.
Introduces hard data that quantifies the ethical and evaluative gaps in AI adoption, turning abstract concerns into measurable shortcomings.
Shifted the conversation from aspirational goals to urgent accountability, leading the moderator and other panelists to stress the need for standardized frameworks and evaluation mechanisms.
Speaker: Robin Scott
We have developed a course on AI and climate in partnership with the Stanford Doerr School of Sustainability – a concrete step to align AI‑driven public infrastructure with climate responsibility.
Links AI capacity building directly to environmental sustainability, expanding the scope of the discussion to include climate impact.
Opened a new sub‑topic on green AI, prompting Guilherme Almeida and Anil Shivastava to discuss energy‑efficient models and carbon‑neutral data centers.
Speaker: Robin Scott
Integrity cannot be substituted by AI; human‑in‑the‑loop is essential. Digital public good is simply public good delivered through digital means – the technology is a tool, not the end.
Re‑emphasizes the primacy of human values over technology, summarizing the summit’s ethical thrust and introducing the memorable M‑A‑N‑A‑A acronym for AI governance.
Culminated the discussion by reinforcing earlier ethical themes, providing a memorable framework that resonated with the audience and tied together the various strands of capacity building, governance, and sustainability.
Speaker: Dr. Jitendra Singh
Overall Assessment

The discussion was shaped by a series of pivotal remarks that moved the dialogue from high‑level optimism to concrete challenges and solutions. Dr. Washima’s opening set a trust‑centric agenda, which was sharpened by Shubhavi’s vision of decentralized, sector‑specific AI and reinforced by Subramanian’s geopolitical framing of India’s ‘third way.’ Anil’s technical critique grounded the conversation in implementation realities, while Robin’s data‑driven gaps forced participants to confront the lack of ethical awareness and evaluation. Subsequent comments on climate‑aligned AI and Dr. Singh’s insistence on human integrity provided actionable pathways and a unifying ethical narrative. Together, these comments redirected the flow from abstract enthusiasm to a focused, multi‑dimensional roadmap for responsible AI capacity building.

Follow-up Questions
What technical and operational risks arise when AI systems are layered onto legacy infrastructure, and how can initiatives like Mission Community align infrastructure modernization with workforce capability development?
Understanding integration challenges and capacity‑building alignment is essential for safe, effective AI deployment in government services.
Speaker: Subramanian Ramadorai (to Anil Shivastava)
How can Brazil and India collaborate more closely to shape the global conversation around AI trust, alignment, and governance, and which collaborative areas would have the greatest global impact as AI becomes more autonomous?
Leveraging complementary strengths of the two nations can help define international norms and accelerate responsible AI adoption.
Speaker: Subramanian Ramadorai (to Guilherme Albusco Almeida)
What are the biggest gaps in AI readiness within public institutions, and how can the global conversation be shifted toward work reinvention?
Identifying readiness gaps and promoting new work models are critical for scaling AI responsibly in the public sector.
Speaker: Subramanian Ramadorai (to Robin Scott)
How can governments and AI companies work together to ensure that AI‑driven public infrastructure is aligned with climate responsibility, energy efficiency, and sustainable growth?
Addressing the environmental footprint of large‑scale AI deployments is necessary to meet sustainability goals.
Speaker: Subramanian Ramadorai (to panel)
Can we develop a generic, collaborative international AI impact framework assessment tool—building on UNESCO’s competence framework—that can be hyper‑localized to national contexts and assess capabilities across whole organizations or countries?
A unified assessment tool would help standardize AI governance, procurement, and competence evaluation worldwide.
Speaker: Audience member (Prof. Charu, Indian Institute of Public Administration)
How can governments ensure systematic evaluation of AI pilots, given the gap between pilot implementation and performance assessment identified in surveys?
Without robust evaluation, pilots risk failing to deliver intended outcomes and may introduce unchecked risks.
Speaker: Robin Scott (observed from 8,000‑person survey)
What research is needed to develop sector‑specific, small language models that can run on edge devices for rural and underserved contexts?
Tailored, low‑resource models are crucial for delivering AI benefits to the next billion users in remote areas.
Speaker: Subramanian Ramadorai (statement)
How can public servants improve their understanding of existing governmental ethical AI frameworks to reduce the 75 % ‘freestyling’ risk?
Increasing awareness of ethical guidelines is vital to mitigate misuse and build trust in AI systems.
Speaker: Robin Scott (survey data)
What security and data‑sovereignty challenges arise from integrating AI into existing government IT systems, and how should they be addressed?
AI introduces new attack vectors and data‑privacy concerns that must be mitigated for safe deployment.
Speaker: Anil Shivastava
How can AI be leveraged for environmental sustainability (green AI) and for enhancing climate‑policy decision‑making, such as forest monitoring and reforestation?
Exploring AI’s dual role in reducing its own carbon footprint and supporting climate actions can amplify its societal benefit.
Speaker: Guilherme Albusco Almeida
What mechanisms ensure that AI systems retain human‑in‑the‑loop integrity and do not compromise ethical standards?
Maintaining human oversight is essential to preserve accountability, moral judgment, and public trust.
Speaker: Dr. Jitendra Singh
How can countries develop localized ethical assessment frameworks for AI implementation that complement global standards?
Context‑specific ethical guidelines are needed to address cultural, legal, and societal nuances.
Speaker: Guilherme Albusco Almeida
Should India aim to achieve its AI‑driven governance vision by 2047, and what concrete roadmap is required to avoid procrastination?
Clarifying timelines and actionable steps is critical to meet long‑term national AI objectives.
Speaker: Audience member (unnamed, referencing doomsday clock)

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.