Scaling Trusted AI: How France and India Are Building Industrial & Innovation Bridges

20 Feb 2026 17:00h - 18:00h


Session at a glance: summary, keypoints, and speakers overview

Summary

The AI Impact Summit highlighted deepening Franco-Indian collaboration on artificial intelligence, with leaders from both countries convening to showcase joint initiatives. Estelle David noted that about one hundred French firms across quantum-ready photonics, secure edge AI, mobility, cybersecurity, digital twins and green tech participated, and she cited several concrete agreements, including a strategic partnership between Dacia Technology and GT Solved, a satellite propulsion contract between ExoTrail and Druva Space, and a healthcare collaboration between H-Company and St John’s Hospital, that illustrate growing bilateral trust and investment [3-4][8-11][12-14].


Julie Huguet, director of LaFrenchTech, emphasized that France now ranks among the world’s top three AI ecosystems and that the summit serves to build bridges, share common values such as low environmental impact, and accelerate French startup growth, citing the partnership between H-Company and St John’s Hospital, announced by President Macron, to improve hospital efficiency [39-44][50-51]. She also presented four French startups, Agri-Co, White Lab Genomics, Quandela and Edge Company, as exemplars of technologies ready to benefit from India’s scale [54-58].


In the high-level panel, moderator Arun Sasheesh framed trust as the prerequisite for AI scaling, arguing that large organisations will adopt AI only when they trust it [84-94]. Neelakantan Venkataraman defined trust as “having your back” and stressed that it must be embedded at every layer of the AI stack, from data lineage to compliance with regulations such as India’s DPDP Act and the EU AI Act [130-141]. Valerian Giesz (Quandela) added that trust requires traceability, predictability, verifiability, security and accountability, and announced the MERLIN benchmarking framework to create a shared baseline between the quantum and AI communities [160-168][172-176][259-267]. David Sadek of Thales outlined four pillars, security through “friendly hacking,” explainability, regulatory responsibility, and frugal AI for a reduced carbon footprint, insisting that trust must be demonstrated, not merely promised [188-197]. Tanuj Mittal linked trust to scale by referencing India’s UPI system, noting that once users trust a platform, massive transaction volumes naturally follow [281-283].


The subsequent “AI for Science” session, chaired by Prof. Karandikar, stressed that AI can compress years of research into months but warned that equitable access and reproducibility remain major challenges [369-372][380-384]. Antoine Petit described CNRS’s virtual “AI for Science, Science for AI” centre to foster interdisciplinary collaboration, while cautioning about the risk of AI-generated false papers [462-470][479-482]. Joelle Pineau argued that transparency and standardized evaluation are essential to address the reproducibility crisis, and that AI itself can accelerate reproducibility through open challenges [550-558].


Overall, participants agreed that sustained Franco-Indian cooperation, robust trust frameworks embedded across technology, regulation and governance, and open scientific practices are essential to scale AI responsibly and deliver broad societal benefits [8-11][126-129][272-275][592-603].


Keypoints


Major discussion points


Franco-Indian AI partnership and concrete outcomes – The opening remarks highlighted a series of signed agreements (e.g., Dacia-GT, ExoTrail-Druva Space, H-Company-St James Hospital) that illustrate “real partnerships, real signatures and real commitments between our two countries” [8-12]. Julie later reinforced the strategic value of the summit, noting that the French President announced a new collaboration between H-Company and St John’s Hospital to improve hospital efficiency [50-52].


Trust as the cornerstone for scaling AI – Multiple speakers argued that trust must be built into every layer of AI systems to achieve scale. Arun emphasized that “trust is the only way to scale” and that large organisations will adopt AI only when they trust it [84-92]. Neelakantan defined trust as “I have your back and I will not fail you” and described its evolution from pilot to production, stressing architectural embedding and regulatory codification [130-142]. Valerian listed pillars such as traceability, predictability, verifiability, security and accountability [159-167]. David added “trust is not a label … it’s a proof” and outlined technical, explainability and responsibility dimensions [188-196]. Tanuj illustrated the link between trust and scale with the UPI example [281-283].


Ecosystem-driven innovation and open collaboration – The panel repeatedly called for an ecosystem mindset rather than isolated effort. Neelakantan said “the mindset of an ecosystem… we can’t do it all” [253-256]. Valerian advocated “breaking the walls between quantum and AI” and building a community through shared benchmarks like the MERLIN framework [259-267]. Julie highlighted complementary strengths: India’s “scale, speed” and France’s “deep-tech excellence, scientific force, industrial capability” [62-65].


AI for scientific discovery, reproducibility and global cooperation – The second panel focused on using AI to accelerate research while addressing reproducibility and equity. Karandikar framed AI for science as a “core pillar” to compress decades of research into months and stressed the need to bridge the digital divide [368-374]. Amit described the IRO initiative to create high-end talent, IP pipelines and industry-academic collaborations [386-430]. Antoine explained CNRS’s virtual “AI for Science, Science for AI” centre and warned about the risk of AI-generated false papers [444-482]. Joelle emphasized transparency and evaluation as keys to reproducible AI-driven science [548-558].


Inclusive, people-centric vision for AI’s societal impact – Throughout the summit speakers invoked shared values and the need to reach the “bottom of the pyramid.” Julie spoke of “trustworthy, low environmental footprint, positive impact for humanity” [46-49]. Raj Reddy called for measurable multilingual AI that serves villagers and stressed personal, sovereign edge models for privacy [294-324]. Karandikar and Irakli highlighted the digital-divide challenge and the importance of AI benefiting “all, not a selected few” [368-371][595-599].


Overall purpose / goal of the discussion


The AI Impact Summit was convened to deepen Franco-Indian collaboration, showcase French AI startups, and create concrete partnership opportunities while jointly addressing how to build trusted, scalable AI across sectors. A secondary aim was to explore AI for scientific research, promote reproducibility, and discuss policies that ensure AI’s benefits are inclusive, ethical, and globally distributed.


Overall tone and its evolution


– The session opened with a celebratory and diplomatic tone, praising high-level visits and announcing partnership signings.


– It then shifted to a technical-analytical tone, as panelists dissected the concept of trust, its architectural, regulatory and operational dimensions.


– Mid-discussion the tone became collaborative and ecosystem-focused, emphasizing community building, open benchmarking, and complementary strengths.


– The later AI-for-science segment adopted a forward-looking, visionary tone, balancing excitement about accelerated discovery with caution about reproducibility and equity.


– Throughout, the tone remained optimistic and solution-oriented, concluding with a reaffirmation of shared values and a call for inclusive, people-centric AI deployment.


Speakers

Speakers (from the provided list)


Estelle David – Representative of Business France; opened the summit and highlighted Franco-Indian AI collaborations. Area: International trade & AI partnership. [S1][S2]


Joelle Pineau – Vice President of AI Research at Meta (external source); introduced in the panel as Chief AI Officer. Area: AI research, AI governance. [S4][S3]


Sandeep Kumar Saxena – Chief Growth Officer, HCL Technologies. Area: AI-driven services and growth markets.


Tanuj Mittal – Senior Director, Customer Solution Experience, Dassault Systèmes. Area: Industrial AI platforms and digital twins.


Valerian Giesz – Co-Founder and CEO of Quandela (photonic quantum-computing startup). Area: Photonic quantum computers, quantum AI. [S9]


Antoine Petit – CEO and Chairman, CNRS France (Centre National de la Recherche Scientifique). Area: Scientific research, AI for science. [S10]


Raj Reddy – Professor, founding director of the Robotics Institute, Carnegie Mellon University; 1994 Turing Award winner. Area: AI, robotics, multilingual AI. [S11]


Julie Huguet – Director of the French Tech Mission (LaFrenchTech). Area: French startup ecosystem, AI impact summit. [S12]


Amit Sheth – Founder, Indian AI Research Organization (IRO). Area: AI research, neurosymbolic models for health, sustainability, pharma. [S13][S14]


David Sadek – VP Research, Technology & Innovation, and Global CTO for AI and Quantum Computing, Thales. Area: AI security, “friendly hacking”, AI ethics. [S15]


Irakli Beridze – Head of the Centre for AI and Robotics, UNICRI (UN Interregional Crime and Justice Research Institute). Area: AI for law enforcement, responsible AI frameworks. [S18][S17]


Audience – Members of the audience who asked questions; no specific titles provided.


Arun Sasheesh – Associate Partner & Country Director, TNP Consultants; moderator of the high-level panel. [S23]


Abhay Karandikar – Secretary, Department of Science and Technology, India; moderator of the “AI for Science” session. [S25]


Moderator – Unnamed conference moderator who introduced speakers and managed transitions.


Neelakantan Venkataraman – Vice President & Global Business Head, Cloud AI & Edge Data Communications, Tata Communications. Area: Cloud AI, edge computing, AI-center of excellence. [S30]


Additional speakers (not in the provided list)


Saloni – Session coordinator/moderator (addressed by Arun Sasheesh).


Mark Vialmopillier – Offered a brief tribute introducing Professor Raj Reddy, founding director of the Robotics Institute at Carnegie Mellon University; the speaker’s own name appears only in this likely garbled form in the transcript.


Julie Rouget – Introduced herself as “Julie Rouget, director of the French Tech mission”; appears to be the same person as Julie Huguet but named differently in the transcript.


Professor Zuel Pino – Referred to as “Ms. Joelle Pino, Chief AI Officer” (different spelling of Pineau’s name).


Professor Antonin Petit – Alternate spelling of Antoine Petit (already listed).



(Note: Some names appear multiple times with slight spelling variations; they are consolidated above.)


Full session report: comprehensive analysis and detailed insights

Opening remarks (Estelle David) – Estelle David of Business France opened the AI Impact Summit, welcoming Prime Minister Modi and President Macron at the French pavilion and noting that the week was a great opportunity to showcase French innovation. She highlighted that roughly one hundred French companies were present, spanning quantum-ready photonics, secure edge AI, mobility systems, cybersecurity, digital twins and green tech, and that all participants share the conviction that AI is “the next frontier” [1-5]. She also thanked the Platinum, Gold and Silver sponsors, CMA CGM, Total, BNP Paribas, Capgemini, Schneider Electric and MBDA, who supported the event [70-73].

David then outlined a series of concrete Franco-Indian agreements signed during the week, illustrating the summit’s focus on “real partnerships, real signatures and real commitments”. The first was a strategic partnership between Dacia Technology and GT Solved, signed in Bangalore at the French consulate [8]. A second deal saw ExoTrail and Druva Space contract for the delivery of fourteen satellite propulsion systems, symbolising cooperation in the space sector [9]. Additional signatures included a collaboration between H-Company and St James Hospital in Bangalore, a partnership linking North France Invest with the TIAB, an alliance between T-U-B and a leading Indian innovation ecosystem, and a later H-Company-St John’s Hospital initiative announced by President Macron [10-13][46-51]. David emphasized that these outcomes would not have been possible without the extensive network coordinated by Business France and its partners, praising close collaboration with LaFrenchTech, Numium, Yuja Advisory, the Franco-Thai Chamber of Commerce, the Indo-French Chamber of Commerce and IFKI, which together mobilised French AI champions in India [14-15].


Keynote (Julie Huguet) – Julie Huguet, Director of the French Tech mission, introduced the summit as a bridge-building opportunity and reminded the audience that France now ranks among the world’s top three AI ecosystems, alongside San Francisco and New York [39-40]. She stressed shared values, trustworthiness, a low environmental footprint and a positive impact for humanity, and cited President Macron’s announcement of the H-Company-St John’s Hospital collaboration to make hospitals more efficient and save lives [46-51]. Huguet showcased four French startups ready to leverage India’s scale: Agri-Co (digital agriculture), White Lab Genomics (AI-accelerated gene therapy), Quandela (scalable quantum technologies) and Edge Company (autonomous AI agents) [54-58]. She highlighted the complementary strengths of India’s scale and speed and France’s deep-tech excellence, scientific force and industrial capability [62-65].


High-level panel (moderated by Arun Sasheesh) – Arun Sasheesh framed trust as the prerequisite for AI scaling, recalling the Indian Prime Minister’s “human manner” concept and the French President’s reference to UPI as an example of how trust enables massive scale, arguing that “trust is the only way to scale” and that large organisations will adopt AI only when they trust it [84-94][281-283].


Neelakantan Venkataraman (Tata Communications) – Neelakantan defined trust in simple terms, “I have your back and I will not fail you”, and insisted that it must be built into every layer of the AI stack, from data lineage to explainability, zero-trust networking, advanced guardrailing and end-to-end governance. He highlighted the AI Centre of Excellence (AI COE) that has moved projects from pilots to production, and noted that trust has shifted from soft guidance in early pilots to a baked-in regulatory requirement, citing India’s DPDP Act and the EU AI Act as examples of codified standards [115-117][130-142][135-137].


Valerian Giesz (Quandela) – Valerian Giesz, co-founder of Quandela, presented a five-pillar model of trust for quantum-AI systems: traceability, predictability, verifiability, security and accountability. To operationalise these pillars, Quandela released the MERLIN benchmarking framework, which provides a shared baseline for quantum-AI results and aims to foster a community that bridges quantum and AI research [159-168][172-176][259-267].


David Sadek (Thales) – David Sadek outlined four complementary pillars of trustworthy AI. His team conducts “friendly hacking” to expose algorithmic vulnerabilities, ensures explainability of AI recommendations (e.g., a digital copilot’s decision), adheres to ethical and regulatory compliance (the EU AI Act and French digital ethics charter), and pursues “frugal AI” to minimise carbon footprints while developing AI-for-green applications such as aircraft-trajectory optimisation [188-197].


Sandeep Kumar Saxena (HCL Technologies) – Sandeep Kumar Saxena described how trust is cultivated within organisations. He recounted building AI-driven sales, forecasting and analytics tools for his own use, certifying every team member on AI, and launching “AI products made in India for India and the world”. At the summit he showcased seven solutions for enterprises, citizens and governments [215-224][220-222][217-219]. He argued that trust is built iteratively, through leadership commitment and demonstrable utility for customers.


Tanuj Mittal (Dassault Systèmes) – Tanuj Mittal traced the evolution of trust from a focus on model accuracy to a comprehensive lifecycle approach. He highlighted the need for data lineage, human-in-the-loop oversight, virtual-twin simulations of real-world conditions (e.g., testing a car in Indian road environments), built-in checks to prevent mistakes, and end-to-end validation from conception to decommissioning. He reinforced his point with the UPI example, noting that once users trust a platform, massive transaction volumes follow automatically [227-245][281-283].


Ecosystem mindset – Across the panel, speakers converged on an ecosystem mindset as essential for democratising AI. Neelakantan stressed that “we can’t do it all” and called for ecosystem-wide partnerships [253-256]; Valerian urged the community to “break the walls between quantum and AI” and to share benchmarks through MERLIN [259-267]; Julie highlighted the complementary strengths of India’s scale and France’s deep-tech excellence [62-65].


Transition moment – Mark Vialmopillier offered a brief tribute to Professor Raj Reddy, founder of the CMU Robotics Institute and co-winner of the 1994 Turing Award [300-304].


Keynote (Raj Reddy) – Raj Reddy, a Turing-Award-winning founder of the Robotics Institute, presented a forward-looking, people-centric vision, calling for measurable multilingual AGI that can serve villagers in their native languages and for “personal sovereign edge models” that operate offline to preserve privacy. He also urged the development of humane AI-powered weapons that disable rather than destroy, framing AI as a tool for peace as well as progress [294-324][340-347][306-312].


AI for Science panel (moderated by Prof Abhay Karandikar) – Professor Abhay Karandikar positioned AI as a core pillar capable of compressing decades of research into months, while warning that equitable access remains a major challenge and that the digital divide must be bridged [368-374][369-372].


Amit Sheth (IRO) – Amit Sheth outlined IRO’s strategy to create high-end talent, develop compact neurosymbolic models for domains such as healthcare, sustainability and pharma, and build an open knowledge-graph for drug discovery. He cited the recent FDA-approved arthritis drug developed with a pharma knowledge-graph as an example of AI-driven innovation [386-430][566-572].


Antoine Petit (CNRS) – Antoine Petit described the virtual “AI for Science, Science for AI” centre, which seeks interdisciplinary cooperation between mathematicians, computer scientists and domain experts. He warned that AI can generate large numbers of scientific papers, many of which may be false, creating a risk of wasted effort and misinformation [462-470][479-482].


Joelle Pineau (Chief AI Officer) – Joelle Pineau emphasized the reproducibility crisis and proposed two essential ingredients: transparent public release of artefacts and standardised evaluation criteria. She noted that AI can itself accelerate reproducibility through open challenges and shared benchmarks [548-558].


Audience Q&A – An audience member highlighted a trend whereby foundational scientific models are released openly while fine-tuned commercial versions remain proprietary, potentially limiting equitable access [608-617]. Pineau countered that open-sourcing large models (e.g., the Llama series) dramatically expands adoption and scientific progress, despite industry resistance [618-628].


Policy perspective – Irakli Beridze of UNICRI presented the UN-backed responsible-AI toolkit for law-enforcement, now being piloted in India, Kazakhstan, Nigeria, Oman and Brazil. The toolkit provides practical frameworks, multi-stakeholder dialogues and policy recommendations to ensure AI is used responsibly while addressing public concerns [511-538][536-538].


Conclusion & action items – The summit reaffirmed that Franco-Indian collaboration is deepening through concrete partnership deals, that trust must be baked into every layer of AI systems, and that an ecosystem-driven, open-collaboration model is essential for scaling AI responsibly. Action items include formalising the Dacia-GT, ExoTrail-Druva and H-Company-St James Hospital agreements, launching Candela’s MERLIN benchmark, continued support from Business France and LaFrenchTech for matchmaking events, IRO’s development of neurosymbolic models and open pharma knowledge-graphs, and the rollout of UNICRI’s responsible-AI toolkit in India. Unresolved issues remain around defining universal metrics for multilingual AGI, balancing open-source foundations with proprietary commercial models, preventing the proliferation of AI-generated false papers, bridging the digital divide for the poorest populations, and establishing harmonised global guidelines for responsible AI [272-275][592-603].


Overall assessment – The summit demonstrated a strong consensus on the need for trustworthy, scalable AI built on complementary national strengths, while highlighting substantive debates on implementation pathways, openness versus commercial protection, and safeguards for scientific integrity. The diverse yet convergent perspectives suggest that future Franco-Indian initiatives will need to integrate architectural trust mechanisms, ecosystem partnerships, open-science practices and policy harmonisation to achieve inclusive, responsible AI impact [84-94][130-142][159-168][188-197][259-267][548-558][618-628][511-538].


Session transcript: complete transcript of the session
Estelle David

We were also very proud yesterday to welcome the different leaders who came for the summit, and especially Prime Minister Modi and President Macron, to come on the pavilion and discover the companies and speak with our companies. So as you see, through this week, the French AI delegation was actually more than what you are seeing on the pavilion. Altogether, it was about 100 French companies who came. And actually, when you will meet them, you can find in different sectors like quantum-ready photonics, secure edge AI, mobility systems, cybersecurity, digital twin, and green tech. And actually, they are all convinced, and trust, that AI is the next frontier. So now just to share with you what is making this week very special.

Actually, as with what I said, you can see that it was very intense, that’s for sure, but it’s not only intensity. Actually, as you will see, it’s also a lot of results achieved, and results with real partnerships, real signatures and real commitments between our two countries. I would just name a few for the AI. Maybe the first, with Dacia Technology and GT Solved, where they signed a strategic partnership on Monday evening in Bangalore at the French consulate during the French AI night, and that really shows the strengthening of Franco-Indian cooperation in engineering, automation and intelligence. Thank you. A second one in a different sector, between ExoTrail and Druva Space, where they signed a major contract in the space industry to deliver 14 satellite propulsion systems, which is also a very strong symbol of the cooperation between France and India in terms of space.

Another signature between H-Company and St. James Hospital. And a final one that I can mention is actually a partnership between North France Invest and the TIAB, that are actually uniting all together, which will create new bridges between actually one of Europe’s most dynamic industrial regions and the T-U-B, which is one of India’s most powerful innovation ecosystems. So as you can see, when we see all these signatures, and I’m not just talking about AI, you can see that the dynamism between France and India is very strong. But now, actually, when you see all this, it wouldn’t have been possible without the strength of our collective network, and Business France, the trade and investment agency, is really proud to collaborate, and we have collaborated very closely with different partners: with, definitely, LaFrenchTech, and thank you Julie for the long-standing partnership supporting the French startups and for bringing all these startups here in India; with Numium, the leading French digital and tech association, helping to structure and mobilize the presence of French AI champions in India; also some other partners, Yuja Advisory; but also the co-organizers of this event, this panel at the main summit, the Franco-Thai Chamber of Commerce, Indo-French Chamber of Commerce, IFKI.

I’m still in my… So thank you, thank you to all of you. Now we are actually arriving to today’s session, where we are gathering today the most influential leaders shaping the future of AI. So I won’t be long, but we are really honored to welcome Julie Huguet, Director of the Mission French Tech. Also Arun Sasheesh, Associate Partner and Country Director for TNP Consultants. Neelakantan Venkataraman, Vice President and Global Business Head, Cloud, AI and Edge, from Tata Communications. Valerian Giesz, Co-Founder and CEO of Quandela. Dr. David Sadek, VP Research Technology and Innovation, Global CTO, AI and Quantum Computing, from Thales. Sandeep Kumar Saxena, Chief Growth Officer from HCL Technologies. And finally, Tanuj Mittal, Senior Director Customer Solution Experience from Dassault Systèmes.

So we’ll be really happy to hear your experience. And before I conclude, just two thanks also to our partners, because, you know, this event has also been possible thanks to them. Our Platinum sponsors, CMA CGM, Total. Our Gold sponsors, BNP Paribas, Capgemini, Schneider Electric, and the Silver sponsor, MBDA. Again, thank you very much, all of you. Thank you to our co-organizer, IFKI, and I wish you a fruitful session. Maybe just before I end, also a big thanks to the teams, the different teams, the Business France teams, but all the French teams all together, who worked like crazy to make this week possible.

Moderator

(Applause.) Thank you very much, Estelle. We now move forward to our keynote address. It is my pleasure to invite Ms. Julie Rouget, director of LaFrenchTech. Julie leads one of the world’s most dynamic innovation ecosystems, LaFrenchTech, representing thousands of deep tech companies and scale-ups shaping Europe’s technological leadership. Julie, over to you. (Applause.)

Julie Huguet

Thank you. Good morning, everyone. Thank you. I’m Julie Rouget, I’m director of the French Tech mission, so we support the growth of French startups in France and abroad. I’m truly delighted to discover the tech ecosystem here in India, a country that trains around 1.5 million engineers every year. I think it’s the highest number in the world, so I’m very impressed. The AI Impact Summit is an opportunity to create more bridges between France and India, and exactly one year ago, actually, we hosted the AI Summit in Paris. That moment helped us, helped our ecosystem to structure itself. It was the opportunity to attract investment, to unlock talent, to accelerate the creation of French startups. Today, the French tech ecosystem is strong and ambitious.

According to Dealroom, the top three AI ecosystems globally are now San Francisco, New York, and Paris. We are very proud of it, and we are really sure that the AI summit helped us to build this strong ecosystem. Across France, AI is becoming a pillar of our industrial transformation. We already have major European leaders such as Mistral AI or H-Company. And I’m convinced that the AI Impact Summit here in Delhi would be as valuable for India as it was for us. For the French tech, this week in India was of course a great opportunity to showcase French innovation. But it was also an opportunity to deepen our partnership with India. Beyond business, I’m truly convinced that we share common values: trustworthy, low environmental footprint, positive impact for humanity.

We support innovation when it reinforces our economies. Of course, we are committed to making the world a better place for all of us, but also when it brings real progress for humanity. Innovation only makes sense when it serves the greatest number. And to give you a concrete example, the French President Macron announced yesterday that H-Company and St. John’s Hospital in Bangalore have started a collaboration to make hospitals more efficient and to contribute to save thousands of lives. In healthcare, in agriculture, climate, and many other sectors, Franco-Indian partnerships are key for innovation with real impact. This is why I was really happy the whole week to be here with outstanding French startups, companies already working with India, like Estelle told us a bit earlier, and others ready to build strong and strategic partnerships here.

And thank you. And maybe I will introduce a few of them. Agri-Co is transforming agriculture through digital tools that connect farmers directly to markets. White Lab Genomics uses artificial intelligence to accelerate gene therapy development. Quandela is building scalable quantum technologies that will shape the future of computing. And Edge Company develops advanced AI agents capable of computer use to perform complex tasks autonomously, just like a human would. For these innovations to become global leaders, international development is key. And we all know that the world is changing. Economic alliances are evolving. We see it with Canada, Latin America, Gulf countries, and obviously here in India. Today, India represents a scale of 1.4 billion people, 200,000 startups. It’s huge.

France represents deep tech excellence, scientific force, industrial capability. And I think this complementarity is powerful. In France, we like to schedule meetings weeks in advance. In India, we learned to be a bit more flexible. And honestly, innovation also requires agility, and perhaps a bit of Indian wisdom. That’s what we learned as well this week. And it was, like Estelle said, a very important week for the startups who came with us. So I wish you all a good session and a great day. And thank you for being here with us this morning.

Moderator

Thank you so much, Julie. We will now move to our high-level panel discussion, where leaders from telecom, quantum, industrial AI, cloud infrastructure, and enterprise digital transformation will reflect on how our two countries can jointly accelerate trusted AI across sectors. I am pleased to introduce our moderator for this session, Mr. Arun Sasheesh, Associate Partner and Country Director, TNP Consultants. Joining Arun on the panel are an exceptional group of leaders: Neelakantan Venkataraman, Vice President and Global Business Head, Cloud AI and Edge Data Communications; Valerian Giesz, Co-Founder and CEO, Quandela; Dr. David Sadek, Vice President, Research, Technology and Innovation, Global CTO, AI and Quantum Computing, Thales; Mr. Sandeep Kumar Saxena, Chief Growth Officer, HCL Technologies; Tanuj Mittal, Senior Director, Customer Solution Experience, Dassault Systèmes. With that, ladies and gentlemen, it is my pleasure to hand over the session to our moderator.

Arun Sasheesh

Thank you, Saloni. Good morning, everyone. It's a pleasure and a privilege to be part of this summit and to moderate such an esteemed panel. I would like to start by thanking Business France, IFKI, and the AI Impact Summit organizers for giving us the opportunity to discuss something very important: trusted AI. Maybe I'll start with what happened here yesterday. Our Prime Minister spoke about the human-centred concept he introduced. The French President talked about scaling, and he used UPI, the Indian payment system, as a good example of scale. And if you really think about it, there is a large element of trust involved: the way we in India accepted UPI means we trust it.

And when we trust things, scale is possible. Usually, when people talk about topics such as trust or safety, there is a bit of pessimism, a focus on challenges. But in this particular session, I'd like to be more optimistic and present trust as the only way to scale. If you want the large corporations, the banks, the governments to adopt AI, they need to trust it. And only when these organizations adopt AI can we really achieve scale. So I'd like to set the tone with that comment. And in the last five years, especially after COVID, we have been facing changes quite rapidly, right?

I mean, things are moving from one thing to another. We all started our careers elsewhere, and today we are talking about AI; a lot of evolution in our lives as well. So I want to start from that point: please introduce yourself, but also tell us about the evolution you have gone through, and how you define trust. Maybe we'll start with you, Neel.

Neelakantan Venkataraman

Thank you. A very warm good morning to all of you, and thank you, Business France, for having me here. It's a pleasure to be here talking to all of you, and hopefully we'll have a nice interaction. Just to introduce myself, I head the cloud business for Tata Communications, which includes general-purpose cloud, now AI cloud, edge, and dedicated private clouds for our enterprise customers. We are an international company: 80% of the business still comes from India, and 20% from outside of India. As part of our cloud business, we did have a large AI/ML offering. And about four years back, when suddenly the transformer architecture came onto the scene, I would reckon we didn't know about it at all.

So when it came up, we thought: what is this new architecture, and how is it going to impact us? OpenAI and ChatGPT came up, and then we started thinking about how we were going to apply this to our businesses internally, and also how we were going to offer it as a service to our customers. So ours has been a journey of a lot of learning over the last three years, I would say. All of us are learning, and it has been pretty fast-paced and technically steep. Through all the organizational levels, right from the CEO to the bottom-most, we had to learn what it would take for this new world to adopt Gen AI: how do we adopt Gen AI within the company, and how do we adopt it outside and offer it to our customers.

So there has been a tremendous scale of change, and real potential for innovation for our customers and for the company. We established an AI CoE within the company about three and a half years back. We had a lot of pilots going on internally, and now they are in production. Similarly for our customers in the enterprise world, and beyond enterprise, for government and for institutions which work very closely with government on citizen-scale projects; all of us have seen that, right? So truly, in the last five years, it has moved from POCs and pilots to production. Production at an entry level, I would say; scale is yet to be achieved.

It's production in the sense that there is a return on investment in the enterprise context and a reasonable outcome for citizen-scale projects, and therefore we should start putting it into production and then, of course, scale it. And scaling means that trust has to be put on steroids. So let me talk about trust now. I would describe trust, in very simple words, as: I have your back and I will not fail you. That's trust; beyond that, there's nothing. So when we deploy these systems, the stack, and then the use cases and applications, trust inherently has to be a foundational element.

It cannot be a bolt-on on top of what we have built; it has to be built in at every layer. And trust has also evolved within AI systems over the last five years. It started off, because these were POCs and pilots not really exposed to end users in a big way, as something for a closed user group, and therefore more of a good-to-have. But now it has moved to being foundational, more architectural in nature: every element of the architecture needs to have trust built in. From a regulatory point of view, trust has also evolved. Earlier it was all soft guidance, saying you need to be ethical, you need to have transparency; but now it is baked into regulatory policies and requirements, whether the DPDP Act, which has been operationalized in India, or the EU AI Act, which is already operational.

So now it is in black and white. And from a technology point of view, as I said, trust is foundational and architectural. You need explainability built into the outcomes: the behavior of the system should be predictable, it should be explainable, and it should be auditable. For the data which is fed into the models, trained on, and inferenced on, and for the outcomes which result, you need a very clear data lineage and end-to-end governance. We talked about edge computing, about billions of devices which could be inferencing at scale, and therefore, for whatever happens in the cloud and whatever happens at the edge, the entire workflow and process has to have end-to-end visibility in terms of governance. And finally, resiliency is also trust: it should not break. So from Tata Communications' point of view, when we talk about trust being the bedrock and foundational element of AI, so that it scales when you put it into production, we mean it at every layer.

At the infra level, we build in trust components, including zero-trust networking, because networking is the invisible layer which carries data across AI platforms. At the software and platform layers, we have advanced guardrailing technology, data lineage, data governance models, and end-to-end data pipelining and management. So I'll just hand it back to you. Long answer; sorry for that.

Arun Sasheesh

No, no, not at all. It's very important. And for us, Tata is synonymous with trust, so I have to mention that. Well, being with a French company, I know about Quandela. But would you like to tell us about Quandela, your evolution, and how you define trust from a quantum computing perspective?

Valerian Giesz

Thank you very much. Maybe I will first introduce Quandela a little. It's a startup that came out of a CNRS lab; we use CNRS technology to build photonic quantum computers. We are a full-stack company developing software and hardware, and now we partner with industrial players like Thales to move quantum from the lab to industry, to the real world, and to deploy systems. As a co-founder and COO, trust is a pillar of our roadmap, because we need to build reliable systems and demonstrate compliance and security in order to scale. That's very important for us. So when you ask what trust means in my view, well, I'm an engineer, so basically it's easy.

First, traceability. Traceability, because we need to trace the systems, the models, and the data that we use for AI. Even in quantum, we use quantum artificial intelligence and develop quantum machine learning, and for all of this it is important to trace the results and to get reproducible runs. Second, predictability: you need to know where the limits of the models are, and where the failures are as well, which is why it's important to investigate them. Verifiability is the third, because we need to benchmark performance; actually, we are at this step now. At Quandela we released a framework called Merlin for machine learning.

It is used to benchmark applications and performance on quantum computers using AI techniques, and to run stress tests on the applications. Fourth, security. And the fifth pillar is accountability: how do we make sure we have clear ownership along the value chain of AI on quantum computing, between hardware providers, software providers, and certification providers? We need clear ownership of everything. With all of this together, we will be able to work in trust, to build trust for the end users, and to scale. That's it for me. Thank you.

Arun Sasheesh

Thank you, Valerian. And Dr. David, you are in charge of AI and quantum computing at Thales, both evolving topics. How do you see this, and what is trust for you? You have multiple topics in hand.

David Sadek

Hello. For us at Thales, trust rests on several pillars. One of them is security: we have a team doing what we call friendly hacking, which friendly-attacks our own algorithms to identify their breaches and vulnerabilities and to propose countermeasures. And by the way, this team won a challenge from the French MOD two years ago, because the team succeeded in retrieving sensitive data which had been used to train a system. The third pillar is explainability of our systems. If you have a digital copilot in a cockpit recommending that the pilot make a left in 45 miles, for example, the pilot should be entitled to ask why: why should I do that, especially if she or he had something different in mind? And the system should be able to answer "because there is a threat, there is a thunderstorm," and not "because layer number three of the neural net was activated at 30%."

And finally, the fourth pillar, last but not least, is what we call responsibility, and responsibility is twofold. One stream is compliance with ethics principles, laws, and regulations; as you know, in Europe we have the AI Act, and Thales also issued a digital ethics charter a few years ago, which comes in 10 commitments that we are really working to achieve; it is on our strategic and business roadmap now. The second stream is about the full carbon footprint and energy consumption. We have teams working on frugal AI, to minimize the volume of data used to train systems, for example; this is minimizing the footprint of AI technology itself. And the complement of this is what we call AI for green: how to use AI to minimize the footprint of applications, like working on optimizing the trajectories of aircraft to minimize the condensation trails they generate.

So, to conclude this first part, I would say that trust is not a label, and it is not a promise; it is a proof. Things have to be proved in our business. Thank you.

Arun Sasheesh

Thank you, David. Sandeep, coming to you: we are in the services industry, where the whole operation is built on relationships and trust. So how are you coping with these new challenges, with all these new technologies coming up? What's your take on this?

Sandeep Kumar Saxena

Thank you. Thank you for inviting me here. It's a very valid question, and I will not answer it in a very technical way, because I'm sure the panel has covered all the aspects around technology, architecture, and governance. My name is Sandeep. I have been in London for the last 24 years, and I'm moving to India next month to accelerate the India business. I was managing the European business for HCL Tech; we are about a $15 billion company providing services. Then I took this job of Growth Markets, which is India, the Middle East, Africa, and France. It gave me a very different perspective, because I was managing about a $1.5 billion business, and now here I come into a completely different world.

And I started like a startup: I built my own systems, based on AI. Like we say, before you preach to anybody, you learn yourself. So all my systems today for Growth Markets, which is what I lead, are built on AI: my inside-sales engine, my business analytics, my forecasting, everything. I have moved from analytics to reasoning, and I am hoping I will reach predictability in some way, because the agents are still not predictive; they are still reasoning. But that's where I started. Every person in my sales and delivery teams is certified on AI, and I started with myself. If you have to embrace AI, it starts from the top, from the leader; and we talked about trust, which starts from you, if you as a leader imbibe it. There is no Excel sheet in my world, and there is no PowerPoint in my world. You ask a question using voice and you get an answer on a dashboard; I can show you right here. Of course I will not tell you my forecast for this quarter, but you ask a question and you have it. You ask a question about a company and you get the answer in two and a half minutes, and that is the power of AI. Earlier, a lot of people were trying to dig data from here and there; now it is two and a half minutes, whether you ask for the market approach or anything else you want to do. So, in my view: imbibe it yourself. It is an iterative process. You do not build trust just like that; you build it over a period of time. You have to be patient, you have to learn, you have to help others learn, and that learning process continues over a period of time, and then you build trust.

So my advice to anybody: the reason I moved to India is very exciting. It's a land of opportunity, and it feels like coming home. And you are in the NCR, which we call Delhi; it is the home of HCL Tech. We have a very unique proposition, in India and globally, which is what we call AI products. Very proudly, made in India, for India and for the world: HCL Software. We have the expertise of our global services, working with a lot of customers across the globe. So it gave me the opportunity to bring AI products and services together into what I call AI solutions. In this AI Impact Summit we have launched seven solutions, not just for enterprises but for citizens and for governments as well. You are more than welcome in Hall 4, 4.5; if you have not visited, please go and see what we are talking about. These are solutions that will help us protect ourselves: fraud detection systems, compliance systems, training systems, skilling systems, not just for enterprises. So, to me, AI is about people, progress, and planet. Thank you.

Arun Sasheesh

Coming to you, Tanuj. Dassault is such a flag bearer of French innovation. How do you see this whole evolution, and what does trust mean at Dassault? Thank you.

Tanuj Mittal

Thank you, Arun, and good morning, everyone. I represent Dassault Systèmes, which champions the cause of industrial AI platforms. Now, on this point of trust: the definition, the expectation itself, has evolved over the last several years. Five years back, for example, AI was still in silos, and the definition of trust was mostly centered on the accuracy of the output. You have a model, you feed it data, you put in a query; if the results are near to your expectation, you are happy. But that is no longer the situation, because of widespread understanding of AI as a topic, and adoption as well. Now there are new dimensions which have been added to make it trustworthy, and there are quite a few points I wanted to highlight.

Much of this has already been covered by my fellow panelists, but for the sake of clarity, and at the cost of repetition, I will say it again. The first is the lineage of the data: the industrial AI platform needs to ensure, by design, that the data being leveraged to solve a problem is ethical, that it has traceability, and that no mischievous data is being leveraged. With that done, when the output comes, it is credible and trustworthy for the people who are going to use it. The second point I wanted to highlight is people in the loop. We still have a long way to go before we trust a totally automated system without human intervention. We still like to have, at least at the governance level, people in the loop who will ensure that the processing and the output given by the machines are indeed in line with the objective for which the system was created.

One hundred percent trust in machines alone is still a little far off, so people in the loop definitely build trust for all of us. Another aspect, particularly from an industrial AI perspective, is to simulate the result of an AI model in a real-world environment. For example, when you design a car, you design it in context: the car has to run on roads, and the condition of roads changes from place to place. If you really need to trust a car which was, for example, developed elsewhere in the world but is being used in India, people will trust it if that car is at least tested in the real-world environment of India as a context. You now have virtual twins not only of the product; at Dassault Systèmes you also have virtual twins of the environment.

So you can simulate how that car will behave when it actually gets on the road in Indian conditions. That builds trust. Another example is the checks and balances built into the model itself, so that it does not let you make mistakes, whether unintentional or deliberate: the kind of compliance you have already built into the model. If that is robust, the chances of getting a wrong or broken output are far lower, and that builds trust. And the last point I wanted to highlight: if an AI application is still in silos rather than end-to-end, from conceptualization to decommissioning, the overall output is less trustworthy. Compare that with a situation where, right from conception up to decommissioning, you have been able to simulate the whole process multiple times, prove it, streamline it, and then launch it.

That builds a lot of trust for the people who are actually going to build that system in the physical world, and consequently for the people who are going to use it. So these are some of my views. Arun, back to you.

Arun Sasheesh

Thank you. Thank you, Tanuj. I think we have some more time, and I'm glad that all of you, in fact, spoke to the deep strength of French innovation and French technology, and to Indian scale and speed, in a way. So I quickly want everybody's point of view: what is the mindset change you are looking for, to build trust and to democratize AI at scale? Neela, quickly?

Neelakantan Venkataraman

I would say that the mindset change we have to move towards is a mindset of ecosystem, because we can't do it all. For example, we partner with Thales on many of the security components we provide as part of a solution. So it's an ecosystem play, and we need to work very closely to make sure the trust is not broken, and the trust architecture is maintained across the ecosystem.

Arun Sasheesh

Valerian?

Valerian Giesz

On my side, I think the priority should be to break the walls between quantum and AI and build a huge community. This is also why, at Quandela, we released Merlin, a framework which aims to do that. Because that's the point: trust comes from benchmarking and reproducibility, not from one-off charts. Merlin has been released with one very pragmatic first mission: establish trust between the AI community, AI developers, and quantum computers, a brand-new technology which is now available. We have actually published some reproductions of papers; we are here to show quantum machine learning results in a controlled environment. We are turning scattered claims into a shared baseline, building a community, and inviting people to use them.

So, yes, my main topic is: let's break the walls and share what we have learned, in order to establish trust all together and build a common baseline, especially between France and India. In France, we can develop the technologies; in India, we can scale them. So we have an ecosystem and a community.
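
The reproducibility-and-benchmarking idea Valerian describes can be sketched in a few lines. This is a hypothetical harness, not the actual Merlin API (whose interfaces are not shown in this session): `run_model` and `benchmark` are illustrative stand-ins for the general pattern of fixing seeds, re-running a workload, and fingerprinting the configuration so results can be compared against a shared baseline.

```python
# Hypothetical sketch of a reproducible-benchmark harness (illustrative
# names only; this is not the Merlin API). Trust here comes from fixed
# seeds, a recorded configuration fingerprint, and repeatable runs.
import hashlib
import json
import random

def run_model(params, seed):
    """Stand-in for a (quantum) ML inference run; deterministic given a seed."""
    rng = random.Random(seed)
    return [rng.gauss(0, 1) * p for p in params]

def benchmark(params, seed=42):
    """Run the workload twice with the same seed and fingerprint the setup."""
    first = run_model(params, seed)
    second = run_model(params, seed)
    fingerprint = hashlib.sha256(
        json.dumps({"params": params, "seed": seed}).encode()
    ).hexdigest()[:12]
    return {
        "reproducible": first == second,  # identical seeds must give identical runs
        "fingerprint": fingerprint,       # lets others compare against the same setup
        "output": first,
    }

report = benchmark([0.1, 0.5, 0.9])
print(report["reproducible"], report["fingerprint"])
```

The design point is the one the panel makes: a claim becomes trustworthy when anyone can re-run it under a recorded configuration and compare fingerprints, rather than relying on one-off charts.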

Arun Sasheesh

What’s your take, David?

David Sadek

Well, I would say that in France we have spent decades building things that are really supposed to work in contexts where failure is forbidden, with companies such as Thales, Dassault, and Airbus, and it has taken us decades to do this. So we live in a world of certification, of regulation, of mathematical proofs. Trust has to be proved; this is very important. We cannot afford, as I said earlier, to just declare trust, to say, okay, please trust us. When you deal with critical systems, you have to prove the trust. I used to say that trust is gained by the drop and lost by the bucket, so this is very important. And India has been doing something equally extraordinary, I would say, in record time, with this digital infrastructure at billion-human scale, which is really extraordinary. I think the combination of depth and scale between France and India is really the very challenge here.

And to keep trust within this challenge is probably the way to go to make people adopt AI at large scale. Thank you.

Arun Sasheesh

Sandeep, for you. Can you just say one word?

Sandeep Kumar Saxena

Yeah. Just be open-minded and learn to adopt change. Adaptability. Very simple. There is nothing else.

Arun Sasheesh

And you, Tanuj?

Tanuj Mittal

Yeah, quickly: the scale is directly proportional to the trust we build in the system, for sure. And I'll build on the example you gave initially, which our Prime Minister also quoted: UPI. It was launched in 2016, and last year, in December, it clocked some 21 billion transactions, translating to some 30 lakh crore rupees worth of money moving between people. And today, UPI is used even by the most digitally illiterate person in India; he doesn't hesitate to put his trust, with his money, in the system. So if you build the trust, the scale comes automatically.
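
As a quick sanity check, the figures Tanuj quotes can be turned into a back-of-envelope calculation. This sketch uses only the numbers from his remarks plus the standard Indian numbering conversions (1 lakh = 1e5, 1 crore = 1e7); it is an illustration, not official NPCI data.

```python
# Back-of-envelope check of the UPI figures quoted above: 21 billion
# transactions in December, worth some 30 lakh crore rupees.
transactions = 21e9
value_rupees = 30 * 1e5 * 1e7  # 30 lakh crore = 3e13 rupees

avg_ticket = value_rupees / transactions           # ~1,429 rupees per transaction
tx_per_second = transactions / (31 * 24 * 3600)    # sustained rate over the month

print(f"average ticket: {avg_ticket:.0f} INR")
print(f"sustained rate: {tx_per_second:.0f} tx/s")  # on the order of 7,800/s
```

An average ticket under 1,500 rupees, sustained at thousands of transactions every second, is exactly the "everyday small payments at national scale" pattern that trust makes possible.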

Arun Sasheesh

Thank you, gentlemen. I think we have almost finished our time. Thank you very much; I encourage you to meet with the speakers, and thank you very much for your time.

Moderator

Thank you once again to our moderator and to all our distinguished panelists. I would now invite all the speakers to please remain on stage for a brief memento presentation by Mr. Mark Vialmopillier, and for a group photo. Ladies and gentlemen, please join me in applauding our speakers as we take this moment together. Thank you.

He was the founding director of the Robotics Institute at Carnegie Mellon University, and he was instrumental in helping to create the Rajiv Gandhi University of Knowledge Technologies in India, to cater to the educational needs of low-income gifted rural youth. He and Edward Feigenbaum won the 1994 Turing Award, sometimes known as the Nobel Prize of computer science, for their exemplary work in the field of artificial intelligence.

I now request Professor Raj Reddy to take the stage to deliver his keynote.

Raj Reddy

phone in your pocket, it was listening to you and using it to guide your discussion. I'm hoping we'll create user-friendly interfaces so that when I speak in Telugu, you can hear it in Hindi, and when you speak in English, I can hear it in my preferred language. And I think we are there; we can get there very quickly, and it's being done already. There are two startups in India, called Sarvam and Bharat Jain, both trying to do it. My request is that we create a quantitative, measurable metric that we have achieved this goal. Because to me, what exists is not enough. People will say we already have multilingual intelligence, we have systems that will speak, and you can speak in one language.

But it's not usable, especially if you're a person in a village and you don't even know where to begin. So the first issue is: how do we create a multilingual AGI, and how do we make sure we have measurable progress? There's a statement: if you can't measure it, you can't improve it. We need to improve the existing models, and they will probably need more computation, more memory, and more bandwidth. Fifty years ago, we created a thing called the 3M computer: a MIPS of processing, a megabyte of memory, and a megapixel display. Today, we should create 3T computers: a terabyte of memory, a teraflop of computational power, and a terabit of bandwidth. That's what we should aim for. That means every one of us should have in our pocket an AI companion that actually runs what we call foundation edge models.

Right now, the models on the edge are like three billion or nine billion parameters; we're off by a factor of 100, and we need to get there. And India can... where am I? How am I doing for time? There used to be a timer here; whenever it is time, tell me and I'll stop. Okay, so that's one. The second important point I want to make is about people at the bottom of the pyramid. Most of the talks I've heard, most of the expectations, assume you are AI-enabled and can actually make effective use of AI. I come from a little village. I guarantee you not one of them knows anything about computers or AI, and they are simply not going to benefit from this whole technology. So what we need to do, just like the agricultural revolution of M.S. Swaminathan, is figure out a way to get this technology to people at the bottom of the pyramid.
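
To make the scale of that remark concrete, here is a back-of-envelope sketch. It assumes the "three billion or nine billion" figures refer to parameter counts, and uses common weight precisions (fp16, int8, int4); none of the specific byte sizes come from the talk.

```python
# Rough arithmetic behind the "factor of 100" remark: if today's edge models
# are around 3 billion parameters and the target is 100x that, do the weights
# of a ~300B-parameter model fit in the terabyte of memory a "3T computer"
# would carry? (1 TB taken as 1000 GB, decimal.)
params = 3e9 * 100  # 300 billion parameters

sizes_gb = {
    "fp16": params * 2 / 1e9,    # 2 bytes per parameter -> 600 GB
    "int8": params * 1 / 1e9,    # 1 byte per parameter  -> 300 GB
    "int4": params * 0.5 / 1e9,  # 4 bits per parameter  -> 150 GB
}
for name, gb in sizes_gb.items():
    print(name, gb, "fits in 1 TB:", gb <= 1000.0)
```

Under these assumptions, even the 100x model's weights fit within the terabyte budget at every precision shown, which is why the 3T target makes a pocket-sized foundation model at least arithmetically plausible.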

Again, I'd be happy to talk about any of these at much greater length, but we only have a short time. Then, in order to do both of these things, I said we need teraflop, terabyte systems, and what we need are personal, sovereign edge models. Currently, if you talk to anyone, they'll say we already have access to AI. But it is not private, not personal, and not secure, because these systems are always going to the cloud to access the AI models; as soon as you do that, you have no privacy. In the future, we want systems which are personal, autonomous, and can be used to do things.

I'm talking about cognitive assistants that are always on, always working, always learning. And the challenge is how to get there: we have to cut it off from the grid, because if it goes to the grid, it is no longer private. Anyway, there is a whole set of issues of that kind. How much time do we have? Somebody tell me. There are three or four other topics we could talk about. One is: I had a child come to me and say, if AI is going to teach me and knows everything, why should I go to school? The answer to that will take longer than two minutes, but I only have two minutes.

But you can figure it out. Basically, what we need to do is teach the kid learning to learn, using AI, through dialogue; and learning to think, by teaching critical thinking. Right now, most kids in India don't even open their mouths in classrooms; they're afraid. So we need to get over that barrier, let them talk and think and go through critical thinking; and learning to do, because you have to learn how to execute. With that, I'm going to stop, but I want to leave you with one other thing, which you can figure out. One of the things I remember from the Vedas is Om Shanti Shanti Shanti. Peace. One of our keynote speakers said that AI-based autonomous weapons are going to destroy the world.

That's a risk. But why don't we have humane weapons? When a missile is going to hit a hospital or a school, it is easy with AI to discover that and deflect the missile. Why should we even kill the soldiers? They're innocent; they're just somebody recruited, and they're being bombed and killed. We should build humane weapons that disable rather than destroy. There are lots of very interesting issues of this kind, and we need to think about them. Thank you. Namaskar.

Moderator

A very good morning, ladies and gentlemen. Our next session is a panel discussion on AI for Science. The panel will be moderated by Professor Abhay Karandikar, Secretary, Department of Science and Technology, who is also the chair of the AI for Science Working Group. I would now request the panelists to please come to the dais. The panelists for the session are Mr. Irakli Beridze, Head of the Centre for AI and Robotics, UNICRI; Professor Antoine Petit, CEO and Chairman, CNRS, France; Ms. Joelle Pineau, Chief AI Officer; and Mr. Amit Sheth, Founder, Indian AI Research Organization. A very warm welcome again to the panelists. Right, group photograph.

Okay, I request all on the dais to please come forward for a group photograph. We'll have the photograph for you on your mementos. Thank you, panelists. I now hand over to our moderator, Professor Abhay Karandikar, Secretary, Department of Science and Technology, to carry forward the panel discussion. Sir, over to you.

Abhay Karandikar

Thank you. Thank you, Ekta. Distinguished panelists, colleagues, and members of the global scientific community: we have a very distinguished panel today, and it is my pleasure to welcome you to this panel on AI for Science, which we consider a core pillar of our vision for this India AI Impact Summit. Today we stand at the threshold of a new research paradigm, and our goal is not just to witness the AI revolution, but to steer it towards a more equitable, inclusive, and transparent future. In today's AI world, we are moving beyond traditional methods: AI-driven models and automated experimentation have the potential to compress decades of research into months.

The rapid advance of these technologies, however, has so far not been equitably distributed, and that is one challenge; many regions still face significant barriers. But the realm of possibility for using AI in scientific discovery continues to generate a lot of excitement. Today, we are joined by leaders who represent the entire spectrum of scientific innovation: policy makers, institution builders, and people from the governance and national research ecosystems. I look forward to the panelists' insights on the exciting possibilities in AI for science, and on how we can bridge the digital divide and build a genuinely reciprocal global scientific ecosystem. So with this, I will begin with a few questions.

I will request the panelists to answer; of course, they are free to elaborate on anything else. And then I think we will open the floor to the audience. So let me begin with Dr. Amit on the far end. So, Amit, you have been building IRO as a national-style institution in India. If you can just tell us: how can this model help overcome the specific barriers that we have identified in this region, such as inadequate compute and fragmented data sets? And also, I would like you to elaborate on how we can ensure that AI research conducted in our centres of excellence actually reaches the translational stage, addressing real-world challenges.

So if you can just, you know, take five to seven minutes on this.

Amit Sheth

Hello. Yeah. Thank you very much, Professor Karandikar. This is a perfect question for me to talk about; this is why I'm here. I moved here from the USA after 44 years to address exactly the question you asked. Two days ago, I was on another panel, and I asked the audience this question: if I were the founder of DeepSeek, with all the funding that he had and has, could I find those 200 to 250 AI engineers and researchers that he had access to, to build DeepSeek? Out of around 100 people in the audience, three raised their hands, saying, yeah, we might. Of those three, two were students. So only one, you know, mature person basically thought that we could do that.

And I think that gives an answer to what we need to do. So India is well on its way to growing many people who know something about AI, and they will certainly have the necessary skills. India has been big in IT services, and whatever IT services need, they will be able to supply; the skill set that people have here would be adequate for that. But two very important members of IRO's board, Ajay Chaudhary and Sharath Sharma, have extensively talked about, or lamented, that India has not been a product nation. It has not made any global products; virtually, I mean, hardly any global brands have been developed in India.

And for that, we need more than skills. We need people at the high end of expertise. That means our own indigenous research capacity, our own ability to train innovatively. And that's what we need to do. A very common model has been that, you know, we do our bachelor's here and then go abroad. Take the example of Arvind Srinivasan: he did IIT Madras, then did his PhD at Berkeley (I did mine at Ohio State). Then he worked for three companies, DeepMind, OpenAI, and Google, and then he started his own company. But that was also in the U.S. We want that to be done here, right? The same ecosystem in which he got trained after leaving India, we want to provide in India, right?

And there are, I think, a lot of things happening. As you know, there is a 40% decrease in Indians going to the United States for studies, and that will continue for a while now, right? Most of you know of the results. So, first and foremost, IRO is developing an environment to create high-end talent of innovators. And by the way, if you look, IRO's founders are professors who have graduated nearly 200 top-end PhDs, so we know how to create that. Secondly, we have created a broad variety of collaborations with various universities, and we are starting to do that with industry. And we are creating significant infrastructure to support IP creation and licensing, and to work with the corporates and startups who will make the products.

So the idea is that we'll co-innovate: we'll work jointly at IRO with the companies, with the startups, with the entrepreneurs. And we have already lined up a large number of investors, angel, seed, as well as growth stage; they are all hungry for deep-tech AI startups, and we will provide a comprehensive environment for that. Some of us founders have also done companies: three of the four companies that I have founded are AI companies licensing the research I did at my university. Ramesh Jain has done more companies than I have, and he is also a co-founder. So we understand the entire pipeline it takes to go from lab to global products.

And so this is what we are going to do for India. And this was it. Okay. Thank you. Thank you.

Abhay Karandikar

Now, let me just switch gears and go to Professor Antoine. You have been the chairman and CEO of CNRS France, and CNRS, as you know, operates at a scale that most research organizations can only imagine. So, two questions. First, what structural shifts do national research and funding agencies need to make to support an interoperable scientific ecosystem that can sustain AI research beyond short-term pilots? And the added question: is there a need to build an AI-for-science platform as a mega-science facility?

Antoine Petit

So, thanks for this invitation. Yes, two words about CNRS. CNRS in French means Centre National de la Recherche Scientifique, and probably you don't need an AI translator to understand that it means National Center for Scientific Research. And it's true that we're a big institution: we employ more than 35,000 people, among which 30,000 are scientists, and we cover all fields of science. And clearly, AI has opened a new era in science, in some sense, because AI is not only an accelerator of existing techniques; it forces us to imagine new ways to do science. Just to illustrate this: if you look at materials science, the way it worked was, roughly, first you define new materials and then you study the properties of these materials.

Now you say, I would like to have a material with such-and-such properties, and then, thanks to AI, you build the material, with high probability that it will satisfy these properties. So in some sense, you see, it's not just a global acceleration; it's a reversal, in some sense, of the way to do science. And this opens a new era in which you really need talents, of course, but you also need cooperation between different sciences. And that's probably a challenge for an old institution, if I may, like CNRS. We were organized classically by science. We cover all sciences, including the humanities and social sciences. But you see that with AI, you really need new ways for scientists to cooperate.

And this means that, as usual, the key point is talents, and it means that we have to build ways to push people to interact. That's why we created, some years ago, a virtual center called AI for Science, Science for AI. We have to create some kind of virtuous loop between, in some sense, producers of AI (mathematicians, computer scientists) and consumers of AI, who can come from every discipline. But the trick is that these producers will not simply produce tools or software to be used by the consumers; the consumers will bring, in some sense, new attempts at new ways to do research.

And that's clearly something we try to do. In addition, we absolutely need computing facilities at the highest level, even if we also try, as a lot of people do, to work on more frugal AI, so that the carbon footprint does not stop the development of this AI. So that's clearly a challenge for a center like CNRS, but I know that it is a challenge all over the world. And probably a key point is to really start from scientific use cases in order, as I said, to rethink the way we do science. So do we need to have a platform for that? I don't know. We clearly need to have cooperation.

That's absolutely key. At CNRS, we have a long tradition of cooperation with India, and with DST in particular. And clearly, from my point of view, the way I feel India approaches AI, in a very, very pragmatic way, can be an example for us. You really try to apply AI for your citizens. And in some sense, for science, I think the process should be the same: we should start from very pragmatic scientific questions in different fields and see, thanks once again to cooperation between data scientists, computer scientists, mathematicians and colleagues from the other fields, how we can apply AI. But AI for science also has some risks. In particular, you can produce a lot of papers thanks to AI.

And it's not clear whether these papers are right or not. In some sense, we could waste all our time producing false papers with AI and then refereeing these papers with AI as well. That's a difficulty we all face; I think none of us has a solution right now. But let us be optimistic and think that AI for science, once again, will allow us to make progress and to discover new results, but also new ways to access these results. In particular, there are right now fascinating applications of AI to mathematics, a bit frightening in some sense, because new results have been obtained in mathematics without the help of any human. Does it mean that AI will replace scientists? I…

Abhay Karandikar

Okay, so do you think AI will replace scientists, or will it act as a co-scientist or a hybrid scientist? So let me just introduce Professor Joelle Pineau. You have an academic background, and you are now a chief AI officer, so you have worked in industry as well. So, just your take.

Joelle Pineau

…the properties of new crystals. And in this particular case, once you've done the ranking, you take your top-ranked candidates, and you still need to run them through a wet lab to verify the properties. Your mathematical model has some imperfections, some approximations, some errors. But by having the ability to rank the candidate solutions, you cut down the search time drastically. In the old days, you had to list the possible solutions and test them one by one in the lab, using your intuition about the order in which to test them.

But now you have a ranking algorithm that tells you in what order to test them. So, for those of you who remember the web before the PageRank algorithm, when the search to find a website of interest took incredibly long: all of a sudden you had a good ranking algorithm, and it was a complete game-changer for retrieving information. And now it's a complete game-changer for finding candidate solutions to problems in AI. And this process that I described for this one case applies across all sorts of other areas, whether it's biology, whether it's mathematical theorems, and so on and so forth. So this is not magic. There is an organization to how you take the data, how you use it in a generative model, how you do the ranking, and then how you verify your solutions.
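The generate-rank-verify loop described here can be sketched in a few lines. This is a minimal illustration only, not any real discovery system: the candidate generator, the surrogate ranking score, and the wet-lab check below are all invented stand-in functions.

```python
import random

def generate_candidates(n, seed=0):
    """Stand-in for a generative model proposing candidates.

    Each candidate is a random parameter vector; a real system
    would sample from a trained generative model instead.
    """
    rng = random.Random(seed)
    return [[rng.uniform(0, 1) for _ in range(3)] for _ in range(n)]

def surrogate_score(candidate):
    """Cheap learned ranking model (here: a fixed toy score).

    It approximates the target property imperfectly, which is why
    the top candidates still go to the expensive verification step.
    """
    return sum(candidate) / len(candidate)

def wet_lab_verify(candidate, threshold=0.6):
    """Stand-in for the expensive wet-lab measurement."""
    return surrogate_score(candidate) >= threshold  # toy ground truth

def discover(n_candidates=1000, top_k=10):
    """Generate many candidates, rank them all, verify only the top few."""
    candidates = generate_candidates(n_candidates)
    ranked = sorted(candidates, key=surrogate_score, reverse=True)
    return [c for c in ranked[:top_k] if wet_lab_verify(c)]

hits = discover()
print(f"{len(hits)} of 10 top-ranked candidates verified")
```

The point of the sketch is the cost structure: 1,000 candidates are scored cheaply, but only the 10 top-ranked ones reach the expensive verification step, which is exactly the search-time reduction the ranking step buys.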

And the verification process changes depending on the domain. In some cases, the better your model of the data (and we hear a lot about world models, the ability to predict the properties of the system), the more you can accelerate the discovery: you get better ranking, and you have to take fewer solutions to the lab. And so that's just to give you a sense of how to use it in practice, to make this a little bit more concrete for people. Thank you.

Abhay Karandikar

Now let me come to Dr. Irakli Beridze. Irakli leads the United Nations Interregional Crime and Justice Research Institute's Centre for AI, where he manages one of the first UN programs dedicated to AI research.

So, Irakli, what is your take on the risks versus benefits that, in your experience, AI for science can potentially pose, and on the issues, you know, that even the other speakers have raised?

Irakli Beridze

Thank you very much. Thank you for the question, and thanks to the organizers for putting this together and inviting me to the panel. It's a real pleasure to share the panel with the distinguished speakers who spoke before me. I will give some reflections on what we are doing and how we're looking at scientific discoveries, including in the social sciences and other areas, how that translates into policy developments in some of the United Nations streams, and how we are working with that. So I'm leading a centre for artificial intelligence and robotics for one of the UN agencies, called UNICRI. And our mandate is anything related to AI: crime prevention, criminal justice, rule of law, human rights, and now AI literacy.

The center itself opened in 2017 in The Hague in the Netherlands, and we have a global mandate supporting law enforcement agencies all over the world to use AI in a responsible way. We develop specialized toolkits and policy frameworks for that. We also support investigators in using AI to solve concrete crimes. And at the same time, we are assessing risks: how criminals and malicious actors can use artificial intelligence, and how we can support global frameworks to ensure that AI is used in a beneficial way and its risks are mitigated properly. So this is the type of framework we are working in. A couple of points now, starting from the broad side, from the United Nations.

Obviously, the UN just approved a scientific advisory board; this is an extremely positive development. Just an hour ago, there was a panel about science in relation to AI governance and how crucial it is, especially for policy makers and the broader audience, to understand what we are actually trying to govern. What we are hoping is that the Scientific Advisory Board is going to do just that. Quoting the Secretary-General of the United Nations, who said that policy should be as smart as the technology it aims to guide: it is so true, and right now there are quite a lot of misconceptions and disconnects in that sense. Now, a little bit about law enforcement and how we are looking at it.

There are a number of things, and there are a lot of aspects that could be touched upon. Several years ago, when I started the center and we started our programs, especially on the responsible use of AI by law enforcement, most law enforcement agencies were not using AI. We are talking about back in 2018; they didn't even know what the tools were, and we had really a handful of examples here and there. And now, last summer, we conducted our regular global meeting, AI for Law Enforcement, this time hosted in Brazil, and we had so many use cases that we didn't actually know what to showcase. Right?

On the one hand, this is a really good development: law enforcement needs to use AI, and it needs it to solve problems. Right now, without AI tools, the vast amount of data that exists out there cannot be interpreted, cannot be put to use; but at the same time, it has to be done in a responsible way. So what we are doing is developing specialized toolkits for the responsible use of AI, and that involves multi-stakeholder dialogues: we bring in scientists, law enforcement agencies, governments, and academia to put together those findings and frameworks so that this can be applied directly in policy. So India is one of the pilot countries right now.

We have five countries where this toolkit has been implemented: India, Kazakhstan, Nigeria, Oman and Brazil. A couple of days ago we had a meeting at the Central Bureau of Investigation, and we understood that a lot of progress has already been made in the implementation of this particular project. At the same time, we have launched a scientific project on how to ensure that the public trusts the use of AI by law enforcement, and in a few weeks we are going to issue policy recommendations and the report that comes out of it, which is again a very crucial form of governance of AI in this particular field.

AI has been used by law enforcement, but the public has a fear of it, and perhaps a misunderstanding, or a right understanding, of how it is being used and applied in reality. So all of this is happening there. Thank you.

Abhay Karandikar

Thank you, all the panelists. I think before we open up, I just had one quick question, not in any order, for Dr. Pineau. I had this question for you since you made a very important point about AI being looked at as an instrument. Now, you know, one question I had is that there is this reproducibility crisis in science. So what do you think: do we need any standard or methodology so that AI-generated discoveries are considered, you know, as real or as reliable as conventional ones?

Joelle Pineau

I do appreciate the question. I've been quite concerned about reproducibility, more generally in the field of AI, for a number of years, starting around 2018, and I have published quite a few papers specifically on this topic of reproducibility. I'll keep it very, very short. I do think this is an issue. I do think AI can be an instrument to accelerate the reproducibility of scientific findings, because specifically in those cases the question is often already there: there's a candidate methodology, and so that means we can apply the tools of AI, using reasoning methods and generative methods, to accelerate reproducibility. We've looked at doing that and running reproducibility challenges; I've run an annual reproducibility challenge around some of the AI conferences, and so I think there's a lot of opportunity there.

I would emphasize that there are two necessary ingredients, which are often associated with discussions of responsible use of AI. The first is transparency: to facilitate reproducibility, it helps to have the artifacts of the scientific process be publicly available. The second is evaluation: just reproducing a method without being very specific about the criteria can be difficult. So I think by spending some time on transparency and evaluation, we can really facilitate this process.
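The two ingredients named here, transparency (shareable artifacts, including the random seed) and evaluation (an explicit matching criterion), can be made concrete with a toy reproducibility check. Everything below is illustrative: the "experiment" is a seeded simulation standing in for a real training run, and the tolerance is an arbitrary choice.

```python
import math
import random

def run_experiment(seed):
    """Toy 'experiment': estimate the mean of a noisy process.

    The seed is the shared artifact that makes the run
    reproducible (the transparency ingredient).
    """
    rng = random.Random(seed)
    samples = [rng.gauss(0.5, 0.1) for _ in range(1000)]
    return sum(samples) / len(samples)

def reproduces(reported, seed, tolerance=1e-9):
    """Explicit evaluation criterion: a rerun must match the
    reported metric within a stated tolerance."""
    return math.isclose(run_experiment(seed), reported, abs_tol=tolerance)

reported_metric = run_experiment(seed=42)    # the published result
print(reproduces(reported_metric, seed=42))  # exact rerun matches
```

Without the seed (transparency) the rerun would not match exactly, and without the stated tolerance (evaluation) "matches" would be undefined; the sketch only exists to show why both ingredients are needed at once.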

Abhay Karandikar

Okay. Amit, your…

Amit Sheth

Yeah, so I think we've gotten great things, like productivity and the other things that were mentioned, out of using very large models trained on arbitrary data. We plan to bring to India something very unique. From the very beginning, in fact, when I had a chance to talk to the Prime Minister, we said that India needs to make its mark in a new form of AI. And in this case, I get the chance to explain perfectly what we are doing. Instead of using a big model and using it as an instrument or partner, we are developing models that are very specific. We call them compact custom neurosymbolic models, such that we solve a specific problem deeply.

IRO has taken healthcare, sustainability and environmental science, and pharma as initial domains. Recently in pharma, there is a company called BenevolentAI, and they had FDA approval of a new drug, a rheumatoid arthritis drug, which was developed with the use of a knowledge graph and deep learning. So in our case, we want to create a specific model for specific problem solving. Neurosymbolic means that we can make the models explainable, safe, aligned, grounded, with deeper reasoning options and planning, and so on and so forth. And so I think this is an alternative model for AI that is likely to come up and will solve problems deeply, very specifically, with high value.
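One way to read "compact custom neurosymbolic" is a learned scorer whose outputs are accepted only when a symbolic knowledge base can ground them. The sketch below is purely illustrative, not IRO's actual design: the triples, drug names, and scoring function are all invented.

```python
# Toy knowledge graph: (subject, relation, object) triples.
KG = {
    ("drugA", "inhibits", "JAK1"),
    ("JAK1", "implicated_in", "rheumatoid_arthritis"),
}

def kg_supports(drug, disease):
    """Symbolic check: the drug must inhibit some target implicated
    in the disease for the hypothesis to be grounded."""
    targets = {o for (s, r, o) in KG if s == drug and r == "inhibits"}
    return any((t, "implicated_in", disease) in KG for t in targets)

def neural_score(drug, disease):
    """Stand-in for a learned model's confidence in a drug-disease link."""
    return 0.9 if drug == "drugA" else 0.4

def grounded_prediction(drug, disease, threshold=0.5):
    """Accept a candidate only when the neural score is high AND the
    knowledge graph can explain it (the 'explainable, grounded' part)."""
    return neural_score(drug, disease) >= threshold and kg_supports(drug, disease)

print(grounded_prediction("drugA", "rheumatoid_arthritis"))  # True
print(grounded_prediction("drugB", "rheumatoid_arthritis"))  # False
```

The design point is that the symbolic side can always produce the chain of triples that justified an accepted prediction, which is where the explainability comes from.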

Abhay Karandikar

Okay. Just quickly, I wanted to ask you this question: do you think AI for science can act as a bridge to solve problems in some of the priority sectors, like climate resilience, agriculture or energy, particularly for countries that have limited experimental facilities?

Antoine Petit

I have two hours, right? Yes. No, no. Clearly, as I said before, AI will play a key role, in particular because it has this ability to treat a huge amount of data. I said before that we are also a consumer of AI. If I look at the domains that produce the largest amounts of data, it's not at all mathematics or computer science; it's particle physics and astronomy, and they need new techniques based on AI to treat this data properly. But coming back to North-South relations, as you said, I'm convinced that we need cooperation. We live in a period where sovereignty has become a buzzword. But sovereignty does not mean, from my point of view, isolation. We need to collaborate.

We need to share. We need to develop open science and open software, and clearly this is not in opposition to the will for sovereignty. And clearly, to be brief, I think that we need to start from use cases, either use cases coming from civil society or use cases coming from science. And we, as developed countries… as you know, France has a particular history with Africa, and for a long time we tried to explain to African people what they need. Now we have understood, at least I hope, that the main point is to understand what they actually need, and to try to develop cooperation in order to fill those needs. So thank you.

Abhay Karandikar

Irakli, you actually made an important point about responsible AI. What do you think about shared global ethics for AI, so that AI-driven scientific breakthroughs are governed by some kind of shared ethical framework?

Irakli Beridze

Yes. Okay. Yes, thanks a lot. So there are many, many things happening at the moment in the world. On the one hand, we have the global digital divide, where a lot of countries are investing in the technology and advancing, including in education and scientific breakthroughs. And then you have quite a large portion of the world which is either staying behind or at risk of staying behind. For example, right now only half of the world has AI or digital strategies and governmental spending or allocations for that; the other half doesn't. That digital divide is very dangerous, and there are numerous calls on how to minimize it. At the level of the United Nations, there are many types of streams, but I don't think it's enough, and I think a lot more has to be done.

And hopefully, through AI and some shared platforms and shared collaboration, the scientific breakthroughs can bridge that divide and everyone can benefit. And when I see the title of this AI Impact Summit, it could not resonate with me more: welfare of all, happiness for all. AI should certainly benefit all, and not a selected few. And I think that summits like this, and hosting a summit in the Global South, should give a renewed impetus for doing all of that. Thank you very much.

Abhay Karandikar

Thank you very much. Now since we are running out of time, we just have time for two quick questions. So we can take from here. Yes, please, go ahead.

Audience

So my question is for Dr. Pineau and Dr. Sheth. You know, I work at the intersection of AI and synthetic biology. Google released AlphaFold into the public domain, and then they announced a successor for drug discovery, which they have chosen to keep private. So it's very interesting that the foundational model in fundamental science was released into the public domain, but the one which has commercial applications in drug discovery, Google has chosen to keep private. My question is: do you see this as a trend, where scientific foundation models, as far as they relate to fundamental science, will be released as open source, but if they are fine-tuned for commercial applications, they will be kept private?

Do you see this as a trend, and what do we do about that, Professor Sheth, in India?

Joelle Pineau

Of course I can't speak to DeepMind's strategy; that belongs to them. I've been in deep disagreement with their open-sourcing strategy for many years, respectfully so. I do think that the circulation of scientific assets and ideas is absolutely for the benefit of all. And I will say it is possible to go against that trend. In 2023, I was responsible for a language model called Llama. At the time, the whole industry was against open-sourcing large language models; we went against that. We open-sourced the Llama 1 model, Llama 2, Llama 3. Today we're looking at over 3 billion downloads of this family of models. It's possible to see disturbances to those trends, and I think specifically in the field of scientific research there's so much more to be gained by sharing assets and sharing ideas than by keeping them closed.

But that takes courage, that is going against the grain and it takes vision.

Amit Sheth

I want to express deep admiration for that approach and the trend that you started in making models open source. India has to develop its own models. We just had a whole day yesterday with the pharma industry; they are our partners, and with the access to information and data they can provide, we will develop our own model for drug discovery. We are ourselves developing a very large pharma knowledge graph; we have already developed a decent one, and we will be training our own model with deep pharma and drug-related knowledge, our own version. Thank you.

Abhay Karandikar

So, just one last question we will have at the end. Just be brief, I think 30 seconds, and then I will have one of the panelists answer in another 40 seconds.

Audience

my question is

Abhay Karandikar

yeah go ahead

Audience

My question is: are there any government guidelines for responsible global AI?

Abhay Karandikar

Anyone want to answer this? Right.

Irakli Beridze

So, there are numerous guidelines on the responsible use of AI in many different domains. From our side, from the angle of the UN where I am working, we developed guidelines, and not only guidelines but a practical framework, on the responsible use of AI in law enforcement, and law enforcement is probably one of the most sensitive applications of artificial intelligence. That toolkit, that practical framework, has now been unveiled and it's working; it has been tested in many countries, and, as I mentioned, India is one of the first countries implementing it, which is very admirable. Thank you.

Abhay Karandikar

Thank you very much. With this, I think our time is up and we have to close the session. I would like to thank all the panelists. Thank you. Thank you all. I would just like to give away the mementos for the panel discussion. Thank you. Thank you.

Related ResourcesKnowledge base sources related to the discussion topics (41)
Factual NotesClaims verified against the Diplo knowledge base (4)
Confirmedhigh

“Estelle David of Business France opened the AI Impact Summit, noting that roughly one hundred French companies were present across sectors such as quantum‑ready photonics, secure edge AI, mobility systems, cybersecurity, digital twins and green‑tech.”

The knowledge base states that Estelle David opened the summit by showcasing a French AI delegation of about 100 companies across sectors like quantum computing, cybersecurity and green tech, confirming the reported figure and sector breadth.

Confirmedmedium

“A partnership between H‑Company and St James Hospital in Bangalore was signed during the summit, and a collaboration between North France Invest and the TIAB was also announced.”

Source [S6] explicitly mentions the signature between H-Company and St James Hospital and the partnership between North France Invest and the TIAB, confirming these specific agreements.

Additional Contextmedium

“France now ranks among the world’s top three AI ecosystems (San Francisco, New York and Paris).”

While the ranking is not verified in the knowledge base, the source provides context that France hosts more than 1,100 AI startups and is actively doubling the number of AI scientists and engineers, underscoring its strong AI ecosystem.

Additional Contextlow

“India trains hundreds of thousands of AI engineers each year, giving it the second‑largest developer community in the world.”

Source [S118] reports that India produces about 500,000 AI engineers annually, confirming the scale of India’s AI talent pool referenced in the broader discussion of AI ecosystems.

External Sources (129)
S1
Scaling Trusted AI_ How France and India Are Building Industrial & Innovation Bridges — The summit’s opening presentations by Estelle David from Business France (the trade and investment agency) and Julie Hug…
S2
Announcement of New Delhi Frontier AI Commitments — -David: Role/Title: Not specified (invited as distinguished leader of organization), Area of expertise: Not specified
S3
Meta’s AI research VP Joelle Pineau announces departure — Joelle Pineau, the Vice President of AI research at Meta,announcedshe will be leaving the company by the end of May, aft…
S4
Scaling Trusted AI_ How France and India Are Building Industrial & Innovation Bridges — – Amit Sheth- Joelle Pineau – Joelle Pineau- Audience
S5
Scaling Trusted AI_ How France and India Are Building Industrial & Innovation Bridges — I’m still in my… So thank you, thank you to all of you. Now we are actually arriving to today’s session where we are g…
S6
https://dig.watch/event/india-ai-impact-summit-2026/scaling-trusted-ai_-how-france-and-india-are-building-industrial-innovation-bridges — I’m still in my… So thank you, thank you to all of you. Now we are actually arriving to today’s session where we are g…
S7
https://dig.watch/event/india-ai-impact-summit-2026/scaling-trusted-ai_-how-france-and-india-are-building-industrial-innovation-bridges — I’m still in my… So thank you, thank you to all of you. Now we are actually arriving to today’s session where we are g…
S8
Scaling Trusted AI_ How France and India Are Building Industrial & Innovation Bridges — -Tanuj Mittal- Senior Director Customer Solution Experience, Dassault Systèmes
S9
Scaling Trusted AI_ How France and India Are Building Industrial & Innovation Bridges — -Valerian Giesz- Co-Founder and CEO of Candela (quantum computing company)
S10
Scaling Trusted AI_ How France and India Are Building Industrial & Innovation Bridges — -Antoine Petit- CEO and Chairman, CNRS France (Centre National de la Recherche Scientifique)
S11
Scaling Trusted AI_ How France and India Are Building Industrial & Innovation Bridges — -Raj Reddy- Professor, founding director of the Robotics Institute at Carnegie Mellon University, 1994 Turing Award winn…
S12
Scaling Trusted AI_ How France and India Are Building Industrial & Innovation Bridges — -Julie Huguet- Director of LaFrenchTech Mission, supports growth of French startups in France and abroad
S14
Survival Tech Harnessing AI to Manage Global Climate Extremes — -Professor Seth- Referenced in transcript but appears to be referring to Amit Sheth
S15
Scaling Trusted AI_ How France and India Are Building Industrial & Innovation Bridges — -David Sadek- VP Research Technology and Innovation Global CTUI and Quantum Computing, Thales
S16
https://dig.watch/event/india-ai-impact-summit-2026/scaling-trusted-ai_-how-france-and-india-are-building-industrial-innovation-bridges — I’m still in my… So thank you, thank you to all of you. Now we are actually arriving to today’s session where we are g…
S17
The Future of AI in the Judiciary: Launch of the UNESCO Guidelines for the use of AI Systems in the Judiciary — Dr. Irakli Beridze:Yeah, very quick, 15 seconds. I have two basically comments. One is that it became obvious that, I me…
S18
AI for Good Impact Awards — LJ Rich: It’s a real pleasure to hear from somebody who is behind so much innovation for young people, and I think that …
S20
WS #280 the DNS Trust Horizon Safeguarding Digital Identity — – **Audience** – Individual from Senegal named Yuv (role/title not specified)
S21
Building the Workforce_ AI for Viksit Bharat 2047 — -Audience- Role/Title: Professor Charu from Indian Institute of Public Administration (one identified audience member), …
S22
Nri Collaborative Session Navigating Global Cyber Threats Via Local Practices — – **Audience** – Dr. Nazar (specific role/title not clearly mentioned)
S23
Scaling Trusted AI_ How France and India Are Building Industrial & Innovation Bridges — -Arun Sasheesh- Associate Partner and Country Director, TNP Consultants; Panel moderator -Saloni- Session coordinator/m…
S24
Day 0 Event #82 Inclusive multistakeholderism: tackling Internet shutdowns — – Nikki Muscati: Audience member who asked questions (role/affiliation not specified)
S25
Scaling Trusted AI_ How France and India Are Building Industrial & Innovation Bridges — – Antoine Petit – Joelle Pineau – Abhay Karandikar – Raj Reddy – Irakli Beridze
S26
AI-Powered Chips and Skills Shaping Indias Next-Gen Workforce — -Ashwini Vaishnaw- Role/Title: Honorable Minister (appears to be instrumental in India’s semiconductor industry developm…
S27
Keynote-Olivier Blum — -Moderator: Role/Title: Conference Moderator; Area of Expertise: Not mentioned -Mr. Schneider: Role/Title: Not mentione…
S28
Keynote-Vinod Khosla — -Moderator: Role/Title: Moderator of the event; Area of Expertise: Not mentioned -Mr. Jeet Adani: Role/Title: Not menti…
S29
Day 0 Event #250 Building Trust and Combatting Fraud in the Internet Ecosystem — – **Frode Sørensen** – Role/Title: Online moderator, colleague of Johannes Vallesverd, Area of Expertise: Online session…
S30
Scaling Trusted AI_ How France and India Are Building Industrial & Innovation Bridges — – Arun Sasheesh- Tanuj Mittal- Neelakantan Venkataraman – Neelakantan Venkataraman- David Sadek- Valerian Giesz
S31
Is Geopolitical ‘Coopetition’ Possible? — There are major signs of cooperation especially in the medical area and space
S32
Bridging the AI innovation gap — Partnership and Collaboration
S33
UNSC meeting: Artificial intelligence, peace and security — France:- The United Nations- France and international partnerships- Individual countries France:Madam President, I than…
S34
India and France to strengthen digital partnerships — Indian Prime Minister Narendra Modi’s two-day visit to France, where he held discussions with French President Emmanuel …
S35
Inclusive AI_ Why Linguistic Diversity Matters — The France-India partnership exemplified how countries with complementary strengths can collaborate to enhance rather th…
S36
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Ebba Busch Deputy Prime Minister Sweden — She positioned the partnership as combining complementary strengths: India provides scale and speed (the engine), while …
S37
AI That Empowers Safety Growth and Social Inclusion in Action — And these assessments provide a kind of clear -eyed look at how regional landscapes can evolve, inviting us to move beyo…
S38
Conversational AI in low income & resource settings | IGF 2023 — Addressing healthcare inequity requires collaboration and the appropriate use of technology. Inequities exist not only a…
S39
Paris competes for Europe’s AI leadership as major conference approaches — France is set to host tech executives and political figures this week, including former US Secretary of State John Kerry a…
S40
New report analyses GenAI startups in Europe and Israel — A report published by venture capital firm Accel shows the state of affairs of Europe and Israel’s generative AI (GenAI). …
S41
https://dig.watch/event/india-ai-impact-summit-2026/trusted-connections_-ethical-ai-in-telecom-6g-networks — And on the other side, you have a possibility of generating revenue by providing AI through the telecom network, which P…
S42
https://dig.watch/event/india-ai-impact-summit-2026/agents-of-change-ai-for-government-services-climate-resilience — Yeah, so one of the challenges in this is you can project it too much. It’s an exponential curve. It’s very hard to proj…
S43
Masterclass#1 — State limitations are underscored in the context of cyber threats. The norms, devised by the United Nations and other re…
S44
Defending Truth — What actions do stakeholders need to take to preserve a healthy trust ecosystem?
S45
Scaling AI for Billions_ Building Digital Public Infrastructure — “Because trust is starting to become measurable, right, through provenance, through authenticity, as well as verificatio…
S46
AI Policy Summit Opening Remarks: Discussion Report — The discussion identified several concrete commitments:
S48
Keynote Adresses at India AI Impact Summit 2026 — The discussion revealed significant financial commitments underpinning the partnership. Google announced substantial inv…
S49
AI Impact Summit 2026: Global Ministerial Discussions on Inclusive AI Development — “And the philosophy here is that AI is a tool which is helping the humankind to make a decision”[28]. “Trust is importan…
S50
AI Meets Agriculture Building Food Security and Climate Resilien — “AI must be transparent, auditable, and explainable”[96]. “Without trust, scale will not happen”[99]. “based on open sta…
S51
Multistakeholder Partnerships for Thriving AI Ecosystems — “And I would say it’s not an innovation gap, it’s a power gap.”[19]. “So all those things need framework and need govern…
S52
Open Forum #33 Building an International AI Cooperation Ecosystem — Participant: ≫ Distinguished guests, dear friends, it is a great honor to speak to you today on a topic that is reshapin…
S53
Welcome Address — “strong IT background, dynamic startup ecosystem, make India a natural hub for affordable, scalable, and secure AI solut…
S54
Free Science at Risk? / Davos 2025 — This panel discussion focused on the complex issue of research security and international collaboration in science. The …
S55
WS #462 Bridging the Compute Divide a Global Alliance for AI — The panel discussion revealed both the complexity of addressing global compute access challenges and the potential for m…
S56
How Small AI Solutions Are Creating Big Social Change — Low to moderate disagreement level. The speakers largely agreed on core principles (community-centered approach, partner…
S57
Global AI Policy Framework: International Cooperation and Historical Perspectives — The speakers demonstrate significant consensus on key principles including the need for inclusive governance, building o…
S58
Inclusive AI For A Better World, Through Cross-Cultural And Multi-Generational Dialogue — Larissa Zutter stands out as a senior AI policy advisor, closely studying the socio-economic implications of artificial …
S59
What policy levers can bridge the AI divide? — *Note: This summary is based on a transcript with significant audio quality issues, resulting in some unclear or fragmen…
S60
To share or not to share: the dilemma of open source vs. proprietary Large Language Models — Cost reduction in technology deployment Lama Impact Grants program Low cost due to the ability to build on existing mo…
S61
DeepSeek: Some trade-related aspects of the breakthrough  — Although so far proprietary models have predominated in the market, open source has been gaining traction, as noted by Y…
S62
US government seeks input on risks and benefits of Open AI models — The US Department of Commerce’s National Telecommunications and Information Administration (NTIA) is inviting comments on …
S63
Scaling Trusted AI_ How France and India Are Building Industrial & Innovation Bridges — Tension between open sourcing fundamental science models versus keeping commercially applicable models private
S64
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Panel Discussion — And questions of how we scale responsibly, how we engender trust in the technology, because in order for AI to be useful…
S65
Democratizing AI Building Trustworthy Systems for Everyone — I think thanks to the contributions from all of those experts. I truly think it is a testament to the industry that we a…
S66
The strategic imperative of open source AI — Meta’s Chief AI Scientist, Yann LeCun, captured this shift clearly. Responding to those who see DeepSeek’s rise as ‘Chin…
S67
Open Forum #27 Make Your AI Greener a Workshop on Sustainable AI Solutions — Adham Abouzied presented research showing that open source approaches significantly reduce both development costs and en…
S68
Why science metters in global AI governance — And also mentioned here. So this is where we are suggesting that this could be one way to look at. It’s not that everyth…
S69
AI Safety at the Global Level Insights from Digital Ministers Of — This identifies a critical gap in the science-to-policy pipeline – the need for translational work that converts scienti…
S70
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — ## Introduction and Context Setting Alex Moltzau: Yes, thank you so much. My name is Alex Maltzau. And I work as a seco…
S71
Laying the foundations for AI governance — ### Science-Based Policy as Common Ground Artemis Seaford: So the greatest obstacle, in my opinion, to translating AI g…
S72
Policy Network on Artificial Intelligence | IGF 2023 — Sarayu Natarajan advocates for a context-specific and rule of law approach in dealing with the issue of misinformation a…
S73
Artificial intelligence (AI) – UN Security Council — During the discussions, several key points emerged regarding the dual-edged nature of AI in this context. On one hand, A…
S74
WSIS Action Line C7 E-science: Assessment of progress made over the last 20 years — Such a strategy liberates editors from the financial pressures characteristic of commercial entities, allowing for a con…
S75
Science under siege from AI, integrity of research at risk — AI is rapidlytransformingthe landscape of scientific research, but not always for the better. A growing concern is the p…
S76
AI Policy Summit Opening Remarks: Discussion Report — The discussion identified several concrete commitments:
S77
Scaling Trusted AI_ How France and India Are Building Industrial & Innovation Bridges — The summit’s opening presentations by Estelle David from Business France (the trade and investment agency) and Julie Hug…
S79
Partnering on American AI Exports Powering the Future India AI Impact Summit 2026 — Industry representatives provided concrete examples of this collaboration in action. Sanjay Mehrotra from Micron describ…
S80
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Panel Discussion — And questions of how we scale responsibly, how we engender trust in the technology, because in order for AI to be useful…
S81
AI Impact Summit 2026: Global Ministerial Discussions on Inclusive AI Development — “And the philosophy here is that AI is a tool which is helping the humankind to make a decision”[28]. “Trust is importan…
S82
AI Meets Agriculture Building Food Security and Climate Resilien — But let me emphasize, AI is not a magic. As Honorable PM said in his inaugural session, AI must be built on trusted data…
S83
Multistakeholder Partnerships for Thriving AI Ecosystems — “And I would say it’s not an innovation gap, it’s a power gap.”[19]. “So all those things need framework and need govern…
S84
Welcome Address — “strong IT background, dynamic startup ecosystem, make India a natural hub for affordable, scalable, and secure AI solut…
S85
Scaling AI Beyond Pilots: A World Economic Forum Panel Discussion — All three industry leaders emphasized the need for collaborative, ecosystem-wide approaches rather than proprietary solu…
S86
Open Forum #33 Building an International AI Cooperation Ecosystem — **Sajid Rahman**, ICANN board member, emphasized that AI’s growth is “unprecedented compared to previous technological w…
S87
WS #462 Bridging the Compute Divide a Global Alliance for AI — The panel discussion revealed both the complexity of addressing global compute access challenges and the potential for m…
S88
What policy levers can bridge the AI divide? — Statement that audience will ‘really enjoy this next panel’ and emphasis on the distinguished nature of the guests
S89
Open Forum #30 High Level Review of AI Governance Including the Discussion — Juha Heikkila: Thank you Yoichi and thank you very much for this invitation. So I think it’s very useful to understand t…
S90
How Small AI Solutions Are Creating Big Social Change — Low to moderate disagreement level. The speakers largely agreed on core principles (community-centered approach, partner…
S91
Artificial intelligence (AI) – UN Security Council — The global focus on Artificial Intelligence (AI) capacity-building efforts has been a significant topic of discussion am…
S92
High-level AI Standards panel — Bilel Jamoussi: Thank you very much, Dr. Cho. Certainly, collaboration, inclusivity, and human-centered standards. Thank…
S93
Opening Remarks (50th IFDT) — The overall tone was formal yet warm and celebratory. Speakers expressed pride in the IFDT’s accomplishments and gratitu…
S94
Governments and Technical Community: A Successful Model of Multistakeholder Collaboration for Achieving the SDGs — The tone was consistently formal, diplomatic, and celebratory throughout the session. It maintained a positive, collabor…
S95
Partner2Connect High-Level Dialogue — The tone was consistently optimistic and collaborative throughout the discussion. It began with celebratory announcement…
S96
Opening of the session — Greece appreciates high-level discussions on cybersecurity, such as those initiated by the Republic of Korea. In an add…
S97
Opening Ceremony — The tone is consistently formal, diplomatic, and optimistic yet cautionary. Speakers maintain a celebratory atmosphere a…
S98
Secure Finance Risk-Based AI Policy for the Banking Sector — The discussion maintained a thoughtful, forward-looking tone throughout, characterized by cautious optimism about AI’s p…
S99
Critical Infrastructure in the Digital Age: From Deep Sea Cables to Orbital Satellites — The discussion maintained a balanced tone that was simultaneously informative and concerning. It began with an education…
S100
Panel 2 – Anticipating and Mitigating Risks Along the Global Subsea Network  — The discussion maintained a professional, collaborative tone throughout, with participants demonstrating technical exper…
S101
Strengthening Corporate Accountability on Inclusive, Trustworthy, and Rights-based Approach to Ethical Digital Transformation — The discussion maintained a professional, collaborative tone throughout, with speakers demonstrating expertise while ack…
S102
Global Perspectives on Openness and Trust in AI — These key comments fundamentally transformed what could have been a technical discussion about open-source AI into a sop…
S103
Science as a Growth Engine: Navigating the Funding and Translation Challenge — The discussion maintained a consistently thoughtful and collaborative tone throughout. While panelists acknowledged seri…
S104
Collaborative Innovation Ecosystem and Digital Transformation: Accelerating the Achievement of Global Sustainable Development Goals (SDGs) — The discussion maintained a consistently professional, collaborative, and optimistic tone throughout. Speakers demonstra…
S105
Centering People and Planet in the WSIS+20 and beyond — The session explored whether the WSIS vision remains relevant after 20 years and how to address persistent digital inequ…
S106
Trusted Connections_ Ethical AI in Telecom & 6G Networks — The discussion maintained a consistently optimistic and forward-looking tone throughout. Speakers expressed confidence i…
S107
Lift-off for Tech Interdependence? / DAVOS 2025 — The tone of the discussion was generally optimistic and excited about technological progress, while also acknowledging c…
S108
AI Development Beyond Scaling: Panel Discussion Report — The tone began as optimistic and technically focused, with researchers enthusiastically presenting their innovative appr…
S109
When Code and Creativity Collide: AI’s Transformation of Music and Creative Expression — The tone was thoughtful and forward-looking, with both speakers showing cautious optimism rather than fear. Harvey Mason…
S110
Safeguarding Children with Responsible AI — The discussion maintained a tone of “measured optimism” throughout. It began with urgency and concern (particularly in B…
S111
AI Governance Dialogue: Presidential address — The tone remained consistently optimistic and collaborative throughout both presentations. President Karis spoke with co…
S112
AI-Powered Chips and Skills Shaping Indias Next-Gen Workforce — The discussion maintained a consistently optimistic and collaborative tone throughout. Speakers expressed enthusiasm abo…
S113
DC-BAS: Blockchain Assurance for the Internet We Want and Can Trust — The overall tone was optimistic and forward-looking. Speakers were enthusiastic about the potential of these technologie…
S114
AI for Good Technology That Empowers People — The tone was consistently optimistic and collaborative throughout, with speakers demonstrating genuine enthusiasm for so…
S115
Discussion Report: AI Implementation and Global Accessibility — The tone was consistently optimistic and collaborative throughout the conversation. Both speakers maintained a construct…
S116
AI, Data Governance, and Innovation for Development — The tone of the discussion was largely optimistic and solution-oriented. Speakers acknowledged significant challenges bu…
S117
Keynote-HE Emmanuel Macron — Artificial intelligence Reference to previous address by Antonio Guterres; formal titles and protocol; mention of the A…
S118
https://dig.watch/event/india-ai-impact-summit-2026/keynote-he-emmanuel-macron — India trains hundreds of thousands of AI engineers every year. With 500,000 engineers, India has the second largest dev…
S119
Closure of the session — France is spearheading a significant international initiative to develop an action-oriented, state-driven mechanism to b…
S120
Final Report — ITU’s 25th Anniversary celebrations were graciously supported by the Kingdom of Saudi Arabia (Platinum spon…
S121
High-Level Track Facilitators Summary and Certificates — She emphasizes the importance of partnerships and acknowledges various stakeholders including UN partners, co-organizers…
S122
The WSIS welcome Part I: Meet the Movers Behind It — Tomas Lamanauskas:So thank you very, very much, Rob. So let’s give a round of applause of all our partners. And indeed y…
S123
Agenda item 6 — Gratitude extended to sponsors supporting women in cybersecurity. In closing, acknowledgment was given to international…
S124
Economic Diplomacy: India’s Experience — 2 This was affirmed by the ambassadors and high commissioners of France, Germany, Singapore and the UK at meetings held …
S125
https://dig.watch/event/india-ai-impact-summit-2026/founders-adda-raw-conversations-with-indias-top-ai-pioneers — So for example, anything and everything that is required we are basically making the entire suite of the… automation l…
S126
Space for Sustainable Development — In a high-level dialogue on “space for sustainable development,” with a particular focus on connectivity, six distinguis…
S127
Space Diplomacy: Exploring New Opportunities – ADF 2024 — The International Space Station (ISS) is a prime example of international cooperation in space. The forum on space and …
S129
Opening of the session — France: Thank you, Mr. Chair. My delegation aligns itself with the statement delivered by the European Union, and we w…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Estelle David
3 arguments · 118 words per minute · 742 words · 374 seconds
Argument 1
Partnership deals across AI, space, and healthcare illustrate deep cooperation (Estelle David)
EXPLANATION
Estelle highlighted a series of signed agreements made during the summit that span artificial intelligence, satellite propulsion, and hospital collaborations, demonstrating concrete outcomes of Franco‑Indian cooperation. These deals show that the partnership goes beyond rhetoric to real joint projects and investments.
EVIDENCE
She listed a strategic AI partnership between Dacia Technology and GT Solved, a major contract between ExoTrail and Druva Space for 14 satellite propulsion systems, a collaboration between H-Company and St John’s Hospital, as well as joint initiatives between North France Invest and TIAB and the T-U-B partnership, all signed during the event [8-13].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The summit highlighted multiple signed agreements in AI, satellite propulsion and healthcare, confirming deep Franco-Indian cooperation [S1]; medical and space collaborations were specifically noted as major signs of partnership [S31]; and a broader digital partnership agenda was outlined in the India-France agreement [S34].
MAJOR DISCUSSION POINT
Franco‑Indian partnership deals
Argument 2
Broad participation of French AI companies across diverse sectors demonstrates the depth and breadth of France’s AI ecosystem.
EXPLANATION
Estelle notes that around one hundred French companies attended the summit, covering areas such as quantum‑ready photonics, secure edge AI, mobility systems, cybersecurity, digital twins and green technologies, illustrating the wide‑ranging expertise within France.
EVIDENCE
She states that “Altogether, it was about 100 French companies … you can find in different sectors like quantum-ready photonics, secure edge AI, mobility systems, cybersecurity, digital twin, and green tech” [4-5].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Estelle’s opening remarks described a delegation of about 100 French firms spanning quantum-ready photonics, secure edge AI, mobility, cybersecurity, digital twins and green tech, illustrating sectoral breadth [S1].
MAJOR DISCUSSION POINT
Diverse French AI sector representation
Argument 3
Collaboration with Business France and partner networks is crucial for mobilising French AI champions in India.
EXPLANATION
Estelle credits the collective network of Business France, LaFrenchTech, Numium and other partners for enabling the successful presence of French startups at the summit, highlighting the importance of coordinated institutional support.
EVIDENCE
She acknowledges “the strength of our collective network and Business France … we have collaborated very closely with different partners with definitely LaFrenchTech and … Numium … the co-organiser of this event, the Franco-Thai Chamber of Commerce, Indo-French Chamber of Commerce, IFKI” [14].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The event’s description emphasizes the role of Business France together with LaFrenchTech, Numium and chambers of commerce in enabling French startups’ presence [S1].
MAJOR DISCUSSION POINT
Role of institutional networks in AI partnership
Julie Huguet
4 arguments · 128 words per minute · 624 words · 291 seconds
Argument 1
Complementary strengths—French deep‑tech excellence and Indian scale—fuel shared‑value partnerships (Julie Huguet)
EXPLANATION
Julie argued that France’s deep‑tech capabilities combined with India’s massive market and engineering capacity create a powerful synergy for AI collaboration. This complementarity enables joint innovation, investment attraction, and the scaling of French startups in India.
EVIDENCE
She cited France’s ranking among the top three global AI ecosystems, the presence of leading French AI firms such as Mistral AI and H-Company, the French President’s announcement of a hospital-AI partnership, and India’s scale of 1.4 billion people and 200,000 startups, emphasizing the powerful complementarity of French expertise and Indian scale [39-65].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Analyses of the France-India AI partnership stress the complementary nature of French deep-tech and India’s massive market and engineering capacity [S35]; a similar view is expressed in a keynote on trusted AI that highlights India’s scale as the “engine” and France’s precision as the “filter” [S36].
MAJOR DISCUSSION POINT
French‑Indian complementary strengths
Argument 2
AI drives innovation in healthcare, agriculture, climate, grounded in shared values (Julie Huguet)
EXPLANATION
Julie emphasized that AI is being applied to critical sectors such as health, agriculture and climate, reflecting shared values of trust, low environmental footprint and positive societal impact. She presented concrete examples of French‑Indian collaborations that aim to improve lives and the planet.
EVIDENCE
She mentioned the French President’s announcement of a partnership between H-Company and St John’s Hospital to make hospitals more efficient, and described AI-driven initiatives in healthcare, agriculture and climate that embody common values of trust and sustainability [46-52].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Case studies on AI-enabled healthcare in low-resource settings illustrate how AI is used to improve health outcomes while respecting equity and sustainability values [S38].
MAJOR DISCUSSION POINT
AI for societal impact
Argument 3
France has risen to become one of the world’s top three AI ecosystems, underscoring its growing global influence.
EXPLANATION
Julie cites a ranking that places Paris alongside San Francisco and New York as a leading AI hub, signalling France’s emergence as a major player in AI research and industry.
EVIDENCE
She reports that “according to Deal Room, the top three AI ecosystems globally are now San Francisco, New York, and Paris” [39-40].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Reports on Paris’s AI leadership place it alongside San Francisco and New York as a top global AI hub [S39]; a separate analysis of European generative-AI startups also highlights France’s prominent position [S40].
MAJOR DISCUSSION POINT
France’s global AI standing
Argument 4
Key French AI leaders such as Mistral AI and H‑Company exemplify the country’s deep‑tech strength and ambition.
EXPLANATION
Julie mentions prominent French AI firms to illustrate the nation’s capacity for cutting‑edge AI development and its ambition to lead in the field.
EVIDENCE
She notes “We already have major European leaders such as Mistral AI or H-Company” [42].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Coverage of France’s AI ecosystem frequently cites Mistral AI and H-Company as flagship deep-tech firms driving innovation [S39].
MAJOR DISCUSSION POINT
Prominent French AI companies
Neelakantan Venkataraman
2 arguments · 156 words per minute · 1114 words · 428 seconds
Argument 1
Trust must be baked into every layer of the stack and meet regulatory standards (Neelakantan Venkataraman)
EXPLANATION
Neelakantan explained that trust cannot be an afterthought; it must be embedded at each architectural layer of AI systems and comply with regulations such as India’s DPDP and the EU AI Act. This foundational trust is essential for moving from pilots to production at scale.
EVIDENCE
He described trust as “I have your back and I will not fail you”, insisting it be built into the stack, data lineage, explainability, auditability and zero-trust networking, and noted that regulatory guidance has shifted from soft guidance to concrete policies like DPDP and the EU AI Act [130-143].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Recent discussions on trustworthy AI stress that trust is now measurable through provenance, authenticity and verification, and must be embedded across the stack to satisfy regulations such as DPDP and the EU AI Act [S45]; broader trust-ecosystem frameworks also underline this requirement [S44].
MAJOR DISCUSSION POINT
Embedded trust in AI architecture
DISAGREED WITH
Arun Sasheesh, David Sadek, Tanuj Mittal
Argument 2
An ecosystem partnership model is needed to preserve trust across sectors (Neelakantan Venkataraman)
EXPLANATION
He argued that no single organization can ensure trust alone; a collaborative ecosystem of partners is required to maintain a consistent trust architecture across different domains. This ecosystem approach leverages joint security and compliance components.
EVIDENCE
He stated that “we can’t do it all” and highlighted partnerships such as with Thales for security components, emphasizing the need for an ecosystem to keep trust intact [253-257].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
A masterclass on AI governance advocates a collaborative ecosystem model for effective enforcement of norms and standards [S43]; trust-ecosystem literature further stresses the need for multi-partner arrangements to maintain consistent trust guarantees [S44].
MAJOR DISCUSSION POINT
Ecosystem mindset for trust
Valerian Giesz
2 arguments · 132 words per minute · 541 words · 244 seconds
Argument 1
Quantum‑AI trust rests on traceability, predictability, verifiability, security, and accountability (Valerian Giesz)
EXPLANATION
Valerian outlined five pillars that define trust for quantum‑AI systems: the ability to trace data and models, predict system limits, verify performance, ensure security, and maintain clear accountability across the value chain. These pillars are necessary to move quantum technologies from the lab to real‑world deployments.
EVIDENCE
He listed “trustability”, traceability, predictability, verifiability, security and accountability as essential, and described Candela’s MERLIN benchmarking framework that provides reproducible runs and performance validation [162-179].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Trust frameworks for emerging AI technologies identify traceability, predictability, verifiability, security and accountability as core pillars for moving quantum-AI from lab to production [S44].
MAJOR DISCUSSION POINT
Quantum‑AI trust pillars
Argument 2
Breaking walls between quantum and AI and sharing benchmarks creates a trustworthy community (Valerian Giesz)
EXPLANATION
Valerian advocated for dismantling silos between quantum computing and AI, proposing shared benchmarking tools to foster a common baseline and community trust. By releasing the MERLIN framework, Candela aims to establish reproducible standards that both French and Indian researchers can use.
EVIDENCE
He explained the release of MERLIN to benchmark quantum machine-learning applications, its use for reproducibility, and the goal of building a shared community between France and India [259-267].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The same trust-ecosystem literature calls for dismantling silos between quantum computing and AI, and for shared benchmarking tools to foster a common baseline and community trust [S44].
MAJOR DISCUSSION POINT
Collaboration between quantum and AI
David Sadek
3 arguments · 128 words per minute · 555 words · 258 seconds
Argument 1
Trust is demonstrated through friendly hacking, explainability, and ethical responsibility (David Sadek)
EXPLANATION
David described how Thales validates AI systems by actively attacking them (friendly hacking), ensuring they can explain their decisions, and embedding ethical and regulatory responsibilities. These practices turn trust from a promise into provable evidence.
EVIDENCE
He recounted a “friendly hacking” team that identified vulnerabilities, gave an example of a digital copilot needing to explain a maneuver, and highlighted responsibility through compliance with the EU AI Act, carbon-footprint reduction and AI-for-green initiatives, concluding that trust must be proved [188-197].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Best-practice guides for trustworthy AI highlight proactive “friendly hacking”, explainability and compliance with ethical and regulatory mandates as concrete ways to prove trustworthiness [S45].
MAJOR DISCUSSION POINT
Operational trust mechanisms
DISAGREED WITH
Arun Sasheesh, Neelakantan Venkataraman, Tanuj Mittal
Argument 2
Combining French depth with Indian speed provides the foundation for trusted AI (David Sadek)
EXPLANATION
David noted that France has spent decades building highly reliable, certified AI systems for critical sectors, while India has rapidly deployed digital infrastructure at massive scale. The synergy of French depth and Indian speed can overcome trust challenges for large‑scale AI adoption.
EVIDENCE
He contrasted France’s long-term certification and proof-based trust culture with India’s fast-moving digital infrastructure, arguing that their combination is essential for scaling AI responsibly [272-275].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Analyses of the France-India partnership note that France contributes deep, certified AI expertise while India offers rapid, large-scale deployment capabilities, a synergy likened to “precision” plus “scale” [S36]; complementary-strength discussions also underline this blend of depth and speed [S35].
MAJOR DISCUSSION POINT
France‑India complementary capabilities
Argument 3
Responsibility includes ethics, carbon‑footprint reduction, and proof‑based trust (David Sadek)
EXPLANATION
David emphasized that responsible AI must address ethical principles, minimize energy consumption, and provide demonstrable proof of trustworthiness. Initiatives such as frugal AI and AI‑for‑green illustrate how environmental stewardship is part of responsible AI.
EVIDENCE
He described efforts to reduce data volume for training, develop frugal AI, and apply AI to lower aircraft emissions, linking these actions to the broader responsibility agenda [194-198].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Responsible-AI reports stress the importance of ethical design, frugal AI and carbon-footprint reduction as integral to proof-based trust frameworks [S37]; trust measurement literature also links these dimensions to demonstrable trust metrics [S45].
MAJOR DISCUSSION POINT
Ethical and environmental responsibility
Sandeep Kumar Saxena
3 arguments · 142 words per minute · 687 words · 289 seconds
Argument 1
Leadership‑driven AI adoption and iterative learning build organisational trust (Sandeep Kumar Saxena)
EXPLANATION
Sandeep argued that AI adoption must start at the top, with leaders modelling AI‑enabled decision‑making, and that trust is built gradually through iterative learning and certification of staff. This top‑down approach creates a culture where AI is trusted and widely used.
EVIDENCE
He explained that he and his teams themselves use his AI-driven sales and forecasting tools, that every employee is AI-certified, and that trust grows over time through patient, continuous learning [215-218].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Trust-building studies emphasize top-down leadership, iterative learning and staff certification as key levers for cultivating organisational confidence in AI systems [S45].
MAJOR DISCUSSION POINT
Leadership and iterative trust building
Argument 2
Openness and adaptability are essential for embracing AI change (Sandeep Kumar Saxena)
EXPLANATION
He stressed that organisations need to be open‑minded and adaptable, learning from both French and Indian practices, to successfully integrate AI. Flexibility and willingness to change are key to staying competitive.
EVIDENCE
He noted the contrast between French scheduling and Indian flexibility, urging openness and adaptability, and later summed up with the phrase “just be open-minded and learn to adopt change” [65-68][277-279].
MAJOR DISCUSSION POINT
Adaptability for AI adoption
Argument 3
AI solutions for citizens—fraud detection, compliance, training, skilling—enhance public welfare (Sandeep Kumar Saxena)
EXPLANATION
Sandeep presented a portfolio of AI‑powered solutions aimed at everyday citizens, including fraud detection, compliance monitoring, and skill‑building tools, illustrating how AI can directly improve public services and safety.
EVIDENCE
He listed specific AI products such as fraud detection systems, compliance monitoring, training and skilling platforms that are being showcased at the summit for citizen-level impact [221-224].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Examples of AI-enabled public-service tools for fraud detection, compliance monitoring and skill-building illustrate how AI can directly improve citizen welfare, as discussed in healthcare-AI equity case studies [S38].
MAJOR DISCUSSION POINT
AI for public welfare
Arun Sasheesh
1 argument · 124 words per minute · 652 words · 312 seconds
Argument 1
Trust is the only way to achieve large‑scale AI adoption (Arun Sasheesh)
EXPLANATION
Arun asserted that without trust from corporations, banks and governments, AI cannot be deployed at the scale needed for societal transformation. Trust therefore becomes the prerequisite for any large‑scale AI rollout.
EVIDENCE
He linked the Indian public’s trust in UPI to its scaling, repeated that “trust is the only way to scale”, and emphasized that large organisations will adopt AI only when they trust it [86-94].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Trust-measurement frameworks argue that large-scale AI rollout depends on quantifiable trust signals such as provenance and verification, echoing the claim that trust is a prerequisite for scaling [S45]; ecosystem-trust literature reinforces this point [S44].
MAJOR DISCUSSION POINT
Trust as prerequisite for scale
DISAGREED WITH
Neelakantan Venkataraman, David Sadek, Tanuj Mittal
Tanuj Mittal
2 arguments · 134 words per minute · 745 words · 332 seconds
Argument 1
Trust evolves to data lineage, human‑in‑the‑loop oversight, simulation, and end‑to‑end validation (Tanuj Mittal)
EXPLANATION
Tanuj described how the notion of trust has progressed from simple accuracy to comprehensive data provenance, continuous human oversight, realistic simulation of AI outputs, and full lifecycle validation. These layers are now required for industrial AI acceptance.
EVIDENCE
He explained the shift from accuracy-only models to requirements for ethical data lineage, people-in-the-loop governance, virtual twin simulations, built-in compliance checks, and end-to-end validation before deployment [227-245].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Emerging AI governance models describe a progression from simple accuracy to full data lineage, human-in-the-loop governance, virtual-twin simulation and end-to-end validation as essential trust components [S45]; these align with the identified trust pillars [S44].
MAJOR DISCUSSION POINT
Evolution of trust in industrial AI
DISAGREED WITH
Arun Sasheesh, Neelakantan Venkataraman, David Sadek
Argument 2
Trust drives massive user adoption, as shown by UPI’s nationwide uptake (Tanuj Mittal)
EXPLANATION
He used India’s Unified Payments Interface (UPI) as a case study, showing that widespread public trust enabled billions of transactions and adoption even among digitally illiterate users, illustrating the link between trust and scale.
EVIDENCE
He cited UPI’s 21 billion transactions worth 30 lakh crore in a year and its use by even the most digitally illiterate citizens, arguing that trust directly fuels scale [281-283].
MAJOR DISCUSSION POINT
Trust leading to scale (UPI example)
Abhay Karandikar
1 argument · 123 words per minute · 858 words · 418 seconds
Argument 1
AI can compress decades of research, but equitable access and inclusion are critical (Abhay Karandikar)
EXPLANATION
Abhay highlighted AI’s potential to accelerate scientific discovery dramatically, yet warned that benefits must be shared globally to avoid widening the digital divide. Inclusive access to AI tools and data is essential for equitable progress.
EVIDENCE
He noted that AI can turn decades of research into months, but emphasized that many regions still face barriers to AI adoption, stressing the need for equitable distribution and inclusion [369-372].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Inclusive-AI analyses stress that while AI can accelerate scientific discovery, ensuring equitable access and preventing digital divides are essential for responsible deployment [S35].
MAJOR DISCUSSION POINT
AI acceleration vs. equitable access
Amit Sheth
2 arguments · 130 words per minute · 1046 words · 480 seconds
Argument 1
IRO builds high‑end talent and compact neurosymbolic models for domain‑specific breakthroughs (Amit Sheth)
EXPLANATION
Amit described the Indian Research Organization’s (IRO) strategy of cultivating top‑tier talent and developing compact, neurosymbolic AI models tailored to specific sectors such as healthcare, sustainability and pharma. This approach aims to produce high‑impact, domain‑focused breakthroughs.
EVIDENCE
He recounted IRO’s talent pipeline, collaborations with universities and industry, and the creation of compact neurosymbolic models for pharma, citing examples like BenevolentAI’s FDA-approved arthritis drug developed via knowledge graphs [386-440][561-573].
MAJOR DISCUSSION POINT
Talent and neurosymbolic AI for breakthroughs
Argument 2
IRO develops open knowledge graphs and custom models for pharma, emphasizing openness (Amit Sheth)
EXPLANATION
Amit emphasized IRO’s commitment to open science by building a large pharma knowledge graph and training proprietary models that remain open, fostering transparency and collaboration in drug discovery.
EVIDENCE
He stated that IRO is creating its own pharma knowledge graph and will train a custom model for drug discovery, underscoring the open-source ethos [630-633].
MAJOR DISCUSSION POINT
Open knowledge graphs for pharma
Antoine Petit
2 arguments · 135 words per minute · 1028 words · 456 seconds
Argument 1
CNRS’s AI‑for‑Science virtual centre promotes interdisciplinary cooperation and warns of AI‑generated false papers (Antoine Petit)
EXPLANATION
Antoine explained that CNRS has launched a virtual AI‑for‑Science centre to foster collaboration across disciplines, but cautioned that AI‑generated papers risk polluting scientific literature if not properly vetted.
EVIDENCE
He described the virtual centre’s role in linking AI producers and consumers, the need for interdisciplinary loops, and warned that AI can produce false papers that waste researchers’ time [444-484].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
UN discussions on AI and security highlight the risk of AI-generated false scientific papers and the need for interdisciplinary safeguards, mirroring the concerns raised about the CNRS virtual centre [S33].
MAJOR DISCUSSION POINT
AI‑for‑Science virtual centre & false paper risk
DISAGREED WITH
Other panelists (implicit)
Argument 2
AI‑generated false scientific papers pose risks, demanding ethical safeguards (Antoine Petit)
EXPLANATION
He reiterated the danger that AI‑generated manuscripts could undermine scientific integrity, calling for ethical safeguards and rigorous validation to prevent misinformation in academia.
EVIDENCE
He highlighted the specific risk that AI can generate large numbers of papers that may be incorrect, leading to wasted effort and potential misinformation [479-482].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The same UN-AI briefing underscores the potential for AI-generated misinformation in academia and calls for ethical safeguards and rigorous validation [S33].
MAJOR DISCUSSION POINT
Risk of AI‑generated false papers
Joelle Pineau
2 arguments · 171 words per minute · 836 words · 291 seconds
Argument 1
Reproducibility requires transparent artifact sharing and standardized evaluation criteria (Joelle Pineau)
EXPLANATION
Joelle argued that to ensure AI‑generated scientific results are trustworthy, researchers must make code, data and models publicly available and agree on clear evaluation metrics. Transparency and standardized benchmarks are essential for reproducibility.
EVIDENCE
She discussed her work on reproducibility challenges, emphasizing the need for publicly available artifacts and well-defined evaluation criteria to enable reliable replication of AI research [548-558].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Trust-ecosystem literature stresses that reproducibility hinges on open sharing of code, data and models together with clear evaluation metrics [S44].
MAJOR DISCUSSION POINT
Transparency and evaluation for reproducibility
DISAGREED WITH
Antoine Petit
Argument 2
Open‑sourcing large models accelerates progress despite industry resistance (Joelle Pineau)
EXPLANATION
Joelle highlighted that releasing large language models to the public, as she did with the Llama series, can dramatically increase adoption and scientific progress, even though many industry players oppose open‑sourcing.
EVIDENCE
She recounted the open-source release of Llama 1-3, noting over three billion downloads and arguing that openness benefits scientific research despite resistance from commercial entities [618-628].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Analyses of European generative-AI startups note that open releases of foundational models drive rapid adoption and scientific progress, even as commercial entities resist openness [S40].
MAJOR DISCUSSION POINT
Open‑source large models
DISAGREED WITH
Audience
Audience
1 argument · 166 words per minute · 158 words · 56 seconds
Argument 1
Trend: open scientific foundation models versus closed commercial fine‑tuned models (Audience)
EXPLANATION
An audience member observed a growing pattern where foundational AI models are released openly for research, while versions fine‑tuned for commercial applications remain proprietary, raising concerns about accessibility and equity.
EVIDENCE
The question referenced the release of foundational models such as Google's Alpha-Volume into the public domain, in contrast with commercial fine-tuned versions kept private, and asked whether this trend will continue and what its implications are [608-617].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Recent reports on GenAI startups observe a growing pattern where foundational models are released openly for research while fine-tuned commercial versions remain proprietary [S40].
MAJOR DISCUSSION POINT
Open vs. proprietary model trend
DISAGREED WITH
Joelle Pineau
Irakli Beridze
1 argument · 162 words per minute · 1140 words · 421 seconds
Argument 1
UN‑backed toolkit offers responsible AI guidelines for law enforcement and tackles the digital divide (Irakli Beridze)
EXPLANATION
Irakli described the United Nations’ development of a practical framework and guidelines for the responsible use of AI in law enforcement, which has already been piloted in several countries including India, aiming to bridge the digital divide and ensure ethical AI deployment.
EVIDENCE
He explained UNICRI’s mandate, the creation of toolkits for responsible AI, implementation in five countries (India, Kazakhstan, Nigeria, Oman, Brazil), and recent policy recommendations to improve public trust in AI-enabled law enforcement [507-541][637-642].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
UN-led initiatives on responsible AI for law enforcement have produced practical toolkits piloted in several countries, aiming to bridge the digital divide and ensure ethical deployment [S33].
MAJOR DISCUSSION POINT
UN responsible AI toolkit for law enforcement
Raj Reddy
1 argument · 113 words per minute · 950 words · 502 seconds
Argument 1
Multilingual AGI, personal sovereign edge models, and humane weapons aim to benefit the bottom of the pyramid (Raj Reddy)
EXPLANATION
Raj called for measurable progress toward multilingual AI assistants that work in local languages, personal edge AI models that preserve privacy, and the development of humane AI‑enabled weapons that protect civilians, all targeted at improving lives of the most vulnerable.
EVIDENCE
He cited startups working on multilingual interfaces, the need for a quantitative matrix to assess progress, the vision of personal sovereign edge models that keep data private, and the concept of humane weapons that disable rather than destroy targets, emphasizing benefits for the poorest [296-304][309-324][342-346].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Inclusive-AI research highlights the importance of multilingual AI assistants for low-resource languages and edge-centric models that preserve privacy, aligning with the vision of bottom-of-the-pyramid impact [S35]; discussions on AI-enabled edge solutions emphasize the “engine-filter” analogy of Indian scale and French precision [S36].
MAJOR DISCUSSION POINT
Inclusive, ethical AI for the underserved
Moderator
3 arguments · 39 words per minute · 525 words · 805 seconds
Argument 1
LaFrenchTech is a leading European innovation ecosystem that represents thousands of deep‑tech companies and scale‑ups, making it pivotal for Europe’s technological leadership.
EXPLANATION
The moderator highlights Julie Huguet’s role as director of LaFrenchTech and emphasizes that the organisation brings together a vast number of deep‑tech firms, positioning Europe at the forefront of technology development.
EVIDENCE
During the opening, the moderator introduces Julie Huguet as director of the French Tech mission and notes that she leads “one of the world’s most dynamic innovation ecosystems … representing thousands of deep-tech companies and scale-ups shaping Europe’s technological leadership” [31].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Coverage of Paris’s AI leadership notes that LaFrenchTech aggregates thousands of deep-tech firms and scale-ups, positioning Europe at the forefront of technological development [S39]; broader European AI ecosystem analyses also underline France’s central role [S40].
MAJOR DISCUSSION POINT
Importance of LaFrenchTech ecosystem
Argument 2
A high‑level, cross‑sector panel is essential for France and India to jointly accelerate trusted AI across multiple domains.
EXPLANATION
The moderator frames the upcoming panel as a platform where leaders from telecom, quantum, industrial AI, cloud infrastructure and digital transformation will discuss how the two countries can work together to build trust in AI systems.
EVIDENCE
The moderator announces the panel’s purpose: “to reflect on how our two countries can jointly accelerate trusted AI across sectors” and lists the sectors that will be covered, such as telecom, quantum and industrial AI [74-75].
MAJOR DISCUSSION POINT
Joint acceleration of trusted AI
Argument 3
The AI for Science panel brings together a diverse set of international experts to explore AI’s role in accelerating scientific discovery and fostering global cooperation.
EXPLANATION
By introducing the AI for Science session and its distinguished panelists, the moderator underscores the significance of AI as a tool for scientific research and the need for collaborative, cross‑national efforts.
EVIDENCE
The moderator announces the next session, describing it as “a panel discussion on AI for science” and lists the expert panelists, emphasizing the importance of AI in scientific advancement and international collaboration [351-358].
MAJOR DISCUSSION POINT
AI for scientific acceleration and cooperation
Agreements
Agreement Points
Similar Viewpoints
Unexpected Consensus
Differences
Different Viewpoints
Open‑source foundational models versus proprietary fine‑tuned commercial models
Speakers: Audience, Joelle Pineau
Trend: open scientific foundation models versus closed commercial fine‑tuned models (Audience)
Open‑sourcing large models accelerates progress despite industry resistance (Joelle Pineau)
An audience member warned that while foundational AI models are increasingly released openly, the versions that are fine-tuned for commercial use remain proprietary, raising concerns about accessibility and equity [608-617]. Joelle responded that open-sourcing large language models (e.g., the LAMA series) dramatically increases adoption and scientific progress, even though many industry players oppose it, arguing that openness benefits the whole community [618-628]. The two positions clash over whether openness should be the default for both foundational and commercial AI assets.
POLICY CONTEXT (KNOWLEDGE BASE)
Policy discussions highlight the trade-off between openness, cost reduction, and transparency versus commercial control; governments (e.g., NTIA) are soliciting input on risks and benefits of open foundation models, while industry leaders argue open-source models are surpassing proprietary ones [S60][S62][S63][S66].
How to achieve trustworthy AI at scale
Speakers: Arun Sasheesh, Neelakantan Venkataraman, David Sadek, Tanuj Mittal
Trust is the only way to achieve large‑scale AI adoption (Arun Sasheesh)
Trust must be baked into every layer of the stack and meet regulatory standards (Neelakantan Venkataraman)
Trust is demonstrated through friendly hacking, explainability, and ethical responsibility (David Sadek)
Trust evolves to data lineage, human‑in‑the‑loop oversight, simulation, and end‑to‑end validation (Tanuj Mittal)
Arun argued that without trust, AI cannot scale, positioning trust as the prerequisite for any large-scale rollout and citing the UPI example as proof of trust-driven scaling [86-94]. Neelakantan emphasized that trust must be embedded architecturally across all layers and aligned with regulations such as DPDP and the EU AI Act, treating it as a technical and compliance requirement [130-143]. David described operational trust as proof-based, using friendly-hacking exercises, explainability, and responsibility (including carbon-footprint reduction) to demonstrate trustworthiness [188-197]. Tanuj traced the evolution of trust from simple accuracy to comprehensive data lineage, human oversight, virtual-twin simulation, and full lifecycle validation before deployment [227-245]. While all agree trust is essential, they disagree on the primary mechanism: cultural prerequisite, architectural embedding, proof-based testing, or lifecycle governance.
POLICY CONTEXT (KNOWLEDGE BASE)
Building trusted AI at scale is framed as a cornerstone for responsible deployment, with panels emphasizing digital sovereignty, scaling responsibly, and multi-stakeholder collaboration to engender trust in AI systems [S64][S65][S63].
Managing AI‑generated scientific outputs: risk of false papers versus reproducibility solutions
Speakers: Antoine Petit, Joelle Pineau
CNRS’s AI‑for‑Science virtual centre promotes interdisciplinary cooperation and warns of AI‑generated false papers (Antoine Petit)
Reproducibility requires transparent artifact sharing and standardized evaluation criteria (Joelle Pineau)
Antoine highlighted that AI can produce large numbers of scientific papers, many of which may be incorrect, risking wasted effort and misinformation in the literature [479-482]. Joelle countered that reproducibility practices – open sharing of code and data, together with clear evaluation metrics – can mitigate such risks and actually accelerate trustworthy scientific discovery [548-558]. The disagreement lies in whether the primary concern is the prevalence of false outputs or the establishment of transparent, standardized processes to ensure reliability.
POLICY CONTEXT (KNOWLEDGE BASE)
Reports warn that AI-generated errors threaten research integrity, prompting calls for reproducibility frameworks and e-science strategies to safeguard scientific outputs [S75][S74][S68][S73].
Whether a dedicated AI‑for‑Science platform is needed
Speakers: Antoine Petit, Other panelists (implicit)
CNRS’s AI‑for‑Science virtual centre promotes interdisciplinary cooperation and warns of AI‑generated false papers (Antoine Petit)
Discussion on building a platform was left open, with no consensus reached (implicit from panel flow)
When asked if a dedicated AI-for-Science platform is required, Antoine expressed uncertainty, noting that while cooperation is essential, he was not convinced a single platform is the answer [471-473]. Other speakers (e.g., David, Joelle) discussed tools and frameworks but did not commit to a unified platform, indicating a lack of agreement on the structural solution for AI-driven scientific research.
POLICY CONTEXT (KNOWLEDGE BASE)
The WSIS Action Line on e-science advocates dedicated infrastructure to support reproducibility and reduce commercial pressures, suggesting a policy rationale for a specialized AI-for-Science platform [S74][S68].
Unexpected Differences
Open‑source versus proprietary AI models in the context of scientific research
Speakers: Audience, Joelle Pineau
Trend: open scientific foundation models versus closed commercial fine‑tuned models (Audience)
Open‑sourcing large models accelerates progress despite industry resistance (Joelle Pineau)
The audience’s concern that commercial fine‑tuned models will remain closed, limiting equitable access, was not directly addressed by other panelists and contrasts with Joelle’s strong advocacy for open‑sourcing large models. This divergence was unexpected because most participants focused on trust, governance, or collaboration rather than the openness of model releases.
POLICY CONTEXT (KNOWLEDGE BASE)
The open-source vs. proprietary debate extends to scientific research, where openness is linked to reproducibility and transparency, while proprietary models raise concerns about control and dual-use risks [S60][S63][S66][S74].
Severity of AI‑generated false scientific papers
Speakers: Antoine Petit, Joelle Pineau
CNRS’s AI‑for‑Science virtual centre warns of AI‑generated false papers (Antoine Petit)
Reproducibility requires transparent artifact sharing and standardized evaluation criteria (Joelle Pineau)
Antoine emphasizes the risk that AI‑generated papers could flood the literature with incorrect results, a problem he frames as a major threat. Joelle, while acknowledging reproducibility challenges, focuses on solutions (transparency, benchmarks) and does not treat the risk as a crisis. The difference in perceived severity and priority of the issue was not anticipated given the overall collaborative tone of the summit.
POLICY CONTEXT (KNOWLEDGE BASE)
Analyses of AI safety and research integrity identify AI-generated fake papers as a high-severity threat, calling for governance measures to mitigate misinformation in the scholarly record [S75][S73][S72].
Overall Assessment

The panelists largely converged on the importance of trust, collaboration, and the complementary strengths of France and India for AI advancement. Disagreements centered on the mechanisms to achieve trustworthy AI (cultural prerequisite vs. architectural embedding vs. proof‑based testing), the openness of AI models (open‑source versus proprietary), and the handling of AI‑generated scientific outputs (risk of false papers versus reproducibility frameworks). These divergences are substantive but not antagonistic, reflecting different professional lenses (policy, engineering, research) rather than fundamental conflict.

Moderate – while there is clear consensus on high‑level goals (trusted AI, France‑India partnership, societal impact), the speakers differ on implementation pathways and policy nuances. The implications are that coordinated action will require reconciling these approaches—e.g., integrating regulatory compliance, technical safeguards, open‑source incentives, and reproducibility standards—to build a unified, trustworthy AI ecosystem across both nations.

Partial Agreements
All these speakers share the overarching goal of building trustworthy AI that can be deployed at national or global scale and of strengthening France‑India collaboration. However, they diverge on the pathways: Arun treats trust as the prerequisite for scaling; Neelakantan stresses architectural embedding and regulatory compliance; David focuses on proof‑based testing and ethical responsibility; Tanuj highlights data lineage and simulation; Julie and Estelle emphasize complementary national strengths and concrete partnership deals as the engine for trust‑enabled scaling. The consensus on the goal coexists with differing strategic emphases.
Speakers: Arun Sasheesh, Neelakantan Venkataraman, David Sadek, Tanuj Mittal, Julie Huguet, Estelle David
Trust is the only way to achieve large‑scale AI adoption (Arun Sasheesh)
Trust must be baked into every layer of the stack and meet regulatory standards (Neelakantan Venkataraman)
Trust is demonstrated through friendly hacking, explainability, and ethical responsibility (David Sadek)
Trust evolves to data lineage, human‑in‑the‑loop oversight, simulation, and end‑to‑end validation (Tanuj Mittal)
Complementary strengths—French deep‑tech excellence and Indian scale—fuel shared‑value partnerships (Julie Huguet)
Partnership deals across AI, space, and healthcare illustrate deep Franco‑Indian cooperation (Estelle David)
Both speakers agree that AI should serve societal needs and improve public welfare. Julie frames this through sector‑wide impact (health, agriculture, climate) and shared values, while Sandeep illustrates concrete citizen‑facing AI products (fraud detection, compliance, skilling). They differ in focus—strategic sectoral vision versus specific service‑level applications—but share the same overarching objective.
Speakers: Julie Huguet, Sandeep Kumar Saxena
AI drives innovation in healthcare, agriculture, climate, grounded in shared values (Julie Huguet)
AI solutions for citizens—fraud detection, compliance, training, skilling—enhance public welfare (Sandeep Kumar Saxena)
Takeaways
Key takeaways
Franco‑Indian AI collaboration is deepening, with concrete partnership deals in AI, space, healthcare and industry, leveraging French deep‑tech expertise and Indian scale and market reach.
Trust is identified as the essential prerequisite for scaling AI; it must be embedded at every layer of the stack, include data lineage, explainability, security and accountability, and be validated through regulatory compliance.
Different sectors (cloud, quantum, defense, industrial AI) converge on similar trust pillars – traceability, predictability, verifiability, security, human‑in‑the‑loop oversight and end‑to‑end validation.
An ecosystem mindset – partnership across companies, research institutes and governments – is required to democratise AI and preserve trust across borders.
AI can dramatically accelerate scientific discovery, but equitable access, reproducibility, transparent artifact sharing and standardized evaluation are critical to avoid a reproducibility crisis and the proliferation of AI‑generated false papers.
Open‑source scientific foundation models are advocated to accelerate progress, while acknowledging commercial fine‑tuned models may remain proprietary; openness is seen as a strategic choice rather than a requirement.
Ethical considerations (carbon footprint, responsible use, humane weapons, privacy‑preserving edge models) are integral to trustworthy AI and must be addressed through shared frameworks and guidelines.
AI for societal impact – multilingual AGI, personal sovereign edge models, AI‑driven solutions for healthcare, agriculture, fraud detection, skilling and climate – is emphasized as a way to benefit the bottom of the pyramid.
Resolutions and action items
Formal signing of multiple Franco‑Indian partnership agreements (e.g., Dacia Technology‑GT Solved, ExoTrail‑Druva Space, H‑Company‑St John’s Hospital, North France Invest‑TIAB, T‑U‑B) to develop joint AI, space and healthcare solutions.
Launch of the MERLIN benchmarking framework by Candela to create a shared baseline for quantum‑AI trust and reproducibility.
Business France, LaFrenchTech and partner organisations commit to continue facilitating ecosystem‑wide collaborations and to organise future matchmaking events.
IRO (Indian AI Research Organization) will build high‑end talent pipelines, develop compact neurosymbolic models for healthcare, sustainability and pharma, and create an open knowledge graph for drug discovery.
UNICRI’s responsible‑AI toolkit for law enforcement is being piloted in India (and four other countries) with a view to producing policy recommendations and reports.
Joint call for transparent artifact sharing and standardized evaluation criteria to improve reproducibility of AI‑generated scientific results (raised by Joelle Pineau).
Raj Reddy’s request for a quantitative, measurable matrix to track progress on multilingual AGI and personal sovereign edge models.
Unresolved issues
How to define and implement a universally accepted metric for progress in multilingual AGI and personal sovereign edge models.
Balancing open‑source release of scientific foundation models with commercial protection of fine‑tuned models – no consensus on policy or incentives.
Ensuring AI‑generated scientific papers are reliable and preventing a flood of false results; concrete standards or verification mechanisms remain undefined.
Bridging the digital divide so that AI benefits reach the bottom of the pyramid, especially in rural and low‑literacy populations.
Establishing global, harmonised guidelines for responsible AI that are adopted across jurisdictions; current guidelines are fragmented.
Operationalising end‑to‑end trust across heterogeneous ecosystems (cloud, edge, quantum, industrial) without a clear governance framework.
Suggested compromises
Combine French deep‑tech depth with Indian speed and market scale to jointly develop and scale trusted AI solutions.
Adopt an ecosystem partnership model in which each stakeholder contributes specific trust components, preserving overall system integrity.
Release open benchmarking tools (e.g., MERLIN) while allowing companies to keep proprietary fine‑tuned models for commercial use.
Implement human‑in‑the‑loop oversight for critical AI applications, acknowledging that full automation is not yet trustworthy.
Promote open‑source large models (as demonstrated by LLaMA) to counter industry resistance, while encouraging responsible commercial exploitation of derived solutions.
Thought Provoking Comments
Trust is the only way to scale. If you want large corporations, banks, governments to adopt AI, they need to trust us. And when we trust things, scale is possible – just look at how India accepted UPI.
Sets a foundational premise linking trust directly to the ability to achieve scale, framing the entire panel discussion around trust as a prerequisite rather than a side‑effect.
Established the central theme of the session, prompting each panelist to frame their perspectives on AI around trust. It shifted the conversation from generic AI benefits to a focused debate on how trust can be engineered and measured.
Speaker: Arun Sardesh (Moderator)
I would describe trust in a very simple word: I have your back and I will not fail you. Trust must be built at every layer – from data lineage, explainability, zero‑trust networking to end‑to‑end governance – it cannot be a bolt‑on.
Provides a concrete, multi‑layered definition of trust that moves the discussion from abstract values to specific technical and regulatory components.
Prompted other speakers to elaborate on technical implementations of trust (e.g., Valerian’s pillars, David’s friendly‑hacking). It deepened the technical depth of the dialogue and introduced regulatory context (DPDP, EU AI Act).
Speaker: Neelakantan Venkataraman (Tata Communications)
We see trust as five pillars: traceability, predictability (knowing limits), verifiability (benchmarking), security, and accountability (clear ownership). We released the MERLIN framework to benchmark quantum‑AI results and build a shared baseline.
Introduces a structured trust framework specific to quantum AI and announces a tangible tool (MERLIN) for community‑wide benchmarking, bridging theory and practice.
Shifted the conversation toward community building and standardisation, influencing later remarks about reproducibility (Joelle Pineau) and the need for shared baselines across France and India.
Speaker: Valerian Ghez (Candela)
Trust is not a label, it’s a proof. We do friendly‑hacking to find vulnerabilities, we ensure explainability for critical decisions, we pursue frugal AI to reduce carbon footprint, and we develop AI‑for‑green to optimise aircraft trajectories.
Frames trust as demonstrable evidence through concrete practices (security testing, explainability, sustainability), expanding the trust narrative beyond technical safeguards to ethical and environmental dimensions.
Added new dimensions (responsibility, sustainability) to the trust discussion, prompting others (e.g., Sandeep and Tanuj) to reference societal impact and scale, and reinforcing the idea that trust must be proven.
Speaker: David Sadek (Thales)
AI adoption must start at the top. I built AI‑driven sales, forecasting and analytics tools for myself, certified every team member, and we now offer ‘AI products made in India for India and the world’. Trust is built iteratively, not overnight.
Highlights leadership‑driven cultural change and the practical rollout of AI products, linking trust to internal adoption and user experience rather than just external compliance.
Broadened the conversation to include organizational change management, influencing Tanuj’s remarks on trust leading to mass adoption and reinforcing the theme that trust is cultivated over time.
Speaker: Sandeep Kumar Saxena (HCL Technologies)
UPI, launched in 2016, now handles 21 billion transactions a year, even for digitally illiterate users. Trust built the scale – if you build trust, scale follows automatically.
Provides a powerful, data‑driven illustration of trust translating into massive adoption, grounding the abstract trust‑scale link in a real Indian success story.
Reinforced Arun’s opening claim with empirical evidence, solidifying consensus that trust is the catalyst for scale and prompting other panelists to reference similar Indian examples.
Speaker: Tanuj Mittal (Dassault Systèmes)
We need a quantitative, measurable matrix for multilingual AGI. It's not enough to claim multilingual capability; we must measure progress. We must also create personal sovereign edge models to protect privacy, and consider humane weapons that disable rather than destroy.
Challenges the community to move from aspirational statements to measurable outcomes, introduces novel ethical considerations (humane weapons), and stresses privacy‑first edge AI.
Shifted the tone toward accountability and metrics, prompting later discussion on reproducibility (Joelle Pineau) and open‑source vs private models (Joelle and Amit). It added a forward‑looking, ethical dimension to the trust conversation.
Speaker: Raj Reddy (Professor, Carnegie Mellon University)
India is not a product nation; we lack global products despite strong talent. We must build high‑end research capacity, IP pipelines, and ecosystems that turn talent into globally competitive products.
Provides a candid critique of India’s innovation ecosystem, moving the dialogue from partnership to self‑sufficiency and product creation.
Prompted a shift toward discussing how to convert research into marketable products, influencing later remarks about building specific neurosymbolic models (Amit) and the need for ecosystem collaboration (Antoine Petit).
Speaker: Amit Sheth (Founder, IRO)
AI is not just an accelerator; it reverses the scientific method. We now ask for a material with desired properties and AI designs it. This requires new interdisciplinary cooperation and raises the risk of AI‑generated false papers.
Introduces a paradigm‑shifting view of AI as a tool that changes how science is conducted, while also warning about new risks (misinformation).
Expanded the discussion to the meta‑level of scientific methodology, leading to Joelle Pineau’s focus on reproducibility and the broader conversation about responsible AI in research.
Speaker: Antoine Petit (CNRS)
Reproducibility needs transparency and clear evaluation criteria. AI can actually accelerate reproducibility by making artifacts publicly available and running reproducibility challenges.
Addresses a core crisis in AI research, offering concrete solutions (transparency, evaluation) that tie back to trust as proof.
Provided actionable steps for the community, linking back to earlier trust frameworks and reinforcing the need for open standards, which later influenced the open‑source debate.
Speaker: Joelle Pineau (Chief AI Officer)
The UN has developed practical frameworks for responsible AI in law enforcement, now being piloted in India and other countries. Bridging the digital divide requires such shared guidelines and collaborative toolkits.
Highlights global governance efforts and concrete policy tools, emphasizing the role of international cooperation in building trust.
Shifted the conversation from corporate/technical trust to policy and global equity, reinforcing the summit’s theme of “AI for all” and supporting Amit’s call for broader ecosystem collaboration.
Speaker: Irakli Beridze (UNICRI)
Overall Assessment

The discussion coalesced around the central premise that trust is the prerequisite for AI scale. Arun’s opening claim framed trust as the linchpin, and each subsequent speaker deepened this premise from different angles—technical architecture (Neelakantan), quantum‑AI standards (Valerian), security and sustainability (David), organizational culture (Sandeep), real‑world Indian examples (Tanuj), measurable metrics and ethics (Raj Reddy), ecosystem productisation (Amit), paradigm‑shifting scientific methodology (Antoine), reproducibility practices (Joelle), and global governance (Irakli). These pivotal comments acted as turning points, steering the dialogue from abstract enthusiasm to concrete frameworks, metrics, and policy, and ultimately reinforced the summit’s goal of forging a trusted, scalable AI partnership between France and India.

Follow-up Questions
How can we create a multilingual AGI with measurable progress?
Establishing a multilingual artificial general intelligence with clear metrics is crucial for inclusive access and to evaluate real-world impact across diverse language communities.
Speaker: Prof. Raj Reddy
How can AI technologies be effectively delivered to people at the bottom of the socioeconomic pyramid, especially in rural areas?
Ensuring that AI benefits the most vulnerable populations addresses equity concerns and prevents a digital divide that could exacerbate existing inequalities.
Speaker: Prof. Raj Reddy
How can we develop personal, sovereign edge AI models that ensure privacy and operate offline from the cloud?
Personal, on‑device AI models protect user data and privacy, a prerequisite for widespread adoption in sensitive applications such as health and finance.
Speaker: Prof. Raj Reddy
How can we design humane AI‑powered weapons that disable rather than destroy, ensuring ethical use in conflict?
Exploring non‑lethal, AI‑driven defense systems aligns military technology with humanitarian principles and international law.
Speaker: Prof. Raj Reddy
What structural shifts are needed in national research and funding agencies to support interoperable AI scientific ecosystems beyond short‑term pilots?
Long‑term, interoperable ecosystems are essential for sustained AI research impact; identifying needed policy and funding reforms will guide future investments.
Speaker: Prof. Antoine Petit
Is there a need for a dedicated AI‑for‑Science mega‑platform/facility, and what would its scope be?
A centralized platform could provide shared compute, data, and standards, accelerating cross‑disciplinary AI research and reducing duplication of effort.
Speaker: Prof. Antoine Petit
What standards or methodologies are required to ensure AI‑generated scientific discoveries are reliable and reproducible?
Defining transparent evaluation and reproducibility protocols will build confidence in AI‑driven results and prevent the propagation of erroneous findings.
Speaker: Prof. Joelle Pineau
How can the AI community establish open‑source practices for foundational scientific models while balancing commercial interests?
Open‑source scientific models can accelerate innovation, but commercial incentives must be reconciled; guidelines are needed to navigate this tension.
Speaker: Prof. Joelle Pineau and Dr. Amit Sheth
How can global guidelines for responsible AI be harmonized across nations and sectors?
Consistent responsible‑AI frameworks are vital for cross‑border collaboration, trust, and preventing regulatory fragmentation.
Speaker: Mr. Irakli Beridze
How can we break down silos between quantum computing and AI to build a shared community and benchmarking framework?
Integrating quantum and AI research through common benchmarks (e.g., MERLIN) will foster reproducibility, accelerate progress, and create a unified ecosystem.
Speaker: Valerian Ghez
How can trust be operationalized across ecosystem partners (e.g., Tata and Thales) to maintain end‑to‑end governance?
Implementing consistent trust mechanisms across partners ensures data integrity, compliance, and reliable AI deployment at scale.
Speaker: Neelakantan Venkataraman
How can AI for law enforcement be implemented responsibly across diverse jurisdictions, ensuring public trust?
Developing adaptable toolkits and policy recommendations is essential to balance security benefits with civil liberties in varied legal contexts.
Speaker: Irakli Beridze
How can AI be leveraged to address climate resilience, agriculture, and energy challenges in countries with limited experimental facilities?
Applying AI to priority sectors can compensate for infrastructure gaps, but requires tailored models and collaborative frameworks to be effective.
Speaker: Prof. Antoine Petit (question posed by Abhay Karandikar)
How can the scientific community prevent the proliferation of false papers generated by AI and maintain research integrity?
Establishing validation mechanisms and ethical standards is critical to safeguard the credibility of AI‑augmented scientific publishing.
Speaker: Prof. Antoine Petit

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.