Scaling Trusted AI: How France and India Are Building Industrial & Innovation Bridges
20 Feb 2026 17:00h - 18:00h
Summary
The AI Impact Summit highlighted deepening Franco-Indian collaboration on artificial intelligence, with leaders from both countries convening to showcase joint initiatives. Estelle David noted that about one hundred French firms participated, spanning quantum-ready photonics, secure edge AI, mobility, cybersecurity, digital twins and green tech. She cited several concrete agreements, including a strategic partnership between Dacia Technology and GT Solved, a satellite-propulsion contract between ExoTrail and Druva Space, and a healthcare collaboration between H-Company and St John’s Hospital, that illustrate growing bilateral trust and investment [3-4][8-11][12-14].
Julie Huguet, director of LaFrenchTech, emphasized that France now ranks among the world’s top three AI ecosystems and that the summit serves to build bridges, share common values such as low environmental impact, and accelerate French startup growth. She cited the partnership between H-Company and St John’s Hospital, announced by President Macron, to improve hospital efficiency [39-44][50-51]. She also presented four French startups (Agri-Co, White Lab Genomics, Candela and Edge Company) as exemplars of technologies ready to benefit from India’s scale [54-58].
In the high-level panel, moderator Arun Sardesh framed trust as the prerequisite for AI scaling, arguing that large organisations will adopt AI only when they trust it [84-94]. Neelakantan Venkataraman defined trust as “having your back” and stressed that it must be embedded at every layer of the AI stack, from data lineage to compliance with regulations such as India’s DPDP and the EU AI Act [130-141]. Valerian Giesz (Candela) added that trust requires traceability, predictability, verifiability, security and accountability, and announced the MERLIN benchmarking framework to create a shared baseline between the quantum and AI communities [160-168][172-176][259-267]. David Sadek of Thales outlined four pillars (security through “friendly hacking,” explainability, regulatory responsibility and frugal AI for a reduced carbon footprint), insisting that trust must be demonstrated, not merely promised [188-197]. Tanuj Mittal linked trust to scale by referencing India’s UPI system, noting that once users trust a platform, massive transaction volumes naturally follow [281-283].
The subsequent “AI for Science” session, chaired by Prof. Karandikar, stressed that AI can compress years of research into months but warned that equitable access and reproducibility remain major challenges [369-372][380-384]. Antoine Petit described CNRS’s virtual “AI for Science, Science for AI” centre to foster interdisciplinary collaboration, while cautioning about the risk of AI-generated false papers [462-470][479-482]. Joelle Pineau argued that transparency and standardized evaluation are essential to address the reproducibility crisis, and that AI itself can accelerate reproducibility through open challenges [550-558].
Overall, participants agreed that sustained Franco-Indian cooperation, robust trust frameworks embedded across technology, regulation and governance, and open scientific practices are essential to scale AI responsibly and deliver broad societal benefits [8-11][126-129][272-275][592-603].
Keypoints
Major discussion points
– Franco-Indian AI partnership and concrete outcomes – The opening remarks highlighted a series of signed agreements (e.g., Dacia-GT, ExoTrail-Druva Space, H-Company-St James Hospital) that illustrate “real partnerships, real signatures and real commitments between our two countries” [8-12]. Julie later reinforced the strategic value of the summit, noting that the French President announced a new collaboration between H-Company and St James Hospital to improve hospital efficiency [50-52].
– Trust as the cornerstone for scaling AI – Multiple speakers argued that trust must be built into every layer of AI systems to achieve scale. Arun emphasized that “trust is the only way to scale” and that large organisations will adopt AI only when they trust it [84-92]. Neelakantan defined trust as “I have your back and I will not fail you” and described its evolution from pilot to production, stressing architectural embedding and regulatory codification [130-142]. Valerian listed pillars such as traceability, predictability, verifiability, security and accountability [159-167]. David added “trust is not a label … it’s a proof” and outlined technical, explainability and responsibility dimensions [188-196]. Tanuj illustrated the link between trust and scale with the UPI example [281-283].
– Ecosystem-driven innovation and open collaboration – The panel repeatedly called for an ecosystem mindset rather than isolated effort. Neelakantan said “the mindset of an ecosystem… we can’t do it all” [253-256]. Valerian advocated “breaking the walls between quantum and AI” and building a community through shared benchmarks like the MERLIN framework [259-267]. Julie highlighted complementary strengths: India’s “scale, speed” and France’s “deep-tech excellence, scientific force, industrial capability” [62-65].
– AI for scientific discovery, reproducibility and global cooperation – The second panel focused on using AI to accelerate research while addressing reproducibility and equity. Karandikar framed AI for science as a “core pillar” to compress decades of research into months and stressed the need to bridge the digital divide [368-374]. Amit described the IRO initiative to create high-end talent, IP pipelines and industry-academic collaborations [386-430]. Antoine explained CNRS’s virtual “AI for Science, Science for AI” centre and warned about the risk of AI-generated false papers [444-482]. Joelle emphasized transparency and evaluation as keys to reproducible AI-driven science [548-558].
– Inclusive, people-centric vision for AI’s societal impact – Throughout the summit speakers invoked shared values and the need to reach the “bottom of the pyramid.” Julie spoke of “trustworthy, low environmental footprint, positive impact for humanity” [46-49]. Raj Reddy called for measurable multilingual AI that serves villagers and stressed personal, sovereign edge models for privacy [294-324]. Karandikar and Irakli highlighted the digital-divide challenge and the importance of AI benefiting “all, not a selected few” [368-371][595-599].
Overall purpose / goal of the discussion
The AI Impact Summit was convened to deepen Franco-Indian collaboration, showcase French AI startups, and create concrete partnership opportunities while jointly addressing how to build trusted, scalable AI across sectors. A secondary aim was to explore AI for scientific research, promote reproducibility, and discuss policies that ensure AI’s benefits are inclusive, ethical, and globally distributed.
Overall tone and its evolution
– The session opened with a celebratory and diplomatic tone, praising high-level visits and announcing partnership signings.
– It then shifted to a technical-analytical tone, as panelists dissected the concept of trust, its architectural, regulatory and operational dimensions.
– Mid-discussion the tone became collaborative and ecosystem-focused, emphasizing community building, open benchmarking, and complementary strengths.
– The later AI-for-science segment adopted a forward-looking, visionary tone, balancing excitement about accelerated discovery with caution about reproducibility and equity.
– Throughout, the tone remained optimistic and solution-oriented, concluding with a reaffirmation of shared values and a call for inclusive, people-centric AI deployment.
Speakers
Speakers (from the provided list)
– Estelle David – Representative of Business France; opened the summit and highlighted French-India AI collaborations. Area: International trade & AI partnership. [S1][S2]
– Joelle Pineau – Chief AI Officer (as mentioned in the panel) and Vice President of AI Research at Meta (external source). Area: AI research, AI governance. [S4][S3]
– Sandeep Kumar Saxena – Chief Growth Officer, HCL Technologies. Area: AI-driven services and growth markets.
– Tanuj Mittal – Senior Director, Customer Solution Experience, Dassault Systèmes. Area: Industrial AI platforms and digital twins.
– Valerian Giesz – Co-Founder and CEO of Candela (quantum-computing startup). Area: Photonic quantum computers, quantum AI. [S9]
– Antoine Petit – CEO and Chairman, CNRS France (Centre National de la Recherche Scientifique). Area: Scientific research, AI for science. [S10]
– Raj Reddy – Professor, founding director of the Robotics Institute, Carnegie Mellon University; 1994 Turing Award winner. Area: AI, robotics, multilingual AI. [S11]
– Julie Huguet – Director of the French Tech Mission (LaFrenchTech). Area: French startup ecosystem, AI impact summit. [S12]
– Amit Sheth – Founder, Indian AI Research Organization (IRO). Area: AI research, neurosymbolic models for health, sustainability, pharma. [S13][S14]
– David Sadek – VP Research Technology & Innovation, Global CTO AI and Quantum Computing, Thales. Area: AI security, “friendly hacking”, AI ethics. [S15]
– Irakli Beridze – Head of Center of AI and Robotics, UNICRI (UN Interregional Crime and Justice Research Institute). Area: AI for law-enforcement, responsible AI frameworks. [S18][S17]
– Audience – Members of the audience who asked questions; no specific titles provided.
– Arun Sasheesh – Associate Partner & Country Director, TNP Consultants; moderator of the high-level panel. [S23]
– Abhay Karandikar – Secretary, Department of Science and Technology, India; moderator of the “AI for Science” session. [S25]
– Moderator – Unnamed conference moderator who introduced speakers and managed transitions.
– Neelakantan Venkataraman – Vice President & Global Business Head, Cloud AI & Edge Data Communications, Tata Communications. Area: Cloud AI, edge computing, AI-center of excellence. [S30]
Additional speakers (not in the provided list)
– Saloni – Session coordinator/moderator (addressed by Arun Sasheesh).
– Mark Vialmopillier – Speaker in the transcript who paid tribute to Raj Reddy, founding director of the Robotics Institute at Carnegie Mellon University.
– Julie Rouget – Introduced herself as “Julie Rouget, director of the French Tech mission”; appears to be the same person as Julie Huguet but named differently in the transcript.
– Professor Zuel Pino – Referred to as “Ms. Joelle Pino, Chief AI Officer” (different spelling of Pineau’s name).
– Professor Antonin Petit – Alternate spelling of Antoine Petit (already listed).
(Note: Some names appear multiple times with slight spelling variations; they are consolidated above.)
Opening remarks (Estelle David) – Estelle David of Business France opened the AI Impact Summit, welcoming Prime Minister Modi and President Macron at the French pavilion and noting that the week was a great opportunity to showcase French innovation. She highlighted that roughly one hundred French companies were present, spanning quantum-ready photonics, secure edge AI, mobility systems, cybersecurity, digital twins and green tech, and that all participants share the conviction that AI is “the next frontier” [1-5]. She also thanked the Platinum, Gold and Silver sponsors (CMA CGM, Total, BNP Paribas, Capgemini, Schneider Electric and MBDA) who supported the event [70-73]. David then outlined a series of concrete Franco-Indian agreements signed during the week, illustrating the summit’s focus on “real partnerships, real signatures and real commitments”. The first was a strategic partnership between Dacia Technology and GT Solved, signed in Bangalore at the French consulate [8]. A second deal saw ExoTrail and Druva Space contract for the delivery of fourteen satellite-propulsion systems, symbolising cooperation in the space sector [9]. Additional signatures included a collaboration between H-Company and St James Hospital in Bangalore, a partnership linking North France Invest with the TIAB, an alliance between T-U-B and a leading Indian innovation ecosystem, and a later H-Company-St John’s Hospital initiative announced by President Macron [10-13][46-51]. David emphasized that these outcomes would not have been possible without the extensive network coordinated by Business France and its partners, praising close collaboration with LaFrenchTech, Numium, Yuja Advisory, the Franco-Thai Chamber of Commerce, the Indo-French Chamber of Commerce and IFKI, which together mobilised French AI champions in India [14-15].
Keynote (Julie Huguet) – Julie Huguet, Director of the French Tech mission, introduced the summit as a bridge-building opportunity and reminded the audience that France now ranks among the world’s top three AI ecosystems (San Francisco, New York and Paris) [39-40]. She stressed shared values (trustworthiness, low environmental footprint and a positive impact for humanity) and cited President Macron’s announcement of the H-Company-St John’s Hospital collaboration to make hospitals more efficient and save lives [46-51]. Huguet showcased four French startups ready to leverage India’s scale: Agri-Co (digital agriculture), White Lab Genomics (AI-accelerated gene therapy), Candela (scalable quantum technologies) and Edge Company (autonomous AI agents) [54-58]. She highlighted the complementary strengths of India’s scale and speed and France’s deep-tech excellence, scientific force and industrial capability [62-65].
High-level panel (moderated by Arun Sasheesh) – Arun Sasheesh framed trust as the prerequisite for AI scaling, recalling the Indian Prime Minister’s “human-manner” concept and the French President’s reference to UPI as an example of how trust enables massive scale, arguing that “trust is the only way to scale” and that large organisations will adopt AI only when they trust it [84-94][281-283].
Neelakantan Venkataraman (Tata Communications) – Neelakantan defined trust in simple terms – “I have your back and I will not fail you” – and insisted that it must be built into every layer of the AI stack, from data lineage to explainability, zero-trust networking, advanced guard-railing and end-to-end governance. He highlighted the AI Centre of Excellence (AI COE) that has moved projects from pilots to production, and noted that trust has shifted from a soft guidance in early pilots to a baked-in regulatory requirement, citing India’s DPDP and the EU AI Act as examples of codified standards [115-117][130-142][135-137].
Valerian Giesz (Candela) – Valerian Giesz, co-founder of Candela, presented a five-pillar model of trust for quantum-AI systems: traceability, predictability, verifiability, security and accountability. To operationalise these pillars, Candela released the MERLIN benchmarking framework, which provides a shared baseline for quantum-AI results and aims to foster a community that bridges quantum and AI research [159-168][172-176][259-267].
David Sadek (Thales) – David Sadek outlined four complementary pillars of trustworthy AI. His team conducts “friendly hacking” to expose algorithmic vulnerabilities, ensures explainability of AI recommendations (e.g., a digital copilot’s decision), adheres to ethical and regulatory compliance (the EU AI Act and French digital ethics charter), and pursues “frugal AI” to minimise carbon footprints while developing AI-for-green applications such as aircraft-trajectory optimisation [188-197].
Sandeep Kumar Saxena (HCL Technologies) – Sandeep Kumar Saxena described how trust is cultivated within organisations. He recounted building AI-driven sales, forecasting and analytics tools for his own use, certifying every team member on AI, and launching “AI products made in India for India and the world”. At the summit he showcased seven solutions for enterprises, citizens and governments [215-224][220-222][217-219]. He argued that trust is built iteratively, through leadership commitment and demonstrable utility for customers.
Tanuj Mittal (Dassault Systèmes) – Tanuj Mittal traced the evolution of trust from a focus on model accuracy to a comprehensive lifecycle approach. He highlighted the need for data lineage, human-in-the-loop oversight, virtual-twin simulations of real-world conditions (e.g., testing a car in Indian road environments), built-in checks to prevent mistakes, and end-to-end validation from conception to decommissioning. He reinforced his point with the UPI example, noting that once users trust a platform, massive transaction volumes follow automatically [227-245][281-283].
Ecosystem mindset – Across the panel, speakers converged on an ecosystem mindset as essential for democratising AI. Neelakantan stressed that “we can’t do it all” and called for ecosystem-wide partnerships [253-256]; Valerian urged the community to “break the walls between quantum and AI” and to share benchmarks through MERLIN [259-267]; Julie highlighted the complementary strengths of India’s scale and France’s deep-tech excellence [62-65].
Transition moment – Mark Vialmopillier offered a brief tribute to Professor Raj Reddy, founder of the CMU Robotics Institute and co-winner of the 1994 Turing Award [300-304].
Keynote (Raj Reddy) – Raj Reddy, a Turing-Award-winning founder of the Robotics Institute, presented a forward-looking, people-centric vision, calling for measurable multilingual AGI that can serve villagers in their native languages and for “personal sovereign edge models” that operate offline to preserve privacy. He also urged the development of humane AI-powered weapons that disable rather than destroy, framing AI as a tool for peace as well as progress [294-324][340-347][306-312].
AI for Science panel (moderated by Prof Abhay Karandikar) – Professor Abhay Karandikar positioned AI as a core pillar capable of compressing decades of research into months, while warning that equitable access remains a major challenge and that the digital divide must be bridged [368-374][369-372].
Amit Sheth (IRO) – Amit Sheth outlined IRO’s strategy to create high-end talent, develop compact neurosymbolic models for domains such as healthcare, sustainability and pharma, and build an open knowledge-graph for drug discovery. He cited the recent FDA-approved arthritis drug developed with a pharma knowledge-graph as an example of AI-driven innovation [386-430][566-572].
Antoine Petit (CNRS) – Antoine Petit described the virtual “AI for Science, Science for AI” centre, which seeks interdisciplinary cooperation between mathematicians, computer scientists and domain experts. He warned that AI can generate large numbers of scientific papers, many of which may be false, creating a risk of wasted effort and misinformation [462-470][479-482].
Joelle Pineau (Chief AI Officer) – Joelle Pineau emphasized the reproducibility crisis and proposed two essential ingredients: transparent public release of artefacts and standardised evaluation criteria. She noted that AI can itself accelerate reproducibility through open challenges and shared benchmarks [548-558].
Audience Q&A – An audience member highlighted a trend whereby foundational scientific models are released openly while fine-tuned commercial versions remain proprietary, potentially limiting equitable access [608-617]. Pineau counter-argued that open-sourcing large models (e.g., the Llama series) dramatically expands adoption and scientific progress, despite industry resistance [618-628].
Policy perspective – Irakli Beridze of UNICRI presented the UN-backed responsible-AI toolkit for law-enforcement, now being piloted in India, Kazakhstan, Nigeria, Oman and Brazil. The toolkit provides practical frameworks, multi-stakeholder dialogues and policy recommendations to ensure AI is used responsibly while addressing public concerns [511-538][536-538].
Conclusion & action items – The summit reaffirmed that Franco-Indian collaboration is deepening through concrete partnership deals, that trust must be baked into every layer of AI systems, and that an ecosystem-driven, open-collaboration model is essential for scaling AI responsibly. Action items include formalising the Dacia-GT, ExoTrail-Druva and H-Company-St James Hospital agreements, launching Candela’s MERLIN benchmark, continued support from Business France and LaFrenchTech for matchmaking events, IRO’s development of neurosymbolic models and open pharma knowledge-graphs, and the rollout of UNICRI’s responsible-AI toolkit in India. Unresolved issues remain around defining universal metrics for multilingual AGI, balancing open-source foundations with proprietary commercial models, preventing the proliferation of AI-generated false papers, bridging the digital divide for the poorest populations, and establishing harmonised global guidelines for responsible AI [272-275][592-603].
Overall assessment – The summit demonstrated a strong consensus on the need for trustworthy, scalable AI built on complementary national strengths, while highlighting substantive debates on implementation pathways, openness versus commercial protection, and safeguards for scientific integrity. The diverse yet convergent perspectives suggest that future Franco-Indian initiatives will need to integrate architectural trust mechanisms, ecosystem partnerships, open-science practices and policy harmonisation to achieve inclusive, responsible AI impact [84-94][130-142][159-168][188-197][259-267][548-558][618-628][511-538].
We were also very proud yesterday to welcome the different leaders who came for the summit, and especially Prime Minister Modi and President Macron, to come on the pavilion and discover the companies and speak with our companies. So as you see, through this week, the French AI delegation was actually more than what you are seeing on the pavilion. Altogether, it was about 100 French companies who came. And actually, when you will meet them, you can find them in different sectors like quantum-ready photonics, secure edge AI, mobility systems, cybersecurity, digital twins, and green tech. And actually, all of them are convinced, and trust, that AI is the next frontier. So now just to share with you what is making this week very special.
Actually, as with what I said, you can see that it was very intense, that’s for sure. But it’s not only intensity; actually, as you will see, it’s also a lot of results achieved, and results with real partnerships, real signatures and real commitments between our two countries. I would just name a few for the AI. Maybe the first, with Dacia Technology and GT Solved, where they signed a strategic partnership on Monday evening in Bangalore at the French consulate during the French AI night, and that really shows strengthening of Franco-Indian cooperation in engineering, automation and intelligence. Thank you. A second one in a different sector, between ExoTrail and Druva Space, where they signed a major contract in the space industry to deliver 14 satellite propulsion systems, which is also a very strong symbol of the cooperation between France and India in terms of space.
Another signature between H-Company and St. James Hospital. And a final one that I can mention is actually a partnership between North France Invest and the TIAB, that are actually uniting all together, which will create new bridges between actually one of Europe’s most dynamic industrial regions and the other one, the T-U-B, which is actually a partnership between the two, one of India’s most powerful innovation ecosystems. So as you can see, when we see all these signatures, and I’m not just talking about AI, you can see that the dynamism between France and India is very strong. But now, actually, when you see all this, it wouldn’t have been possible without the strength of our collective network, and Business France, the trade and investment agency, is really proud to collaborate, and we have collaborated very closely with different partners: with definitely LaFrenchTech, and thank you Julie for the long-standing partnership supporting the French startups and for bringing all these startups here in India; with Numium, the leading French digital and tech association, helping to structure and mobilize the presence of French AI champions in India; also some other partners, Yuja Advisory, Achoo, but also the co-organizer of this event, this panel at the main summit, the Franco-Thai Chamber of Commerce, the Indo-French Chamber of Commerce, IFKI.
I’m still in my… So thank you, thank you to all of you. Now we are actually arriving to today’s session, where we are gathering today most influential leaders shaping the future of AI. So I won’t be long, but we are really honored to welcome Julie Huguet, Director of the Mission French Tech. Also Arun Sardesh, Associate Partner and Country Director for TNP Consultants. Neelakantan Venkataraman, Vice President and Global Business Head, Cloud, AI and Edge, from Tata Communications. Valerian Giesz, Co-Founder and CEO of Candela. Dr. David Sadek, VP Research Technology and Innovation, Global CTO, AI and Quantum Computing, from Thales. Sandeep Kumar Saxena, Chief Growth Officer from HCL Technologies. And finally, Tanuj Mittal, Senior Director Customer Solution Experience from Dassault Systèmes.
So we’ll be really happy to hear your experience. And before I conclude, just two thanks also to our partners, because you know this event has also been possible thanks to them. Our Platinum sponsors, CMA CGM, Total. Our Gold sponsors, BNP Paribas, Capgemini, Schneider Electric, and the Silver sponsor, MBDA. Again, thank you very much, all of you. Thank you to our co-organizer, IFKI, and I wish you a fruitful session. Maybe just before I end, also a big thanks to the teams, the different teams, the Business France teams, but all the French team all together, who worked like crazy to make this week possible.
(Applause) Thank you very much, Estelle. We now move forward to our keynote address. It is my pleasure to invite Ms. Julie Rouget, director of LaFrenchTech. Julie leads one of the world’s most dynamic innovation ecosystems, LaFrenchTech, representing thousands of deep tech companies and scale-ups shaping Europe’s technological leadership. Julie, over to you. (Applause)
Thank you. Good morning, everyone. Thank you. I’m Julie Rouget, I’m director of the French Tech mission, so we support the growth of French startups in France and abroad. I’m truly delighted to discover the tech ecosystem here in India, a country that trains around 1.5 million engineers every year. I think it’s the highest number in the world, so I’m very impressed. The AI Impact Summit is an opportunity to create more bridges between France and India, and exactly one year ago, actually, we hosted the AI Summit in Paris. That moment helped us, helped our ecosystem to structure itself. It was the opportunity to attract investment, to unlock talent, to accelerate the creation of French startups. Today, the French tech ecosystem is strong and ambitious.
According to Dealroom, the top three AI ecosystems globally are now San Francisco, New York, and Paris. We are very proud of it, and we are really sure that the AI Summit helped us to build this strong ecosystem. Across France, AI is becoming a pillar of our industrial transformation. We already have major European leaders such as Mistral AI or H-Company. And I’m convinced that the AI Impact Summit here in Delhi would be as valuable for India as it was for us. For the French tech, this week in India was of course a great opportunity to showcase French innovation. But it was also an opportunity to deepen our partnership with India. Beyond business, I’m truly convinced that we share common values: trustworthy, low environmental footprint, positive impact for humanity.
We support innovation when it reinforces our economies, of course, we are committed to making the world a better place for all of us, but also when it brings real progress for humanity. Innovation only makes sense when it serves the greatest number. And to give you a concrete example, the French President Macron announced yesterday that H-Company and St. John’s Hospital in Bangalore have started a collaboration to make hospitals more efficient and to contribute to save thousands of lives. In healthcare, in agriculture, climate, and many other sectors, Franco-Indian partnerships are key for innovation with real impact. This is why I was really happy the whole week to be here with outstanding French startups, companies already working with India, like Estelle told us a bit earlier, and others ready to build strong and strategic partnerships here.
And thank you. And maybe I will introduce a few of them. Agri-Co is transforming agriculture through digital tools that connect farmers directly to markets. White Lab Genomics uses artificial intelligence to accelerate gene therapy development. Candela is building scalable quantum technologies that will shape the future of computing. And Edge Company develops advanced AI agents capable of computer use to perform complex tasks autonomously, just like a human would. For these innovations to become global leaders, international development is key. And we all know that the world is changing. Economic alliances are evolving. We see it with Canada, Latin America, Gulf countries, and obviously here in India. Today, India represents a scale of 1.4 billion people, 200,000 startups. It’s huge.
France represents deep tech excellence, scientific force, industrial capability. And I think this complementarity is powerful. In France, we like to schedule meetings weeks in advance. In India, we learn to be a bit more flexible. And honestly, innovation also requires agility, and perhaps a bit of Indian wisdom. That’s what we learned as well this week. And it was, like Estelle said, a very important week for the startups who came with us. So I wish you all a good session and a great day. And thank you for being here with us this morning.
Thank you so much, Julie. We will now move to our high-level panel discussion, where leaders from telecom, quantum, industrial AI, cloud infrastructure, and enterprise digital transformation will reflect on how our two countries can jointly accelerate trusted AI across sectors. I am pleased to introduce our moderator for this session, Mr. Arun Sardesh, Associate Partner and Country Director, TNP Consultants. Joining Arun on the panel are an exceptional group of leaders: Neelakantan Venkataraman, Vice President and Global Business Head, Cloud AI and Edge Data Communications; Valerian Giesz, Co-Founder and CEO, Candela; Dr. David Sadek, Vice President, Research, Technology and Innovation, Global CTO, AI and Quantum Computing, Thales; Mr. Sandeep Kumar Saxena, Chief Growth Officer, HCL Technologies; Tanuj Mittal, Senior Director, Customer Solution Experience, Dassault Systèmes. With that, ladies and gentlemen, it is my pleasure to hand over the session to our moderator.
Thank you, Saloni. Good morning, everyone. It’s actually a pleasure and a privilege to be part of this summit and to be the moderator of such an esteemed panel. I would like to start by thanking Business France, IFKI, and the AI Impact Summit organizers for giving us the opportunity to discuss something that is very important about trusted AI. So maybe I’ll start with actually what happened here yesterday. Our Prime Minister talked about “human manner”, the concept that he introduced. Our French President talked about scaling, and he used UPI, the Indian payment system, as a good example of scale. And if you really think about it, there is a large element of trust involved in it. The way that in India we accepted UPI means we trust it.
And when we trust things, scale is possible. Usually when people talk about topics such as trust or safety, there's a bit of pessimism, a focus on challenges. But in this particular session, I'd like to be more optimistic and present trust as the only way to scale. If you want the large corporations, the banks, the governments to adopt AI, they need to trust it. And only when these organizations adopt AI can we really achieve scale. So I'd like to set the tone with that comment. And maybe, you know, in the last five years, especially after COVID, we have been facing changes quite rapidly, right?
I mean, things are moving from one thing to another. We all started our careers elsewhere, and today we are talking about AI. So a lot of evolution in our lives as well. So I want to start from that point: please introduce yourself, but also tell us about the evolution you have gone through, and how you define trust. Maybe we'll start with you, Neel.
Thank you. A very warm good morning to all of you, and thank you, Business France, for having me here. It's a pleasure to be here talking to all of you, and hopefully we'll have a nice interaction. So, just to introduce myself: I head the cloud business for Tata Communications, which includes general-purpose cloud, now AI cloud, edge, and dedicated private clouds for our enterprise customers. We are an international company; 80% of revenue still comes from India, and 20% from outside India. As part of our cloud business, we did have a large AI/ML offering. And about four years back, when suddenly the transformer architecture came onto the scene, we didn't know about it at all.
So when it came up, we thought, what is this new architecture which has come up, and how is it going to impact us? Then OpenAI and ChatGPT came up, and we started thinking about how we were going to apply this to our businesses internally, and also how we were going to offer it as a service to our customers. So ours has been a journey of learning a lot in the last three years, I would say. All of us are learning, and it's been pretty fast-paced and pretty steep technically. Through the organizational levels, right from the CEO to the bottom-most, we had to learn what it would take in this new world: how do we adopt Gen AI within the company, and how do we adopt Gen AI outside and offer it to our customers.
So, a tremendous scale of change, and real potential for innovation for our customers and for the company. We established an AI CoE within the company about three and a half years back. We had a lot of pilots going on within the company, and now they are in production. And similarly for our customers in the enterprise world, and beyond enterprise, for government and institutions that work very closely with government on citizen-scale projects, all of us have seen that, right? So truly, in the last five years, it has moved from, I would say, POCs and pilots to production. And production at an entry level; scale, I would say, is yet to be achieved.
It's production in the sense that, okay, there is a return on investment in the enterprise context and a reasonable outcome for citizen-scale projects, and therefore we should start putting it into production and then, of course, scale it. And scaling means that trust has to be put on steroids. So let me talk about trust now. I would describe trust, in very simple words, as: I have your back and I will not fail you. That's trust; beyond that, there's nothing. So when we deploy these systems, the stack, and then the use cases and applications, trust inherently has to be a foundational element.
It cannot be a bolt-on on top of what we have built; it has to be built in at every layer. And trust has also evolved within AI systems in the last five years. It started off, because these were POCs and pilots, not really exposed to end users in a big way; it was a closed user group, and therefore trust was more of a good-to-have. But now it has moved to being foundational, more architectural in nature: every element of the architecture needs to have trust built in. From a regulatory point of view, trust has also evolved. Earlier it was all soft guidance, saying you need to be ethical, you need to have transparency; but now it is baked into regulatory policies and requirements, whether it is the DPDP Act, which has been operationalized in India, or the EU AI Act, which is already operational.
So now it is in black and white. And from a technology point of view, as I said, trust is foundational and architectural: whether you have explainability built into the outcomes, whether the behavior of the systems is predictable and explainable. You should be able to explain it, and it should be auditable: for the data which is fed into the models, the training, the inferencing, and the outcomes which happen, you need a very clear data lineage, and you need end-to-end governance. We talked about edge computing, about billions of devices which could be inferencing at scale; therefore, whatever happens in the cloud and whatever happens at the edge, the entire workflow and process has to have end-to-end visibility in terms of governance. And finally, resiliency is also trust: it should not break. So from Tata Communications' point of view, when we talk about trust being the bedrock and foundational element of AI, it will scale when you put it into production.
We build in trust components at every layer, starting at the infra level, including zero-trust networking, because networking is the invisible layer which carries data across AI platforms, up to the software layer and the platform layer. We have advanced guardrailing technology, data lineage, data governance models, and end-to-end data pipelining and management. So I'll hand it back to you. Long answer, sorry for that.
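The data-lineage idea Venkataraman describes, knowing exactly which data fed which training run and which deployment, can be sketched as a hash-chained, append-only audit log. This is an illustrative sketch only, not Tata Communications' implementation; all names and record fields here are hypothetical.

```python
import hashlib
import json
import time

def _digest(record: dict) -> str:
    # Stable hash of a record so any later tampering is detectable.
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

class LineageLog:
    """Append-only log linking each AI pipeline step to its predecessor."""

    def __init__(self):
        self.entries = []

    def record(self, step: str, detail: dict) -> str:
        prev = self.entries[-1]["hash"] if self.entries else "genesis"
        entry = {"step": step, "detail": detail, "prev": prev, "ts": time.time()}
        entry["hash"] = _digest(entry)  # hash covers step, detail, prev link, timestamp
        self.entries.append(entry)
        return entry["hash"]

    def verify(self) -> bool:
        # Recompute every hash and check each link back to the previous entry.
        prev = "genesis"
        for e in self.entries:
            body = {k: e[k] for k in e if k != "hash"}
            if e["prev"] != prev or _digest(body) != e["hash"]:
                return False
            prev = e["hash"]
        return True

# Hypothetical pipeline: ingest -> train -> deploy, each step chained to the last.
log = LineageLog()
log.record("ingest", {"dataset": "customer_tickets_v3"})
log.record("train", {"model": "support-classifier", "epochs": 4})
log.record("deploy", {"endpoint": "edge-cluster-7"})
```

Because each entry's hash covers the previous entry's hash, editing any record breaks every later link, which is one simple way "end-to-end visibility" can be made verifiable rather than merely asserted.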
No, no, not at all. It's very important. And, you know, for us, Tata is synonymous with trust, so I have to mention that. Well, being from a French company, I know about Quandela. But would you like to tell us about Quandela, your evolution, and how you define trust from a quantum computing perspective? Thank you
very much. Yeah, so maybe I will just introduce Quandela a little bit. It's a startup coming out of a CNRS lab; we use CNRS technology to build photonic quantum computers. We are a full-stack company, developing software and hardware. And now we partner with industrial players like Thales to move quantum from the lab to industry, to the real world, and to deploy systems. As CEO, trust is a key pillar in our roadmap, because we need to build reliable systems and demonstrate compliance and security in order to scale. That's very important for us. So when you ask what trust means in my vision, well, I'm an engineer, so basically it's easy.
First, traceability. Traceability, because we need to trace the systems, the models, the data that we use for AI. Even in quantum, we use quantum artificial intelligence; we develop quantum machine learning. And for all of this, it's important to trace the results and to get reproducible runs. Second is predictability: you need to know where the limits of the models are, and where the failure modes are as well, and this is why it's important to investigate them. Verifiability is the third one, because we need to benchmark performance. Actually, we are now at this step: at Quandela we released a framework called MerLin for machine learning, and it's very useful.
It's used to benchmark applications and performance on quantum computers using AI techniques, and to run stress tests on those applications. Fourth, security. And the fifth pillar is accountability: how to make sure we have clear ownership along the AI and quantum computing value chain, between hardware providers, software providers and certification providers. We need clear ownership of everything. And with all of this together, we will be able to work in trust, to build trust for the end users, and to scale. That's it for me. Thank you.
Thank you, Valérian. And Dr. David, you are in charge of AI and quantum computing at Thales, both evolving topics. How do you see this, and what is trust for you? You have multiple topics in hand.
So, hello. ... We have a team doing what we call friendly hacking, which launches friendly attacks on our own algorithms to identify their breaches and vulnerabilities, and to propose countermeasures. And by the way, this team won a challenge from the French MOD two years ago, because the team succeeded in retrieving sensitive data which had been used to train the system. The third pillar is the explainability of our systems. If you have a digital copilot in a cockpit recommending that the pilot turn left in 45 miles, for example, the pilot should be entitled to ask why she or he should do that, especially if she or he had in mind to do something different. And the system should be able to answer "because there is a threat, there is a thunderstorm", and not "because layer number three of the neural net was activated at 30%". Okay?
Okay? and finally the fourth pillar which is last but not least is what we call responsibility and responsibility actually is twofold there is one stream uh which is the uh compliance of ethics principles of laws of regulation principles as you know in europe we have this ai act and talus also issued a digital ethics charter a few years ago which comes in 10 commitments actually we are really working to achieve it’s on our strategic roadmap business roadmap now and the second stream is about the uh uh full carbon footprint and energy consuming so we have teams working on frugal ai to minimize the volume of data which are used to train systems for example this is minimizing the the footprint of the technology itself ai technology And we have also the complement of this is what we call AI for green, how to use AI to minimize the footprint of applications like working on optimizing the trajectories of aircraft, for example, to minimize what we call the condensation traits which are generated by the aircrafts.
So just to conclude this first part, I would say that trust actually is not a label. It’s not a promise. It’s a proof. Things have to be proved in our business. Thank you.
Thank you, David. Sandeep, coming to you: we are in the services industry, and our whole operation is built on relationships and trust. So how are you coping with these new challenges, these new technologies coming up? What's your take on this?
Thank you. Thank you for inviting me here. It's a very valid question, and I will not answer it in a very technical way, because I'm sure all the aspects around technology, architecture and governance have been covered. So, my name is Sandeep. I've been in London for the last 24 years, and I'm moving to India next month to accelerate the India business. When I was in London, I was managing the European business for HCL Tech; we're just about a $15 billion company providing services. Then I took this job of growth markets, which is India, the Middle East, Africa and France, and it gave me a very different perspective: I was managing about a $1.5 billion business, and now here I come into a completely different world.
And I started like a startup: I built my own systems, based on AI. Like we say, before you preach to anybody, you learn yourself. So all my systems today for the growth markets business I lead are built on AI: my inside sales engine, my business analytics, my forecasting, everything. I have moved from analytics to reasoning, and I am hoping I will reach predictability in some way, because the agents are still not predictive; they are still reasoning. But that's where I started. Every person in my sales and delivery teams is certified on AI, and I started with myself, because if you have to embrace AI, it starts from the top, from the leader. We talked about trust; it starts with you. There is no Excel sheet in my world, no PowerPoint in my world. You ask a question using voice and you get an answer on a dashboard; I can show you right here. Of course, I will not tell you my forecast for this quarter, but you ask a question about a company and you get it in two and a half minutes. That is the power of AI. Earlier, we had a lot of people trying to dig data from here and from there; that doesn't exist anymore. In two and a half minutes you ask for the market approach or anything you want to do. So in my view: imbibe it yourself. It is an iterative process. You do not build trust just like that; you build it over a period of time. You have to be patient, you have to learn, you have to help others learn, and that learning process continues over a period of time. And then you build trust.
So my advice to anybody: the reason I moved to India is that it is very exciting, a land of opportunity; it is coming home. And you are in the NCR, which we call Delhi; it is the home of HCL Tech. We have a very unique proposition, or advantage, in India and globally: we have what we call AI products. Very proudly, made in India, for India and for the world, which is HCL Software. We have the expertise of our global services, working with a lot of customers across the globe. So it gave me the opportunity to bring AI products and services together into what I call AI solutions. In this AI Impact Summit we have launched seven solutions, not just for enterprises but for citizens and for governments as well. You are more than welcome at Hall 4, 4.5; if you have not visited, please go and see what we are talking about. These are solutions which will help us protect ourselves: fraud detection systems, compliance systems, training systems, skilling systems, and not just for enterprises. So to me, AI is about people, progress and planet. Thank you.
Coming to you, Tanuj: Dassault is such a flag-bearer of French innovation. How do you see this whole evolution, and what does trust mean at Dassault?
Thank you, Arun, and good morning, everyone. I represent Dassault Systèmes, which champions the cause of industrial AI platforms. Now, to this point of trust: the definition, the expectation itself, has evolved over the last several years. Five years back, for example, AI was still in silos, and the definition of trust was mostly centered around the accuracy of the output. You have a model, you feed data, you put in a query; if the results are near your expectation, you are happy. But that is no longer the situation, because of widespread understanding of AI as a topic, and adoption as well. Now there are new dimensions which have been added to make it trustworthy, and there are quite a few points I wanted to highlight.
I think the highlights are already covered by my fellow panelists, but for the sake of clarity, and at the cost of repetition, I will say them again. The first one is, of course, the lineage of the data. The industrial AI platform needs to ensure by design that the data being leveraged to solve a problem is ethical, that it has traceability, and that no mischievous data is being used. With that done, when the output comes, it is credible and trustworthy to the people who are going to use it. The second point I wanted to highlight is about people in the loop. We still have a long way to go before we trust a totally automated system without human intervention. We still like to have, at least at the governance level, people in the loop who ensure that the processing and the output given by the machines are indeed in line with the objective for which they were created.
100% trust in machines alone is still a little far off, so people in the loop definitely build trust for all of us. Another aspect, particularly from an industrial AI perspective, is to simulate the result of an AI model in a real-world environment. For example, when you design a car, you design it in context: the car has to run on roads, and the condition of roads changes from place to place. If you really need to trust a car which was, for example, developed elsewhere in the world but is being used in India, people will trust it if that car is at least tested in the real-world environment of India as a context. You now have virtual twins not only of the product; at Dassault Systèmes you also have virtual twins of the environment.
So you can simulate how that car will behave when it actually gets on the road in Indian conditions. That builds trust. Another example is the checks and balances in the model itself, so that it does not let you make a mistake, whether the mistake is unintentional or deliberate: what kind of compliance you have already built into the model. If that is robust, the chances of getting a wrong or broken output are far lower, and that builds trust. And the last point I wanted to highlight: unless an AI application is end-to-end, from conceptualization to decommissioning, if it is still in silos, the overall output is less trustworthy. Compare that with a situation where, right from conception up to decommissioning, you have been able to simulate the whole process multiple times, prove it, streamline it, and then launch it.
That builds a lot of trust for the people who are actually going to build that system in the physical world, and for the people who are subsequently going to use it. So these are some of my views. Arun, back to you.
Thank you. Thank you, Tanuj. I think we have some more time, but I'm glad that all of you, in fact, touched on the deep strength of French innovation and technology, and the twin pillars of Indian scale and speed. So maybe I quickly want everybody's point of view: what is the change of mindset you are looking for, to build trust and the democratization of AI at scale? What is that mindset change, Neel, quickly?
I would say that the mindset change we have to move towards is an ecosystem mindset, because we can't do it all. For example, we partner with Thales on many of the security components which we provide as part of a solution. So it's an ecosystem play, and we need to work very closely to make sure the trust is not broken, and the trust architecture is maintained across the ecosystem.
Valérian?
I think on my side, the priority should be to break the walls between quantum and AI and build a huge community. This is also why at Quandela we released MerLin, a framework which aims to do exactly that. Because that's the point: trust comes from benchmarking and reproducibility, not from one-off charts. And MerLin has been released with one very pragmatic first mission: to establish trust between the AI community, AI developers, and quantum computers, a brand-new technology which is now available. We have actually published some reproductions of papers; we are here to show quantum machine learning results in a controlled environment. We are turning scattered claims into a shared baseline, to build a community and invite people to use them.
So, yeah, my main message is: let's break the walls and share what we have learned, in order to establish trust all together and build a common baseline, especially between France and India. In France, we can develop the technologies; in India, we can scale them. So we have an ecosystem and a community.
What’s your take, David?
Well, I would say that in France we have spent decades building things which are really supposed to work in contexts where failure is forbidden, with companies such as Thales, Dassault and Airbus; it has taken us decades to do this. So we are living in a world of certification, of regulation, of mathematical proofs: trust has to be proved. This is very important. We cannot afford, as I said earlier, to just declare trust, to say, okay, please trust us. When you deal with critical systems, you have to prove the trust. And I used to say that trust is gained by the drop and lost by the bucket; this is very important. India, meanwhile, has been doing something equally extraordinary, I would say in record time, with this digital infrastructure at billion-human scale, which is really extraordinary. And I think that the combination of depth and scale between France and India is really the very challenge here.
And to keep trust within this challenge is probably the way to go to make people adopt AI at large scale. Thank you.
Sandeep, for you. Can you just say one word?
Yeah. Just be open -minded and learn to adopt change. Adaptability. Very simple. There is nothing else.
And you, Tanuj?
Yeah, quickly: the scale is directly proportional to the trust we build in the system, for sure. And I'll build on the example you gave initially, which our Prime Minister also quoted: UPI. It was launched in 2016; last year in December, it clocked some 21 billion transactions, translating to some 30 lakh crore rupees worth of money changing hands. And today, UPI is used even by the most digitally illiterate person in India; he doesn't hesitate to put his trust, and his money, in the system. So if you build the trust, the scale comes automatically.
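Taking Mittal's quoted UPI figures at face value (the numbers are the speaker's, the arithmetic below is just a back-of-the-envelope check of what they imply per transaction):

```python
# Figures as quoted by the speaker for a single month of UPI usage.
transactions = 21e9          # ~21 billion transactions
value_inr = 30 * 1e5 * 1e7   # 30 lakh crore rupees (lakh = 1e5, crore = 1e7)

# Implied average ticket size: total value divided by transaction count.
avg_ticket = value_inr / transactions
print(f"Implied average transaction value: about {avg_ticket:,.0f} rupees")
```

An average ticket in the low thousands of rupees is consistent with his point that UPI is used for small, everyday payments by ordinary people, which is exactly where trust at scale shows up.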
Thank you, gentlemen. I think we have almost finished our time. Thank you very much; I encourage you to meet with the speakers, and thank you for your time.
Thank you once again to our moderator and to all our distinguished panelists. I would now invite all the speakers to please remain on stage for a brief memento presented by Mr. Mark Vialmopillier, and for a group photo. Ladies and gentlemen, please join me in applauding our speakers as we take this moment together. Thank you. He was the founding director of the Robotics Institute at Carnegie Mellon University, and he was instrumental in helping to create the Rajiv Gandhi University of Knowledge Technologies in India to cater to the educational needs of low-income, gifted rural youth. He and Edward Feigenbaum won the 1994 Turing Award, sometimes known as the Nobel Prize of computer science, for their exemplary work in the field of artificial intelligence.
I now request Professor Raj Reddy to take the stage to deliver his keynote.
...phone in your pocket, it was listening to you and using it to guide your discussion. I'm hoping we'll create user-friendly interfaces so that when I speak in Telugu, you can hear in Hindi, and when you speak in English, I can hear in my preferred language. And I think we can get there very quickly; it's being done already. There are two startups in India, Sarvam and BharatGen, both trying to do it. My request is that we create a quantitative, measurable metric to show that we have achieved this goal. What that means to me is that it's not enough. Already people will say, we have multilingual intelligence; we have systems that will speak, and you can speak in one language.
But it's not usable, especially if you're a person in a village and you don't even know where to begin. So the first issue is: how do we create a multilingual AGI, and how do we make sure we have measurable progress? There's a statement: if you can't measure it, you can't improve it. We need to improve the existing models, and they will probably need more computation, more memory, and more bandwidth. Fifty years ago, we created a thing called the 3M computer: a MIPS of processing power, a megabyte of memory, and a megapixel display. Today, we should create 3T computers: a terabyte of memory, a teraflop of computational power, and a terabit of bandwidth. That's what we should aim for. That means every one of us should have an AI companion in our pocket that runs what we call foundation edge models.
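Reddy's 3M-to-3T jump is a uniform million-fold increase per dimension (mega is 10^6, tera is 10^12); the pairing of the 3M megapixel display with 3T terabit bandwidth is his, and the snippet below is just the arithmetic:

```python
MEGA = 10**6   # mega-: the MIPS, megabyte, megapixel of the 1970s "3M" machine
TERA = 10**12  # tera-: the teraflop, terabyte, terabit of the proposed "3T" target

# Each "T" dimension is the same constant multiple of its "M" counterpart.
scale_up = TERA // MEGA
print(f"3M -> 3T is a {scale_up:,}x jump in every dimension")
```

Stated this way, the 3T pocket companion is not an incremental upgrade but six orders of magnitude beyond the machine Reddy's generation set as its target.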
Right now, many of the models that are on the edge are like three billion or nine billion parameters; we're off by a factor of 100, and we need to get there. And India can... where am I? How am I doing for time? It used to be that there would be a timer here, but whenever it is time, tell me and I'll stop. Okay, so that's one. The second important point I want to make is about people at the bottom of the pyramid. Most of the talks I've heard, most of the expectations, assume you are AI-enabled and can actually make effective use of AI. I come from a little village, and I guarantee you, not one of them knows anything about computers or AI; they are simply not going to benefit from this whole technology. So what we need to do, just like the agricultural revolution of M.S. Swaminathan, is figure out a way to get this technology to the people at the bottom of the pyramid.
Again, I'd be happy to talk about any of these for much longer, but we only have a short time. Then, in order to do both of these things (I said we need teraflop, terabyte systems), what we need are personal, sovereign edge models. Currently, if you talk to anyone, they'll say we already have access to AI. But it is not private; it is not personal and secure, because these systems are always going to the cloud to access the AI models. As soon as you do that, you have no privacy. In the future, we want systems which are personal, autonomous, and can be used to do things.
So I'm talking about cognitive assistants that are always on, always working, always learning. And that is the challenge of how to get there. We have to cut it off from the grid; we cannot let it go to the grid, because then it's no longer private. So, anyway, there is a whole set of issues of that kind. How much time do we have? Anyway, somebody tell me. There are three or four other topics we can talk about. One is: I had a child come and say, if AI is going to teach me and knows everything, why should I go to school? Yeah. The answer to that will take longer than two minutes, but I only have two minutes.
But you can figure it out. Basically, what we need to do is teach the kid learning to learn using AI, having a dialogue; learning to think, which means teaching critical thinking; and learning to do, which means learning how to execute. Right now, most kids in India don't even open their mouths in classrooms. They're afraid. So we need to get over that barrier and let them talk, think, and go through critical thinking. With that, I'm going to stop, but I want to leave you with one other thing, which you can figure out. One of the things I remember from the Vedas is Om Shanti Shanti Shanti. Peace. One of our keynote speakers said that AI-based autonomous weapons are going to destroy the world.
That's a risk. But why don't we have humane weapons? When a missile is going to hit a hospital or a school, it is easy with AI to discover that and deflect the missile. Why should we even kill the soldiers? They're innocent; they're just somebody recruited, and they're being bombed and killed. We should build humane weapons that will disable rather than destroy. There are lots of very interesting issues of this kind, and we need to think about them. Thank you. Namaskar.
A very good morning, ladies and gentlemen. Our next session is a panel discussion on AI for Science. The panel will be moderated by Professor Abhay Karandikar, Secretary, Department of Science and Technology, who is also the chair of the AI for Science Working Group. I would now request the panelists to please come to the dais. The panelists for the session are Mr. Irakli Beridze, Head of the Centre for AI and Robotics, UNICRI; Professor Antoine Petit, CEO and Chairman, CNRS, France; Ms. Joëlle Pineau, Chief AI Officer; and Mr. Amit Sheth, Founder, Indian AI Research Organization. A very warm welcome again to the panelists. I will... right, group photograph.
Okay, I request all on the dais to please come forward for a group photograph. We’ll have the photograph for you on your mementos. Thank you, panelists. Thank you, Professor Karandikar. I now hand it over to our moderator, Professor Abhay Karandikar, Secretary, Department of Science and Technology, to carry forward the panel discussion. Sir, over to you.
Thank you. Thank you, Ekta. So, distinguished panelists, we have a very distinguished panel today; colleagues and members of the global scientific community, it is my pleasure to welcome you to this panel on AI for Science, which we consider a very core pillar of our vision for this India AI Impact Summit. Today we stand at the threshold of a new research paradigm, and our goal is not just to witness the AI revolution but to steer it towards a more equitable, inclusive and transparent future. In today's AI world, we are moving beyond traditional methods, where AI-driven models and automated experimentation have the potential to compress decades of research into months.
The rapid advances of these technologies, however, have so far not been equitably distributed, and that is one challenge; many regions still face significant barriers. But the realm of possibility for using AI for scientific discovery continues to hold a lot of excitement. Today, we are joined by leaders who represent the entire spectrum of scientific innovation: policymakers, institution builders, and people from the governance and national research ecosystems. I look forward to the panelists' insights on the exciting possibilities in AI for science, and on how we can bridge the digital divide and build a genuinely reciprocal global scientific ecosystem. So with this, I will begin with a few questions.
I will request the panelists to answer; of course, they are free to elaborate on other things as well, and then we will open the floor to the audience. So let me begin with Dr. Amit on the far end. Amit, you have been building IRO as a national-scale institution in India. Can you tell us how this model can help overcome the specific barriers we have identified in this region, such as inadequate compute and fragmented data sets? And I would also like you to elaborate on how we can ensure that the AI research conducted in our centres of excellence actually reaches the translational stage, addressing real-world challenges.
So if you can just take five to seven minutes on this, please go ahead.
Hello. Yeah. Thank you very much, Professor Karandikar. This is a perfect question for me to talk about. This is why I’m here. I moved from the USA after 44 years to address exactly the question you asked. Two days ago, I was on another panel, and I asked this question to the audience: if I were the founder of DeepSeek, with all the funding that he had and has, could I find those 200 to 250 AI engineers and researchers that he had access to, to build DeepSeek? Out of around 100 people in the audience, three people raised their hands, saying, yeah, we might. Of those three, two were students. So only one, you know, mature person basically thought that we could have that.
And I think that gives an answer to what we need to do. So India is well on its way, I mean, to grow. Many people know something about AI, and they will certainly have the skills necessary. Say, India has been big in IT services, and whatever IT services need, people here will be able to supply; the skill set people have here is adequate for that. But two very important members of IRO’s board are Ajay Chaudhary and Sharath Sharma, and they have extensively lamented that India has not been a product nation. It has not made any global products. Hardly any global brands, you know, have been developed in India.
And for that, we need more than skills. We need people at the high end of expertise. That means our own indigenous research capacity, our own ability to train innovatively. And that’s what we need to do. A very common model has been that, you know, we do bachelors here. Take the example of Aravind Srinivas. He did IIT Madras. Then you go outside: he did his PhD at Berkeley. I did mine at Ohio State. And then he worked for three companies, DeepMind, OpenAI, and Google. And then he did his own company. But that also in the U.S. We want that to be done here, right? So the same ecosystem in which he got trained after leaving India, we want to provide that in India, right?
And there are, I think, a lot of things happening. As you know, there is a 40% decrease in Indians going to the United States for studies. And that will continue for a while now, right? Most of you know of the results. So, first and foremost, IRO is developing an environment to create high-end talent of innovators. And by the way, if you see, IRO’s founders are professors who have graduated nearly 200 top-end PhDs, so we know how to create that. Secondly, we have created a broad variety of collaborations with various universities, and we are starting to do that in industry. And we are creating significant infrastructure to support IP creation, to license that, or to work with the corporates and startups who will make the products.
So the idea would be that we’ll co-innovate: we’ll jointly work at IRO with the companies, with the startups, with the entrepreneurs. And we have already lined up a large number of investors, angel, seed, as well as growth stage. They are all hungry for deep-tech AI startups, and we will provide a comprehensive environment to take ideas forward. Now, some of us founders have also done companies. Three of the four companies that I have done are AI companies licensing the research I did at my university. Ramesh Jain has done more companies than I have, and he’s also a co-founder. So we have an understanding of the entire pipeline it takes to go from lab to global products.
And so this is what we are going to do for India. Thank you.
Now, let me just switch gears and go to Professor Antoine Petit. You have been the chairman and CEO of CNRS, France. CNRS, as you know, operates at a scale that most research organizations can only imagine. So, two questions. First, what structural shifts do national research and funding agencies need to make to support an interoperable scientific ecosystem that can sustain AI research beyond short-term pilots? And the added question: is there a need to build an AI-for-science platform as a mega-science facility?
So thanks for this invitation. Yes, two words about CNRS. CNRS in French means Centre National de la Recherche Scientifique, and probably you don’t need an AI translator to understand that it means National Center for Scientific Research. And it’s true that we’re a big institution. We employ more than 35,000 people, among which 30,000 scientists, and we cover all fields of science. And clearly, AI has opened a new era in science, in some sense, because AI is not only an accelerator of existing techniques; it forces us to imagine new ways to do science. Just to illustrate this, if you look at materials science, the classical way is roughly: first you define new materials, and then you study the properties of these materials.
Now you say, I would like to have a material with such properties, and then, thanks to AI, you will build the material, with high probability that it will verify these properties. So in some sense, you see, it’s not just a global acceleration. It’s a reversal, in some sense, of the way to do science. And this opens a new era in which you need really to have talents, of course, but you also need cooperation between different sciences. And that’s probably a challenge for an old institution, if I may, like CNRS. We were organized classically by science. We cover all sciences, including the humanities and social sciences. But you see that with AI, you really need new ways for scientists to cooperate.
And this means that, as usual, the key point is talents. And it means that we have to build ways to push people to interact. That’s why we created, some years ago, a virtual center called AI for Science, Science for AI. We have to create some kind of virtuous loop between, in some sense, the producers of AI, mathematicians and computer scientists, and the consumers of AI, who can come from every discipline. But the trick is that these producers will not simply produce tools or software to be used by consumers; the consumers will, in some sense, bring new attempts at new ways to do research.
And that’s clearly something we try to do. And, of course, in addition, we absolutely need computing facilities at the highest level, even if we also try, as a lot of people do, to work towards more frugal AI, so that the carbon footprint does not stop the development of this AI. And so that’s clearly a challenge for a center like CNRS, but I know that it is a challenge all over the world. And probably a key point is to really start from scientific use cases in order to, as I said, rethink the way to do science. So do we need to have a platform for that? I don’t know. We clearly need to have cooperation.
That’s absolutely key. And at CNRS, we have a long tradition of cooperation with India, and with DST in particular. And clearly, from my point of view, the very pragmatic way I feel India approaches AI can be an example for us. You really try to apply AI for your citizens. And in some sense, for science, I think that the process should be the same: we should start from very pragmatic scientific questions in different fields and see, thanks once again to cooperation between data scientists, computer scientists, mathematicians and colleagues from the other fields, how we can apply AI. But AI for science also has some risks. In particular, you can produce a lot of papers thanks to AI.
And it’s not clear whether these papers are right or not. And in some sense, we can lose all our time producing false papers with AI and then refereeing these papers also with AI. And that’s a difficulty we all face. I think that none of us has a solution right today. But let us be optimistic and think that AI for science, once again, will allow us to make progress and to discover new results, but also new ways to access these results. In particular, there are right now fascinating applications of AI to mathematics, a bit frightening in some sense, because new results have been obtained in mathematics without the help of any human. Does it mean that AI will replace scientists?
Okay, so do you think AI will replace scientists, or will it act as a co-scientist or a hybrid scientist? So let me just introduce, I think,
Professor Joelle Pineau. You have an academic background, and you are now a chief AI officer, so you have worked in industry as well. So just your take. …the properties of new crystals. And in this particular case, once you’ve done the ranking, you take your top-ranked candidates, and you still need to run them through a wet lab to verify the properties. Your mathematical model has some imperfections, some approximations, some errors. But by having the ability to rank the candidate solutions, you cut down the search time drastically. In the old days, you had a list of possible solutions, and you had to test them one by one in the lab, using your intuition about the order in which to test them.
But now you have a ranking algorithm that tells you in what order to test them. So, for those of you who remember the web pre-PageRank, where the search time to find a website of interest was incredibly long, all of a sudden you had a good ranking algorithm, and it was a complete game-changer for retrieving information. And now it’s a complete game-changer in terms of finding candidate solutions to problems with AI. And so this process that I described for this one case applies across all sorts of other areas, whether it’s biology, whether it’s mathematical theorems, and so on and so forth. So this is not like magic. There is an organization to how you take the data, how you use it in a generative model, how you do the ranking, and then how you verify your solutions.
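The generate-rank-verify loop just described can be sketched very schematically. Everything below is an invented stand-in: the "predicted score" plays the role of a learned surrogate model and the "lab test" plays the role of wet-lab verification, not any real system.

```python
import random

def predicted_score(candidate):
    # Stand-in for a learned surrogate model's predicted figure of merit
    # (deterministic per candidate so the sketch is reproducible).
    random.seed(candidate)
    return random.random()

def expensive_lab_test(candidate):
    # Stand-in for wet-lab verification: the model's prediction plus noise,
    # mimicking imperfect agreement between model and lab.
    score = predicted_score(candidate)
    random.seed(candidate * 7)
    return score + random.uniform(-0.1, 0.1)

# 1. Generate a large pool of candidate solutions.
candidates = list(range(10_000))

# 2. Rank them by the model's predicted score (cheap).
ranked = sorted(candidates, key=predicted_score, reverse=True)

# 3. Verify only the top-k in the "lab" (expensive), instead of testing
#    all 10,000 one by one as in the pre-ranking workflow.
top_k = ranked[:20]
verified = [(c, expensive_lab_test(c)) for c in top_k]
best = max(verified, key=lambda pair: pair[1])
print(f"Tested {len(top_k)} of {len(candidates)} candidates; best: {best[0]}")
```

The better the surrogate model, the fewer candidates need the expensive verification step, which is exactly the acceleration described above.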
And the verification process changes depending on what the domain is. In some cases, the better your model of the data (and we hear a lot about world models), the better its ability to predict the properties of the system, and that means you can accelerate the discovery further: you get a better ranking, and you have to take fewer solutions to the lab. And so that’s just to give you a sense of how to use it in practice, to make this a little bit more concrete for people. Thank you. Now let me come to Dr. Irakli Beridze. Irakli leads the United Nations Interregional Crime and Justice Research Institute’s Centre for AI, where he manages one of the first UN programs dedicated to AI research.
So, Irakli, what is your take on, you know, the risks versus benefits that, in your experience, AI for science can potentially pose, and on the points, you know, that other speakers have raised?
Thank you very much. Thank you for the question, and thanks to the organizers for putting this together and inviting me to the panel. It’s a real pleasure to share the panel with the distinguished speakers who spoke before me. I will give some reflections on what we are doing and on how we’re looking at the discoveries of science, including social science, how that translates into policy developments in some of the United Nations streams, and how we are working with that. So I’m leading a center for artificial intelligence and robotics for one of the UN agencies, called UNICRI. And our mandate is anything related to AI: crime prevention, criminal justice, rule of law, human rights, and now AI literacy.
The center itself opened in 2017 in The Hague in the Netherlands, and we have a global mandate supporting law enforcement agencies all over the world to use AI in a responsible way. We develop specialized toolkits and policy frameworks for that. We also support investigators in using AI to solve concrete crimes. And at the same time, we are assessing risks, how criminals and malicious actors can use artificial intelligence, and how we can support global frameworks to ensure that AI is used in a beneficial way and risks are mitigated properly. So this is the type of framework we are working within. A couple of points now, starting from the broad side, from the United Nations.
Obviously, the UN just approved a scientific advisory board. This is an extremely positive development. And just an hour ago, there was a panel about science related to AI governance and how crucial it is, especially for policy makers and the broader audience, to understand what we are actually trying to govern. What we are hoping is that the Scientific Advisory Board is going to do just that, and quoting the Secretary-General of the United Nations, who said that policy should be as smart as the technology it aims to guide: it is so true, and right now there are quite a lot of misconceptions and disconnects in that sense. Now a little bit about law enforcement and how we are looking at it.
There are a number of things, and a lot of aspects, that could be touched upon. Several years ago, when I started the center itself and we started our programs, especially on the responsible use of AI by law enforcement, most law enforcement agencies were not using AI. We are talking about back in 2018; they didn’t even know what the tools were, and we had a really small handful of examples here and there. And now, last summer, we conducted one of our regular global meetings on AI for law enforcement, this one hosted in Brazil, and we had so many use cases that we didn’t actually know what to showcase. Right?
On the one hand, this is a really good development. Law enforcement needs to use AI, and it needs to solve problems. Right now, without AI tools, the vast amount of data that exists cannot be interpreted or put to use; but at the same time, it has to be done in a responsible way. So what we are doing is developing specialized toolkits for the responsible use of AI, and that involves multi-stakeholder dialogues. We bring scientists there, we bring law enforcement agencies, governments and academia, to put together those findings and frameworks so that this can be applied directly in policy. So India is one of the pilot countries right now.
We have five countries where this toolkit has been implemented: India, Kazakhstan, Nigeria, Oman and Brazil. A couple of days ago we had a meeting at the Central Bureau of Investigation, and we understood that a lot of progress has already been made in the implementation of this particular project. At the same time, we have launched a scientific project on how to ensure that the public trusts the use of AI by law enforcement, and in a few weeks we are going to issue policy recommendations and the report which comes out of it, which is again a very crucial form of governance of AI in that particular field where AI is being used.
AI has been used by law enforcement, but the public has a fear of it, and a misunderstanding, or perhaps a right understanding, of how it is being used and applied in reality. So all of this is happening there. Thank you.
Thank you, all the panelists. I think before we open up, I just had one quick question, not in any order, but just for Dr. Pineau. I had this question for you since you made a very important point about AI being looked at as an instrument. Now, you know, one question I had is that there is this reproducibility crisis in science. So what do you think? Do we need any standard or methodology so that, you know, AI-generated discoveries are considered as real or as reliable as conventional ones?
I do appreciate the question. I’ve been quite concerned about reproducibility, more generally in the field of AI, for a number of years, starting at around 2018, and I have published quite a few papers specifically on this topic of reproducibility. I’ll keep it very, very short. I do think this is an issue. I also think AI can be an instrument to accelerate the reproducibility of scientific findings, because specifically in those cases the question is already there, and often there is a candidate methodology, so that means we can apply the tools of AI, using reasoning methods and generative methods, to accelerate reproducibility. We’ve looked at doing that and running reproducibility challenges; I’ve run an annual reproducibility challenge around some of the AI conferences. So I think there’s a lot of opportunity there.
I would emphasize that there are two ingredients that are necessary, which are often associated with discussions of responsible use of AI. The first is transparency: to facilitate reproducibility, it helps to have the artifacts of the scientific process be publicly available. The second is evaluations: just reproducing a method without being very specific about how you specify the criteria can be difficult. So I think by spending some time on transparency and evaluation, we can really facilitate this process.
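The two ingredients named here, released artifacts and pre-declared evaluation criteria, can be made concrete in a few lines. Everything below (the metric values, the tolerance, the stub "released model") is invented for illustration; it is not any actual challenge protocol.

```python
# Illustrative reproducibility check. The evaluation criterion
# ("re-run matches the reported metric within a pre-declared tolerance")
# is stated explicitly up front rather than left implicit.
def reproduces(reported_metric, rerun_metric, tolerance=0.01):
    return abs(reported_metric - rerun_metric) <= tolerance

def released_model_accuracy(seed):
    # Transparency: a stub for re-running a publicly released artifact;
    # small seed-dependent variation mimics run-to-run noise.
    return 0.873 + (seed % 3) * 0.001

reported = 0.874                      # the value claimed in the paper
reruns = [released_model_accuracy(s) for s in range(3)]
outcomes = [reproduces(reported, r) for r in reruns]
print(f"{sum(outcomes)}/{len(outcomes)} re-runs within tolerance")  # 3/3
```

The point of the sketch is that once the artifact is available and the criterion is explicit, "did it reproduce?" becomes a mechanical check rather than a judgment call.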
Okay. Amit, your…
Yeah, so I think we’ve gotten great things, like productivity and the other things that Kali from Cohit mentioned, out of using very large models trained on arbitrary data. But we plan to bring to India something very unique. From the very beginning, in fact, when I had a chance to talk to the Prime Minister, we said that India needs to make its mark in a new form of AI. And here I get the chance to explain exactly what we are doing. Instead of using a big model as an instrument or partner, we are developing models that are very specific. We call them compact custom neurosymbolic models, such that we solve a specific problem deeply.
IRO has taken the topics of healthcare, sustainability and environmental science, and pharma as initial domains. And recently in pharma, there is a company called BenevolentAI, which had FDA approval of a new rheumatoid arthritis drug that was developed with the use of a knowledge graph and deep learning. So in our case, we want to create a specific model for a specific problem. And neurosymbolic means that we can make the models explainable, safe, aligned and grounded, with deeper reasoning options and planning and so on. And so I think this is an alternative model for AI that is likely to come up and would solve problems deeply, very specifically, with high value.
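The division of labor in such a neurosymbolic setup can be illustrated with a toy sketch: a symbolic knowledge graph enforces hard domain constraints, while a neural model ranks the survivors. Every name and rule below is invented, and the "neural" scorer is a hard-coded stub rather than a trained network.

```python
# Toy neurosymbolic sketch. Explainability comes from the symbolic step:
# every rejection can cite the exact violated fact in the graph.
knowledge_graph = {
    ("drug_A", "interacts_with", "enzyme_X"),
    ("drug_B", "contraindicated_for", "condition_Y"),
    ("drug_C", "interacts_with", "enzyme_X"),
}

def symbolically_valid(drug, condition):
    # Hard rule: reject any drug the graph marks as contraindicated.
    return (drug, "contraindicated_for", condition) not in knowledge_graph

def neural_score(drug):
    # Stand-in for a learned model's affinity score.
    return {"drug_A": 0.91, "drug_B": 0.95, "drug_C": 0.62}[drug]

condition = "condition_Y"
candidates = ["drug_A", "drug_B", "drug_C"]
admissible = [d for d in candidates if symbolically_valid(d, condition)]
best = max(admissible, key=neural_score)
# drug_B has the highest raw neural score (0.95) but is ruled out
# symbolically, so the grounded answer is drug_A.
print(best)  # prints "drug_A"
```

The symbolic filter is what makes the outcome groundable and auditable: the neural score alone would have picked a candidate the knowledge graph forbids.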
Okay. Just quickly, I wanted to ask you this question: do you think AI for science can act as a bridge to solve problems in some of the priority sectors, like climate resilience, agriculture or energy, particularly for countries which have limited experimental facilities?
I have two hours, right? Yes. No, no. Clearly, as I said before, AI will play a key role, in particular because it has this ability to treat a huge amount of data. I said before that we are also consumers of AI. If I look at the domains that produce the largest amounts of data, it’s not at all mathematics or computer science; it’s particle physics and astronomy, and they need new techniques based on AI to treat this data properly. But coming back to North-South relations, as you said, I’m convinced that we need cooperation. We live in a period where sovereignty has become a buzzword. But sovereignty does not mean, from my point of view, isolation. We need to collaborate.
We need to share. We need to develop open science and open software. And clearly this is not in opposition with the will for sovereignty. And clearly, to be brief, I think that we need to start from use cases, either use cases coming from civil society or use cases coming from science. And we, as developed countries... as you know, France has a particular history with Africa, and for a long time we tried to explain to African people what they need. Now we have understood, at least I hope, that the main point is to understand what they need and to try to develop cooperation in order to fulfil those needs. So thank you.
Actually, you made an important point about responsible AI. What do you think, you know, about shared global ethics for AI, so that AI-driven scientific breakthroughs are governed by some kind of shared ethical framework?
Yes. Okay. Yes. Thanks a lot. So there are many, many things happening at the moment in the world. On the one hand, we have the global digital divide, where a lot of countries are investing in the technology and advancing, including in education and scientific breakthroughs. And then you have quite a large portion of the world which is either staying behind or may have the potential to stay behind. For example, right now only half of the world has either AI or digital strategies and has governmental spending or allocations for that. The other half doesn’t. So that digital divide is very dangerous, and there are numerous calls for how to minimize it. And at the level of the United Nations, there are many types of streams, but I don’t think it’s enough, and I think a lot more has to be done.
And hopefully, through scientific breakthroughs in AI and some shared platforms and shared collaboration, that divide can be bridged and everyone can benefit. And when I see the title of this AI Impact Summit, I cannot agree more, and it cannot resonate more: welfare of all, happiness for all. AI should certainly benefit all, and not a selected few. And I think that summits like this, and hosting a summit in the Global South, should give a renewed impetus for doing all of that. Thank you very much.
Thank you very much. Now, since we are running out of time, we just have time for two quick questions. So we can take one from here. Yes, please, go ahead.
So my question is for Dr. Pineau and Dr. Sheth. You know, I work at the intersection of AI and synthetic biology. Google DeepMind released AlphaFold into the public domain, and then they announced a newer version for drug discovery which they have chosen to keep private. So it’s very interesting that the foundational model in fundamental science was released into the public domain, but the one which has commercial applications in drug discovery has been kept private. My question is: do you see this as a trend, where scientific foundation models, as far as they relate to fundamental science, will be released open source, but if they are fine-tuned for commercial applications, they will be kept private?
Do you see this as a trend, and what do we do about that, Professor Sheth, in India?
Of course I can’t speak to DeepMind’s strategy; that belongs to them. I’ve been in deep disagreement with their open-sourcing strategy for many years, respectfully so. I do think that the circulation of scientific assets and ideas is absolutely for the benefit of all. And I will say it is possible to go against that trend. In 2023, I was responsible for a language model called Llama. At the time, the industry was against open-sourcing large language models. We went against that: we open-sourced the Llama 1 model, Llama 2, Llama 3. Today we’re looking at over 3 billion downloads of this family of models. So it’s possible to see disturbances to those trends, and I think, specifically in the field of scientific research, there’s so much more to be gained by sharing assets and sharing ideas than by keeping them closed.
But that takes courage, that is going against the grain and it takes vision.
I want to express deep admiration for that approach and for the trend that you started in making models open source. India has to develop its own models. So we just had a whole day yesterday with the pharma industry; they are our partners, and with the access to information and the data they can provide, we will develop our own model for drug discovery. We are ourselves developing a very large pharma knowledge graph; we have already developed a decent one now, and we will be training our own model with deep pharma and drug-related knowledge, in our own version. Thank you.
So, just one last question we will have at the end. Just be brief, I think 30 seconds, and then I will have one of the panelists answer in another 40 seconds.
My question is…
Yeah, go ahead.
My question is: are there any government guidelines for responsible global AI?
Anyone want to answer this? Right.
So there are numerous guidelines on the responsible use of AI in many different domains. From our side, from the angle of the UN where I am working, we developed not only guidelines but a practical framework on the responsible use of AI in law enforcement, and law enforcement is probably one of the most sensitive applications of artificial intelligence. That toolkit, that practical framework, is now unveiled and working, and it has been tested in many countries; as I mentioned, India is one of the first countries implementing it, and that is very admirable. Thank you.
Thank you very much. With this, I think our time is up and we have to close the session. I would like to thank all the panelists. Thank you. Thank you all. I would just like to give away the mementos for the panel discussion. Thank you. Thank you.
“A partnership between H‑Company and St James Hospital in Bangalore was signed during the summit, and a collaboration between North France Invest and the TIAB was also announced.”
Source [S6] explicitly mentions the signature between H-Company and St James Hospital and the partnership between North France Invest and the TIAB, confirming these specific agreements.
“France now ranks among the world’s top three AI ecosystems (San Francisco, New York and Paris).”
While the ranking is not verified in the knowledge base, the source provides context that France hosts more than 1,100 AI startups and is actively doubling the number of AI scientists and engineers, underscoring its strong AI ecosystem.
“India trains hundreds of thousands of AI engineers each year, giving it the second‑largest developer community in the world.”
Source [S118] reports that India produces about 500,000 AI engineers annually, confirming the scale of India’s AI talent pool referenced in the broader discussion of AI ecosystems.
The panelists largely converged on the importance of trust, collaboration, and the complementary strengths of France and India for AI advancement. Disagreements centered on the mechanisms to achieve trustworthy AI (cultural prerequisite vs. architectural embedding vs. proof‑based testing), the openness of AI models (open‑source versus proprietary), and the handling of AI‑generated scientific outputs (risk of false papers versus reproducibility frameworks). These divergences are substantive but not antagonistic, reflecting different professional lenses (policy, engineering, research) rather than fundamental conflict.
The overall level of disagreement was moderate: while there is clear consensus on high‑level goals (trusted AI, the France‑India partnership, societal impact), the speakers differ on implementation pathways and policy nuances. The implication is that coordinated action will require reconciling these approaches, for example by integrating regulatory compliance, technical safeguards, open‑source incentives, and reproducibility standards, to build a unified, trustworthy AI ecosystem across both nations.
The discussion coalesced around the central premise that trust is the prerequisite for AI scale. Arun's opening claim framed trust as the linchpin, and each subsequent speaker deepened this premise from a different angle: technical architecture (Neelakantan), quantum‑AI standards (Valerian), security and sustainability (David), organizational culture (Sandeep), real‑world Indian examples (Tanuj), measurable metrics and ethics (Raj Reddy), ecosystem productisation (Amit), paradigm‑shifting scientific methodology (Antoine), reproducibility practices (Joelle), and global governance (Irakli). These pivotal comments acted as turning points, steering the dialogue from abstract enthusiasm to concrete frameworks, metrics, and policy, and ultimately reinforced the summit's goal of forging a trusted, scalable AI partnership between France and India.
Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.