Scaling Trusted AI: How France and India Are Building Industrial & Innovation Bridges
20 Feb 2026 17:00h - 18:00h
Session at a glance
Summary
This transcript captures discussions from the AI Impact Summit, a collaborative event between France and India focused on building trusted AI partnerships and advancing AI for scientific discovery. The summit featured high-level participation from Prime Minister Modi and President Macron, highlighting the strategic importance of Franco-Indian cooperation in artificial intelligence.
Estelle David from Business France opened by showcasing the strong French AI delegation of about 100 companies across sectors like quantum computing, cybersecurity, and green tech. She emphasized that several concrete partnerships were signed during the summit, including agreements between French and Indian companies in engineering automation, space technology, and healthcare. Julie Huguet from LaFrenchTech noted that Paris now ranks as the third-largest AI ecosystem globally after San Francisco and New York, crediting the previous AI Summit in Paris for helping structure France’s innovation landscape.
The main panel discussion focused on trusted AI as the foundation for scaling artificial intelligence adoption. Industry leaders from Tata Communications, Thales, HCL Technologies, Dassault Systèmes, and quantum computing startup Quandela shared their perspectives on building trust through transparency, explainability, data governance, and end-to-end security. They emphasized that trust cannot be a “bolt-on” feature but must be architectural and foundational to AI systems.
A second panel on AI for Science featured discussions on how artificial intelligence is revolutionizing scientific discovery by accelerating research timelines and enabling new methodologies. Speakers addressed the importance of international collaboration, the need for reproducibility in AI-generated discoveries, and concerns about maintaining scientific integrity while leveraging AI tools. The summit concluded with calls for responsible AI development that benefits all nations and bridges the global digital divide.
Keypoints
Overall Purpose
This transcript captures a multi-session AI Impact Summit focused on strengthening Franco-Indian cooperation in artificial intelligence, with particular emphasis on trusted AI development, AI for science, and building bridges between French technological expertise and Indian scale and innovation capacity.
Major Discussion Points
– Franco-Indian AI Partnership and Collaboration: The summit showcased extensive cooperation between France and India, featuring over 100 French companies, strategic partnerships signed during the week (including Dacia Technology–GT Solved, ExoTrail–Dhruva Space, and H-Company–St. John’s Hospital), and efforts to combine French deep tech excellence with Indian scale and engineering talent.
– Trusted AI as Foundation for Scale: A central theme emerged that trust is essential for AI adoption at scale, with panelists defining trust through multiple dimensions including explainability, predictability, data lineage, governance, security, and compliance with regulations like EU AI Act and India’s DPDP.
– AI for Scientific Discovery and Research: Extensive discussion on how AI is transforming scientific methodology – from traditional hypothesis-driven research to AI-enabled reverse engineering (defining desired properties first, then creating materials), with emphasis on the need for reproducibility, verification, and responsible use in scientific breakthroughs.
– Democratization and Global Equity in AI: Multiple speakers addressed the digital divide and the need to ensure AI benefits reach underserved populations, including rural communities, developing nations, and “people at the bottom of the pyramid,” with calls for multilingual AI systems and accessible interfaces.
– Institutional Innovation and Talent Development: Discussion of new models for AI research institutions (like India’s IRO), the need for indigenous research capacity, and strategies to retain and develop high-end AI talent domestically rather than losing it to international migration.
Overall Tone
The discussion maintained a consistently optimistic and collaborative tone throughout, characterized by mutual respect between French and Indian participants. The speakers demonstrated enthusiasm for technological possibilities while acknowledging challenges responsibly. The tone was professional yet warm, with frequent expressions of gratitude and partnership. There was a notable shift from technical discussions in early panels to more philosophical and policy-oriented conversations in later sessions, but the collaborative spirit remained constant throughout all sessions.
Speakers
Speakers from the provided list:
– Estelle David – Representative from Business France, involved in organizing French AI delegation and partnerships
– Julie Huguet – Director of LaFrenchTech Mission, supports growth of French startups in France and abroad
– Moderator – Session moderator (multiple instances, likely different moderators for different sessions)
– Arun Sasheesh – Associate Partner and Country Director, TNP Consultants; Panel moderator
– Neelakantan Venkataraman – Vice President and Global Business Head, Cloud AI and Edge, Tata Communications
– Valerian Giesz – Co-Founder and CEO of Quandela (photonic quantum computing company)
– David Sadek – VP Research, Technology and Innovation, Global CTO AI and Quantum Computing, Thales
– Sandeep Kumar Saxena – Chief Growth Officer, HCL Technologies
– Tanuj Mittal – Senior Director Customer Solution Experience, Dassault Systèmes
– Raj Reddy – Professor, founding director of the Robotics Institute at Carnegie Mellon University, 1994 Turing Award winner
– Abhay Karandikar – Professor, Secretary of Department of Science and Technology, Chair for AI for Science Working Group
– Irakli Beridze – Head of Center of AI and Robotics, UNICRI (United Nations Interregional Crime and Justice Research Institute)
– Antoine Petit – CEO and Chairman, CNRS France (Centre National de la Recherche Scientifique)
– Joelle Pineau – Chief AI Officer (company not specified in transcript), academic background
– Amit Sheth – Founder, Indian AI Research Organization
– Audience – Various audience members asking questions during Q&A sessions
Additional speakers:
– Saloni – Session coordinator/moderator (mentioned briefly when handing over to Arun Sasheesh)
– Ekta – Session coordinator (mentioned when introducing Professor Karandikar)
Full session report
This transcript captures a comprehensive AI Impact Summit that served as a pivotal moment in Franco-Indian technological cooperation, bringing together over 100 French companies and distinguished leaders from both nations to explore the future of artificial intelligence. The week-long summit, which featured high-level participation from Prime Minister Modi and President Macron, represented far more than a diplomatic gathering—it established a concrete framework for combining French deep tech excellence with Indian scale and innovation capacity.
Strategic Franco-Indian Partnership and Concrete Outcomes
The summit’s opening presentations by Estelle David from Business France (the trade and investment agency) and Julie Huguet from LaFrenchTech revealed the substantial scope of collaboration. The French delegation encompassed diverse sectors including quantum-ready photonics, secure edge AI, mobility systems, cybersecurity, digital twins, and green technology. This wasn’t merely a showcase but resulted in tangible partnerships: Dacia Technology and GT Solved signed strategic agreements in engineering automation, ExoTrail and Dhruva Space concluded a contract for 14 satellite propulsion systems, and H-Company partnered with St. John’s Hospital in Bangalore on healthcare applications, a partnership announced by President Macron himself.
The complementarity between the two nations became a recurring theme throughout the summit—France offering deep tech excellence and scientific rigour, whilst India provides unprecedented scale with its 1.4 billion population and 200,000 startups. This partnership model represents an approach of combining complementary strengths rather than competing for dominance.
Trust as the Architectural Foundation for AI Scaling
The summit’s central insight emerged from the high-level panel discussion on trusted AI, moderated by Arun Sasheesh from TNP Consultants: trust is not merely a desirable feature but the fundamental enabler of AI adoption at scale. Sasheesh’s opening observation that “trust is the only way to scale” set the tone for a sophisticated exploration of what trust means in practical terms.
Neelakantan Venkataraman from Tata Communications provided crucial historical context, explaining how trust requirements have evolved as AI systems moved from proof-of-concept pilots to production environments. His definition—”trust means I have your back and I will not fail you”—established that trust must be foundational and architectural rather than a bolt-on feature.
The technical requirements for trustworthy AI emerged through multiple perspectives. Valerian Giesz from photonic quantum computing startup Quandela outlined five pillars: traceability, predictability, verifiability, security, and accountability. Dr. David Sadek from Thales, drawing from decades of experience in critical systems, emphasised that “trust is not a label, it’s not a promise, it’s a proof,” establishing four pillars: robustness, cybersecurity, explainability, and responsibility.
Tanuj Mittal from Dassault Systèmes connected these technical requirements to real-world outcomes, using India’s UPI payment system as a powerful example where trust enabled massive scale: 21 billion transactions, amounting to some 30 lakh crore rupees in value, demonstrating how trust can drive adoption even among digitally inexperienced users.
AI for Scientific Discovery: A Paradigm Revolution
The AI for Science panel, moderated by Professor Abhay Karandikar (Secretary of the Department of Science and Technology and chair of the AI for Science Working Group), revealed that artificial intelligence is not merely accelerating existing scientific methods but fundamentally transforming the nature of scientific inquiry itself. Professor Antoine Petit from CNRS France (which employs 35,000 people including 30,000 scientists across all fields of science) articulated this transformation most clearly: traditional science involved defining materials and then studying their properties, whilst AI-enabled science allows researchers to specify desired properties and then design materials to meet those specifications.
This paradigm shift represents what Petit called a “reverse” approach to science, moving from discovery-based to design-based methodologies. Professor Joelle Pineau from Meta provided a concrete framework for understanding this transformation, describing AI as a ranking algorithm that dramatically reduces search times in scientific discovery. Rather than testing candidate solutions sequentially based on intuition, researchers can now rank possibilities algorithmically and focus experimental resources on the most promising options.
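Pineau's ranking framing can be illustrated with a toy sketch: a cheap surrogate model scores every candidate in a large pool, and only the top-ranked few proceed to (expensive) laboratory testing. The scoring function and candidate encoding below are hypothetical placeholders for illustration, not anything presented at the summit.

```python
import random

def surrogate_score(candidate):
    # Stand-in for a learned model that predicts how promising a
    # candidate material/molecule is (hypothetical scoring function).
    return sum(candidate) / len(candidate)

def rank_candidates(candidates, budget):
    # Rank the whole pool by predicted promise, then spend the
    # experimental budget only on the top-ranked candidates,
    # instead of testing each one sequentially.
    ranked = sorted(candidates, key=surrogate_score, reverse=True)
    return ranked[:budget]

random.seed(0)
# A pool of 1,000 hypothetical candidates, each encoded as 4 features.
pool = [[random.random() for _ in range(4)] for _ in range(1000)]
shortlist = rank_candidates(pool, budget=10)
print(len(shortlist))  # 10 experiments instead of 1000
```

The search-time reduction comes from replacing one-at-a-time experimental trial and error with a single ranking pass over the full candidate space.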
Dr. Amit Sheth, founder of the Indian AI Research Organization (IRO), advocated for compact custom neurosymbolic models that solve specific problems deeply rather than relying on general large models. His approach emphasises explainability, safety, and alignment—qualities essential for scientific applications where understanding the reasoning process is as important as the results. IRO focuses specifically on healthcare, sustainability, environmental science, and pharma sectors, aiming to create the ecosystem conditions that have historically driven talent migration from India to Silicon Valley.
Addressing Global Digital Divides and Democratisation
A sobering theme throughout the summit was the recognition that AI’s benefits remain unevenly distributed globally. Professor Raj Reddy, founding director of Carnegie Mellon’s Robotics Institute and Turing Award winner, highlighted this challenge starkly in his keynote, noting that whilst the summit assumed participants were AI-enabled, people in villages have no knowledge of computers or AI and risk being left behind entirely. His vision for multilingual artificial general intelligence and “3T computers” (teraflop computational power, terabyte memory, terabit bandwidth) represents an ambitious attempt to democratise AI access.
Reddy also emphasized the concept of “human manner” AI, a human-centric approach that he noted was introduced by Prime Minister Modi. He highlighted the work of Indian initiatives Sarvam and BharatGen in developing multilingual AI solutions as examples of progress in this direction.
Irakli Beridze from UNICRI provided global context, revealing that only half the world’s countries have AI strategies or governmental allocations for AI development. This digital divide poses risks not only for equitable development but for global stability and cooperation. The UN’s development of responsible AI frameworks is being piloted in five countries: India, Kazakhstan, Nigeria, Oman, and Brazil, representing one approach to ensuring AI benefits reach underserved populations whilst maintaining ethical standards.
Institutional Innovation and Capacity Building
The summit revealed significant efforts to build indigenous AI research capacity across different models. CNRS’s virtual centre for “AI for Science, Science for AI” emphasises cooperation between AI producers and consumers across disciplines. IRO focuses on creating comprehensive support from research through to commercialisation, including IP creation, licensing, and startup incubation.
Sandeep Kumar Saxena from HCL Technologies provided a practical example of organisational transformation, describing how his entire organisation moved to AI-driven operations where voice queries replace Excel spreadsheets and PowerPoint presentations. HCL showcased seven AI solutions designed for enterprises, citizens, and governments. His emphasis on leaders embracing AI first—”if you have to embrace AI, it starts from the top”—highlighted the importance of organisational transformation alongside technical implementation.
Responsible AI Governance and Global Cooperation
The governance discussions revealed both progress and persistent challenges in developing responsible AI frameworks. Irakli Beridze’s work with law enforcement agencies demonstrated how AI governance can move from abstract principles to practical implementation. The UN’s toolkit for responsible AI use in law enforcement, being piloted in the five countries mentioned above, provides a model for translating ethical principles into operational guidelines.
Beridze quoted the UN Secretary General’s observation that “policy should be as smart as the technology it aims to guide,” capturing the challenge facing policymakers who must develop technical sophistication matching the technologies they aim to govern.
However, significant challenges remain unresolved. The reproducibility crisis in AI-generated scientific discoveries lacks established standards or methodologies for validation. These challenges require new institutional mechanisms and international cooperation frameworks.
Technological Approaches and Future Directions
The technical discussions revealed divergent approaches to AI development that reflect deeper philosophical differences about AI’s future. The debate between general large models and specialised compact models represents more than a technical choice—it reflects different visions of how AI should be deployed and controlled. Raj Reddy’s emphasis on personal sovereign edge models that operate without cloud connectivity prioritises privacy and autonomy, whilst others advocate for ecosystem approaches that leverage cloud infrastructure and partnerships.
The quantum computing perspective, represented by Quandela’s work with photonic quantum computers, introduced additional complexity by emphasising the need to break down walls between the quantum and AI communities. Their release of the MERLIN framework for benchmarking quantum machine learning applications represents an attempt to build shared baselines and reproducible results across these emerging technologies.
Joelle Pineau’s advocacy for open-sourcing scientific models, citing the success of the Llama models with 3 billion downloads, highlighted the tension between democratisation and commercial interests, where fundamental science models remain public whilst commercially applicable versions may become private.
Summit Sponsors and Organization
The summit was co-organized by IFCCI (Indo-French Chamber of Commerce and Industry) and supported by key sponsors including Platinum sponsors CMA CGM and Total; Gold sponsors BNP Paribas, Capgemini, and Schneider Electric; and Silver sponsor MBDA, demonstrating significant private sector investment in Franco-Indian AI cooperation.
Implications for Global AI Development
The AI Impact Summit demonstrated that successful AI development requires more than technical excellence—it demands institutional innovation, international cooperation, and sustained attention to equity and access. The Franco-Indian partnership model, combining complementary strengths rather than competing for dominance, offers a potential template for other international collaborations.
The summit’s emphasis on trust as the foundation for scale provides a framework for understanding why some AI applications succeed whilst others fail to achieve widespread adoption. The UPI example demonstrates that when trust is established through transparent, reliable, and beneficial operation, even the most digitally inexperienced users will embrace new technologies.
Perhaps most significantly, the summit revealed that AI for science represents not just an acceleration of existing research methods but a fundamental transformation in how scientific inquiry is conducted. This transformation requires new institutional structures, collaboration models, and governance frameworks that are only beginning to emerge.
The path forward requires continued attention to the digital divides that risk leaving significant populations behind, whilst simultaneously pushing the boundaries of what AI can achieve in scientific discovery, economic development, and social benefit. The Franco-Indian partnership, with its combination of deep tech expertise and massive scale, represents one promising approach to meeting these dual challenges of innovation and inclusion.
Session transcript
We were also very proud yesterday to welcome the different leaders who came for the summit, and especially Prime Minister Modi and President Macron, to come to the pavilion, discover the companies, and speak with our companies. So as you see, through this week, the French AI delegation was actually more than what you are seeing on the pavilion. Altogether, it was about 100 French companies who came. And actually, when you meet them, you can find them in different sectors like quantum-ready photonics, secure edge AI, mobility systems, cybersecurity, digital twins, and green tech. And all of them are convinced that AI is the next frontier. So now, just to share with you what is making this week very special.
Actually, as with what I said, you can see that it was very intense, that’s for sure, but it’s not only intensity. As you will see, there are also a lot of results achieved, and results with real partnerships, real signatures, and real commitments between our two countries. I would just name a few for AI. Maybe the first, with Dacia Technology and GT Solved, who signed a strategic partnership on Monday evening in Bangalore at the French consulate during the French AI night, and that really shows the strengthening of Franco-Indian cooperation in engineering automation and intelligence. Thank you. A second one in a different sector, between ExoTrail and Dhruva Space, who signed a major contract in the space industry to deliver 14 satellite propulsion systems, which is also a very strong symbol of the cooperation between France and India in terms of space.
Another signature between H-Company and St. John’s Hospital. And a final one that I can mention is actually a partnership between Nord France Invest and T-Hub, which will create new bridges between one of Europe’s most dynamic industrial regions and one of India’s most powerful innovation ecosystems. So as you can see, when we see all these signatures, and I’m not just talking about AI, you can see that the dynamism between France and India is very strong. But all this wouldn’t have been possible without the strength of our collective network, and Business France, the trade and investment agency, is really proud to have collaborated very closely with different partners: definitely LaFrenchTech, and thank you Julie for the long-standing partnership supporting the French startups and for bringing all these startups here to India; Numeum, the leading French digital and tech association, helping to structure and mobilize the presence of French AI champions in India; also some other partners, Yuja Advisory, Achoo; but also the co-organizer of this event and this panel at the main summit, the Indo-French Chamber of Commerce and Industry, IFCCI.
I’m still in my… So thank you, thank you to all of you. Now we are actually arriving at today’s session, where we are gathering the most influential leaders shaping the future of AI. So I won’t be long, but we are really honored to welcome Julie Huguet, Director of the French Tech Mission. Also Arun Sasheesh, Associate Partner and Country Director for TNP Consultants. Neelakantan Venkataraman, Vice President and Global Business Head, Cloud, AI and Edge, from Tata Communications. Valerian Giesz, Co-Founder and CEO of Quandela. Dr. David Sadek, VP Research, Technology and Innovation, Global CTO AI and Quantum Computing, from Thales. Sandeep Kumar Saxena, Chief Growth Officer from HCL Technologies. And finally, Tanuj Mittal, Senior Director Customer Solution Experience from Dassault Systèmes.
So we’ll be really happy to hear your experience. And before I conclude, just two thanks also to our partners, because this event has also been possible thanks to them. Our Platinum sponsors, CMA CGM and Total. Our Gold sponsors, BNP Paribas, Capgemini, and Schneider Electric. And the Silver sponsor, MBDA. Again, thank you very much, all of you. Thank you to our co-organizer, IFCCI, and I wish you a fruitful session. Maybe just before I end, also a big thanks to the teams, the different teams, the Business France teams, but all the French teams all together, who worked like crazy to make this week possible.
applause Thank you very much, Estelle. We now move forward to our keynote address. It is my pleasure to invite Ms. Julie Huguet, Director of LaFrenchTech. Julie leads one of the world’s most dynamic innovation ecosystems, LaFrenchTech, representing thousands of deep tech companies and scale-ups shaping Europe’s technological leadership. Julie, over to you. applause
Thank you. Good morning, everyone. Thank you. I’m Julie Huguet, Director of the French Tech Mission, so we support the growth of French startups in France and abroad. I’m truly delighted to discover the tech ecosystem here in India, a country that trains around 1.5 million engineers every year. I think it’s the highest number in the world, so I’m very impressed. The AI Impact Summit is an opportunity to create more bridges between France and India, and exactly one year ago, actually, we hosted the AI Summit in Paris. That moment helped us, helped our ecosystem to structure itself. It was the opportunity to attract investment, to unlock talent, to accelerate the creation of French startups. Today, the French tech ecosystem is strong and ambitious.
According to Dealroom, the top three AI ecosystems globally are now San Francisco, New York, and Paris. We are very proud of it, and we are really sure that the AI Summit helped us to build this strong ecosystem. Across France, AI is becoming a pillar of our industrial transformation. We already have major European leaders such as Mistral AI or H-Company. And I’m convinced that the AI Impact Summit here in Delhi will be as valuable for India as it was for us. For the French tech ecosystem, this week in India was of course a great opportunity to showcase French innovation. But it was also an opportunity to deepen our partnership with India. Beyond business, I’m truly convinced that we share common values: trustworthiness, low environmental footprint, positive impact for humanity.
We support innovation when it reinforces our economies, but also when it brings real progress for humanity. Of course, we are committed to making the world a better place for all of us. Innovation only makes sense when it serves the greatest number. And to give you a concrete example, the French President Macron announced yesterday that H-Company and St. John’s Hospital in Bangalore have started a collaboration to make hospitals more efficient and to contribute to saving thousands of lives. In healthcare, in agriculture, climate, and many other sectors, Franco-Indian partnerships are key for innovation with real impact. This is why I was really happy the whole week to be here with outstanding French startups, companies already working with India, like Estelle told us a bit earlier, and others ready to build strong and strategic partnerships here.
And thank you. And maybe I will introduce a few of them. Agri-Co is transforming agriculture through digital tools that connect farmers directly to markets. White Lab Genomics uses artificial intelligence to accelerate gene therapy development. Quandela is building scalable quantum technologies that will shape the future of computing. And H-Company develops advanced AI agents capable of computer use, to perform complex tasks autonomously, just like a human would. For these innovations to become global leaders, international development is key. And we all know that the world is changing. Economic alliances are evolving. We see it with Canada, Latin America, Gulf countries, and obviously here in India. Today, India represents a scale of 1.4 billion people. 200,000 startups.
It’s huge. France represents deep tech excellence, scientific force, industrial capability. And I think this complementarity is powerful. In France, we like to schedule meetings weeks in advance. In India, we learned to be a bit more flexible. And honestly, innovation also requires agility, and perhaps a bit of Indian wisdom. That’s what we learned as well this week. And it was, like Estelle said, a very important week for the startups who came with us. So I wish you all a good session and a great day. And thank you for being here with us this morning.
Thank you so much, Julie. We will now move to our high-level panel discussion, where leaders from telecom, quantum, industrial AI, cloud infrastructure, and enterprise digital transformation will reflect on how our two countries can jointly accelerate trusted AI across sectors. I am pleased to introduce our moderator for this session, Mr. Arun Sasheesh, Associate Partner and Country Director, TNP Consultants. Joining Arun on the panel are an exceptional group of leaders: Neelakantan Venkataraman, Vice President and Global Business Head, Cloud, AI and Edge, Tata Communications. Valerian Giesz, Co-Founder and CEO, Quandela. Dr. David Sadek, Vice President, Research, Technology and Innovation, Global CTO AI and Quantum Computing, Thales. Mr. Sandeep Kumar Saxena, Chief Growth Officer, HCL Technologies. Tanuj Mittal, Senior Director, Customer Solution Experience, Dassault Systèmes. With that, ladies and gentlemen, it is my pleasure to hand over the session to our moderator.
Thank you, Saloni. Good morning, everyone. It’s actually a pleasure and a privilege to be part of this summit and to moderate such an esteemed panel. I would like to start by thanking Business France, IFCCI, and the AI Impact Summit organizers for giving us the opportunity to discuss something that is very important: trusted AI. So maybe I’ll start with what happened here yesterday. Our Prime Minister talked about “human manner”, the concept that he introduced. Our French President talked about scaling, and he used UPI, the Indian payment system, as a good example of scale. And if you really think about it, there is a large element of trust involved in it. The way that we in India accepted UPI means we trust it.
And when we trust things, scale is possible. So usually when people talk about topics such as trust or safety, there’s a bit of pessimism at times, talking about challenges. But in this particular session, I’d like to be more optimistic and present trust as the only way to scale. If you want the large corporations, the banks, the governments to adopt AI, they need to trust us. And only when these organizations adopt AI can we really achieve scale. So I’d like to set the tone with that comment. And maybe, you know, in the last five years, especially after COVID, we have been facing changes quite rapidly, right?
I mean, things are moving from one thing to another. We all started our careers, and today we are talking about AI. So a lot of evolution in our lives as well. So I want to start from that point: introduce yourself, but also tell us the evolutions that you have gone through, and how do you define trust? Maybe we’ll start with you, Neel.
Thank you. A very warm good morning to all of you, and thank you, Business France, for having me here. It’s a pleasure to be here talking to all of you, and hopefully we’ll have a nice interaction. So just to introduce myself, I head the cloud business for TataCom, which includes general-purpose cloud, now AI cloud, edge, and dedicated private clouds for our enterprise customers. We are an international company: 80% still comes from India, and 20% comes from outside of India. As part of our cloud business, we did have a large AI/ML offering. And about four years back, when suddenly the transformer architecture came onto the scene, we didn’t know about it at all.
So when it came up, you know, we thought, what is this new architecture which has come up, and how is it going to impact us? And OpenAI and ChatGPT came up. And then we started thinking how we’re going to apply this to our businesses internally and also how we’re going to offer it as a service to our customers. So our journey has been a journey of learning a lot in the last three years, I would say. All of us are learning, and it’s been pretty fast-paced. It’s been pretty steep in technical terms. Through the organizational levels, right from the CEO to the bottom-most, we had to learn what it will take for this new world to adopt Gen AI, how we adopt Gen AI within the company, and how we adopt Gen AI outside and offer it to our customers.
So tremendous scale of changes, and the potential for innovation for our customers and for the company. So we established an AI COE within the company about three and a half years back. We had a lot of pilots which were going on within the company, and now they are in production. And similarly for our customers in the enterprise world, and beyond enterprise, for government and institutions which work very closely with government on citizen-scale projects, all of us have seen that, right? So truly in the last five years, it’s moved from, I would say, POCs and pilots to now production. And production at an entry level, I would say; scale is yet to be achieved.
It’s production in the sense that, okay, there is a return on investment in the enterprise context and there is a reasonable outcome for citizen-scale projects, and therefore we should start putting it into production and then, of course, scale it. And scaling means that trust has to be put on steroids. So let me talk about trust now. I would describe trust, in very simple words, as: I have your back and I will not fail you. That’s trust; beyond that, there’s nothing. So when we deploy these systems, the stack, and then the use cases and the applications, inherently, trust has to be a foundational element.
It cannot be a bolt-on on top of what we have built. It has to be built in at every layer. And trust has also evolved within AI systems over the last five years. It started off as a good-to-have, because with a POC or pilot you are not really exposing it to the end users in a big way; it was in a closed user group. But now it has moved to being foundational; it’s more architectural in nature. Every element of the architecture needs to have trust built in. And from a regulatory point of view, trust has also evolved. Earlier it was all soft guidance on trust, saying that you need to be ethical, you need to have transparency. But now it is baked into the regulatory policies and requirements, whether it is the DPDP Act, which has been operationalized in India, or the EU AI Act, which is already operational.
So now it is in black and white. And from a technology point of view, as I said, trust is foundational and architectural: whether you have explainability built in for the outcomes, whether the behavior of the system is predictable and explainable. You should be able to explain it; it should be auditable. For the data which is fed into the models, the training, the inferencing, and the outcomes that result, you need to have very clear data lineage, and you need end-to-end governance. We talked about edge computing, about billions of devices which could be inferencing at scale, and therefore, across whatever happens in the cloud and whatever happens at the edge, the entire workflow and process has to have end-to-end visibility in terms of governance. And finally, resiliency is also trust: it should not break. So from Tata Communications’ point of view, when we talk about trust being the bedrock and foundational element of AI, it will scale while you put it into production.
We mean that at every layer. At the infra level, we build in trust components, including zero-trust networking, because networking is the invisible layer which carries data across AI platforms. At the software and platform layers, we have advanced guardrailing technology, data lineage, data governance models, and end-to-end data pipelining and management. So I’ll just hand it back to you. Long answer, sorry for that.
No, no, not at all. It’s very important. And, you know, for us, Tata is synonymous with trust, so I have to mention that. Well, being a French company, I know about Quandela. But would you like to talk about Quandela, your evolution, and how you define trust from a quantum computing perspective? Thank you
very much. Yeah, so maybe I will just introduce Quandela a little bit. It’s a startup coming from a CNRS lab; we use CNRS technology to build photonic quantum computers. We are a full-stack company developing software and hardware. And now we partner with industries like Thales to move quantum from the lab to industry, to the real world, and to deploy systems. As a CEO, trust is a key pillar in our roadmap, because we need to build reliable systems, and we need to demonstrate compliance and security in order to scale. That’s very important for us. So when you ask what trust means in my vision, and I’m an engineer, basically, it’s easy.
First, traceability. Traceability, because we need to trace the systems, the models, and the data that we use for AI. Even for quantum, we use quantum artificial intelligence, we develop quantum machine learning, and for all of this it’s important to trace the results and to get reproducible runs. The second is predictability: you need to know where the limits of the models are, and where the failures are as well, and this is why it’s important to investigate this. Verifiability is the third, because we need to benchmark the performance. Actually, we are at this step now: at Quandela we released a framework called MERLIN for machine learning.
It’s used to benchmark applications and performance on quantum computers using AI techniques, and to run stress tests on the applications. Fourth, security. And the fifth pillar is accountability: how to make sure that we have clear ownership along the value chain of AI on quantum computing, between hardware providers, software providers, and certification providers. We need clear ownership of everything. And with all of this together, we will be able to build trust for the end users, and we will be able to scale. That’s it for me. Thank
you, Valérian. And Dr. David, you are in charge of AI and quantum computing at Thales, both evolving topics. How do you see this, and what is trust for you? You have multiple topics in hand.
Hello. We have a team doing what we call friendly hacking, which actually mounts friendly attacks on our own algorithms to identify their breaches and vulnerabilities, and to propose countermeasures. And by the way, this team won a challenge from our MOD, the French MOD, two years ago, because the team succeeded in retrieving sensitive data which were used to train the system. The third pillar is explainability of our systems. If you have a digital copilot in a cockpit recommending that a pilot make a left turn in 45 miles, for example, the pilot should be entitled to ask why she or he should do that, especially if she or he had in mind to do something different. And the system should be able to answer “because there is a threat, there is a thunderstorm,” and not “because layer number three of the neural net was activated at 30%.”
Okay? And finally the fourth pillar, last but not least, is what we call responsibility, and responsibility is actually twofold. One stream is compliance with ethics principles, laws, and regulations. As you know, in Europe we have the AI Act, and Thales also issued a digital ethics charter a few years ago, which comes in ten commitments; we are really working to achieve it, and it is on our strategic and business roadmap now. The second stream is about the full carbon footprint and energy consumption. We have teams working on frugal AI, to minimize the volume of data used to train systems, for example; this is minimizing the footprint of the AI technology itself. And the complement of this is what we call AI for green: how to use AI to minimize the footprint of applications, like working on optimizing the trajectories of aircraft, for example, to minimize the condensation trails generated by the aircraft.
So just to conclude this first part, I would say that trust actually is not a label. It’s not a promise. It’s a proof. Things have to be proved in our business. Thank you.
Thank you, David. Sandeep, coming to you: we are in the service industry, and our whole operation is built on relationships and trust. So how are you coping with these new challenges, with new technologies coming up? What’s your take on this?
Thank you. Thank you for inviting me here. It’s a very valid question, and I will not answer it in a very technical way, because I’m sure all of you have covered all the aspects around technology, architecture, and governance. My name is Sandeep. I have been in London for the last 24 years, and I’m moving to India next month to accelerate the India business. When I was there, I was managing the European business for HCLTech; we’re just about a $15 billion company providing services. And then I took this job of growth markets, which is India, the Middle East, Africa, and France. It gave me a very different perspective, because I’m managing about a $1.5 billion business.
And now here I come into a completely different world, and I started like a startup. I built my own systems based on AI; like we say, before you preach to anybody, you learn yourself. So all my systems today for the growth markets business, which is what I lead, are built on AI: my inside sales engine, my business analytics, my forecasting, everything is based on AI. I have gone from analytics to reasoning, and I am hoping I will reach predictability in some way, because the agents are still not predictive; they are still reasoning. But that’s where I started. If you look at my business, every person in my sales team and my delivery teams is certified on AI. I started it myself; if you have to embrace AI, it starts from the top, from the leader. And we talked about trust: it starts from you, if you as a leader imbibe it. There is no Excel sheet in my world; there is no PowerPoint in my world. You ask a question using voice, you get an answer on a dashboard; I can show you right here. Of course, I will not tell you my forecast for this quarter, but you ask a question and you have it. You ask a question about a company, you get it in two and a half minutes, and that is the power of AI. Earlier we had a lot of people trying to dig data from here and there; that doesn’t exist anymore. In two and a half minutes you ask for the market approach or anything that you want. So, in my view, imbibe it yourself. It is an iterative process. You do not build trust just like that; you build it over a period of time. You have to be patient, you have to learn, you have to help somebody else learn, and that learning process continues over a period of time, and then you build trust.
So, my advice to anybody: the reason I moved to India is very exciting. It’s a land of opportunity, and it’s coming home. And we are in NCR, which we call Delhi; it is the home of HCLTech. We have a very unique proposition, an advantage in India and globally, which is what we call AI products. Very proudly, they are made in India, for India and for the world: HCL Software. We have the expertise of our global services, working with a lot of customers across the globe. So it gave me an opportunity to bring AI products and services together into what I call AI solutions. In this AI Impact Summit we have launched seven solutions, not just for enterprises but for citizens and for governments as well. You are more than welcome at Hall 4, 4.5; if you have not visited, please go and see what we are talking about. These are solutions which will help us protect ourselves: fraud detection systems, compliance systems, training systems, skilling systems, not just for enterprises. So to me, AI is about people, progress, and planet. Thank you
Coming to you, Tanuj. Dassault is such a flag bearer of French innovation. How do you see this whole evolution, and what does trust mean at Dassault? Thank you
Arun, and good morning, everyone. I represent Dassault Systèmes, which champions the cause of industrial AI platforms. Now, on this point of trust: the definition, the expectation itself, has evolved over the last several years. Five years back, for example, AI was still in silos, and the definition of trust was mostly centered around the accuracy of the output. You have a model, you feed data, you put in a query; if the results are near your expectation, you are happy. But that is no longer the situation, because of the widespread understanding of AI as a topic, and its adoption as well. Now there are new dimensions which have been added to make it trustworthy, and there are quite a few points I wanted to highlight.
Much of this is already covered by my fellow panelists, but for the sake of clarity, and at the cost of repetition, I will say it again. The first one is, of course, the lineage of the data. The industrial AI platform needs to ensure, by design, that the data being leveraged to solve a problem is ethical, that it has traceability, and that no mischievous data is being leveraged. With that done, when the output comes, it is credible and trustworthy for the people who are going to use it. The second point I wanted to highlight is about people in the loop. We still have a long way to go before we trust a totally automated system without human intervention. We still like to have, at least at the governance level, people in the loop who will ensure that the processing and the output given by the machines are indeed in line with the objective for which the system was created.
One hundred percent trust in machines alone is still a little far off, so people in the loop is definitely what builds trust for all of us. Another aspect, particularly from an industrial AI perspective, is to simulate the result of an AI model in a real-world environment. For example, when you design a car, you design it in context. The car has to run on roads, and the condition of roads changes from place to place. If you really need to trust a car which was, for example, developed elsewhere in the world but is being used in India, people will trust it if that car is at least tested in the real-world environment of India as a context.
You now have virtual twins not only of the product; at Dassault Systèmes you also have virtual twins of the environment. So you can simulate how that car will behave when it actually gets on the road in Indian conditions. That builds trust. Another example is the checks and balances in the model itself, so that it does not let you make a mistake, whether the mistake is unintentional or deliberate. What kind of compliance have you already built into the model? If that is robust, the chances of getting a wrong or broken output are far lower, and that builds trust. And the last point I wanted to highlight: unless an AI application is end-to-end, from conceptualization to decommissioning, if it is still in silos, the overall output is less trustworthy. Imagine instead a situation where, right from conception up to decommissioning, you have been able to simulate the whole process multiple times, prove it, streamline it, and then launch it.
That builds a lot of trust for the people who are actually going to build that system in the physical world and, subsequently, the people who are going to use it. So these are some of my views. Arun, back to you.
Thank you, Tanuj. I think we have some more time, but I’m glad that all of you, in fact, touched upon the deep strength of French innovation and technology, and the two stalwarts of Indian scale and speed, in a way. So maybe I quickly want everybody’s point of view on the mindset change you are looking for, to build trust and the democratization of AI at scale. What is the change of mindset you are looking for? Neela, quickly?
I would say the mindset change we have to move towards is a mindset of an ecosystem, because we can’t do it all. For example, we partner with Thales on many of the security components we provide as part of a solution. So it’s an ecosystem play, and we need to work very closely to make sure the trust is not broken, and that the trust architecture is maintained across the ecosystem.
Valérian?
I think, on my side, the priority should be to break the walls between quantum and AI and build a huge community. This is also why at Quandela we released MERLIN, a framework which aims to do exactly that. Because that’s the point: trust comes from benchmarking and reproducibility, not from one-off claims. MERLIN has been released with one very pragmatic first mission: establish trust within the AI community, among AI developers, using quantum computers, a brand-new technology which is now available. We have actually published some reproductions of papers; we are here to show quantum machine learning results in a controlled environment. We are turning scattered claims into a shared baseline, building a community, and inviting people to use them.
So, yeah, my main point is: let’s break the walls and share what we have learned, in order to establish trust all together and build a common baseline, especially between France and India. In France, we can develop the technologies; in India, we can scale the technologies. So we have an ecosystem and a community.
What’s your take, David?
Well, I would say that in France we have spent decades building things which are really supposed to work in contexts where failure is forbidden, with companies such as Thales, Dassault, and Airbus. It has taken us, you know, decades to do this, and so we are living in a world of certification, of regulation, of mathematical proofs. Trust has to be proved; this is very important. As I said earlier, when you deal with critical systems, you cannot just declare trust and say, okay, please trust us; you have to prove the trust. And I used to say that trust is gained by the drop and lost by the bucket, so this is very important. And India has been doing something equally extraordinary, I would say, in record time, with this digital infrastructure at billion-human scale, which is really extraordinary. I think that the combination of depth and scale between France and India is really the very challenge here.
And to keep trust within this challenge is probably the way to go to make people adopt AI at large scale. Thank you.
Sandeep, for you. Can you just say one word?
Yeah. Just be open -minded and learn to adopt change. Adaptability. Very simple. There is nothing else.
And you, Tanuj?
Yeah, quickly: the scale is directly proportional to the trust we build in the system, for sure. And I’ll build on the example you gave initially, which our Prime Minister also quoted: UPI. It was launched in 2016, and last year in December it clocked some 21 billion transactions, translating to some 30 lakh crore rupees worth of money transacted. Today, UPI is used even by the most digitally illiterate person in India; he doesn’t hesitate to put his trust, and his money, in the system. So if you build the trust, the scale comes automatically.
Thank you, gentlemen. I think we have almost finished our time. Thank you very much; I encourage you to meet with the speakers, and thank you very much for your time.
Thank you once again to our moderator and to all our distinguished panelists. I would now invite all the speakers to please remain on stage for a brief memento presentation by Mr. Mark Vialmopillier, and for a group photo. Ladies and gentlemen, please join me in applauding our speakers as we take this moment together. Thank you. He was the founding director of the Robotics Institute at Carnegie Mellon University, and he was instrumental in helping to create the Rajiv Gandhi University of Knowledge Technologies in India to cater to the educational needs of low-income gifted rural youth. He and Edward Feigenbaum won the 1994 Turing Award, sometimes known as the Nobel Prize of computer science, for their exemplary work in the field of artificial intelligence.
I now request Professor Raj Reddy to take the stage to deliver his keynote.
phone in your pocket, it was listening to you and using it to guide your discussion. I’m hoping we’ll create user-friendly interfaces so that when I speak in Telugu, you can hear it in Hindi, and when you speak in English, I can hear it in my preferred language. And I think we can get there very quickly; it’s being done already. There are two startups in India, Sarvam and Bharat Jain, both trying to do it. My request is that we create a quantitative, measurable metric for when we have achieved this goal. What that means to me is that it’s not enough. People will already say we have multilingual intelligence, we have systems that will speak, and you can speak in one language.
But it’s not usable, especially if you’re a person in a village and you don’t even know where to begin. So the first issue is: how do we create a multilingual AGI, and how do we make sure we have measurable progress? There’s a statement: if you can’t measure it, you can’t improve it. We need to improve the existing models, and they will probably need more computation, more memory, and more bandwidth. Fifty years ago, we created a thing called the 3M computer: a MIP, a megabyte, and a megapixel. Today, we should create 3T computers: a terabyte of memory, a teraflop of computational power, and a terabit of bandwidth. That’s what we should aim for. That means every one of us should have in our pocket an AI companion that actually runs what we call foundation edge models.
Right now, many of the models on the edge are like three billion or nine billion bytes; we’re off by a factor of 100, and we need to get there. And India can... where am I? How am I doing for time? Anyway, it used to be that there would be a timer here, but whenever it is time, tell me and I’ll stop. Okay, so that’s one. The second important point I want to make is about people at the bottom of the pyramid. Most of the talks I’ve heard, most of the expectations, assume you are AI-enabled and can actually make effective use of AI. I come from a little village; I guarantee you not one of them knows anything about computers or AI, and they are simply not going to benefit from this whole technology. So what we need to do, just like the agricultural revolution of M.S. Swaminathan, is figure out how to get this technology to people at the bottom of the pyramid.
Again, I’d be happy to talk about any of these for much longer, but we only have a short time. Then, in order to do both of these things, I said we need teraflop, terabyte systems, and what we need are personal, sovereign edge models. Currently, if you talk to anyone, they’ll say we already have access to AI. But it is not private; it is not personal and secure, because these systems are always going to the cloud to access the AI models. As soon as you do that, you have no privacy. In the future, we want systems which are personal, autonomous, and can be used to do things.
So I’m going to talk about cognitive assistants that are always on, always working, always learning. And that is the challenge of how to get there: we have to cut it off from the grid. We cannot let it go to the grid, because then it’s no longer private. So anyway, there is a whole set of issues of that kind. How much time do we have? Anyway, somebody tell me. There are three or four other topics we can talk about. One is: I had a child come and say, if AI is going to teach me and knows everything, why should I go to school? The answer to that will take longer than two minutes, but I only have two minutes.
But you can figure it out. Basically, what we need to do is teach the kid learning to learn using AI, to have a dialogue; learning to think, where you have to teach them critical thinking. Right now, most kids in India don’t even open their mouths in classrooms; they’re afraid. So we need to get over that barrier, let them talk and think and go through critical thinking; and learning to do: you have to learn how to execute. With that, I’m going to stop, but I want to leave you with one other thing which you can figure out. One of the things I remember from the Vedas is Om Shanti Shanti Shanti. Peace.
One of our keynote speakers said that AI-based autonomous weapons are going to destroy the world. That’s a risk. Why don’t we have humane weapons? When a missile is going to hit a hospital or a school, it is easy with AI to discover that and deflect the missile. Why should we even kill the soldiers? They’re innocent; they’re just somebody recruited, and they’re being bombed and killed. We should build humane weapons that will disable rather than destroy. There are lots of very interesting issues of this kind, and we need to think about them. Thank you.
Thank you. Namaskar.
A very good morning, ladies and gentlemen. Our next session is a panel discussion on AI for Science. The panel will be moderated by Professor Abhay Karandikar, Secretary, Department of Science and Technology, who is also the chair of the AI for Science Working Group. I would now request the panelists to please come to the dais. The other panelists for the session are Mr. Irakli Beridze, Head of the Centre for AI and Robotics, UNICRI; Professor Antoine Petit, CEO and Chairman, CNRS, France; Ms. Joelle Pineau, Chief AI Officer; and Mr. Amit Sheth, Founder, Indian AI Research Organization. A very warm welcome again to the panelists. Right, group photograph.
Okay, I request all on the dais to please come forward for a group photograph. We’ll have the photograph for you on your mementos. Thank you, panelists. I now hand it over to our moderator, Professor Abhay Karandikar, Secretary, Department of Science and Technology, to carry forward the panel discussion. Sir, over to you.
Thank you. Thank you, Ekta. Distinguished panelists, colleagues, and members of the global scientific community: we have a very distinguished panel today. It is my pleasure to welcome you to this panel on AI for Science, which we consider a core pillar of our vision for this India AI Impact Summit. Today we stand at the threshold of a new research paradigm, and our goal is not just to witness the AI revolution but to steer it towards a more equitable, inclusive, and transparent future. In today’s AI world, we are moving beyond traditional methods: AI-driven models and automated experimentation have the potential to compress decades of research into months.
The rapid advance of these technologies, however, has not so far been equitably distributed, and that is one challenge; many regions still face significant barriers. But the realm of possibility for using AI for scientific discovery continues to hold a lot of excitement. Today, we are joined by leaders who represent the entire spectrum of scientific innovation: policy makers, institution builders, and people from the governance and national research ecosystems. I look forward to the panelists’ insights on the exciting possibilities in AI for science, and on how we can bridge the digital divide and build a genuinely reciprocal global scientific ecosystem. So with this, I will begin with a few questions.
I will request the panelists to answer; of course, they are free to elaborate on anything else. And then we will open the floor to the audience for interaction. So let me begin with Dr. Amit on the far end. Amit, you have been building IRO as a national-scale institution in India. Can you tell us how this model can help overcome the specific barriers we have identified in this region, such as inadequate compute and fragmented data sets? And I would also like you to elaborate on how we can ensure that AI research conducted in our centers of excellence actually reaches the translational stage, addressing real-world challenges.
If you can take five to seven minutes on this.
Hello. Yeah. Thank you very much, Professor Karandikar. This is a perfect question for me; this is why I’m here. I moved from the USA, after 44 years there, to address exactly the question you asked. Two days ago, I was on another panel, and I asked this question to the audience: if I were the founder of DeepSeek, with all the funding he had and has, could I find those 200 to 250 AI engineers and researchers that he had access to, to build DeepSeek? Out of around 100 people in the audience, three raised their hands, saying, yeah, we might. Of those three, two were students.
So only one, you know, mature person basically thought that we have that, and I think that gives an answer to what we need to do. India is well on its way to growing many people who know something about AI, and they will certainly have the necessary skills. India has been big in IT services, and whatever IT services need, they will be able to supply; the skill set that people have here is adequate for that. But two very important members of IRO’s board, Ajai Chowdhry and Sharad Sharma, have extensively talked about, or lamented, that India has not been a product nation.
India has not made global products; virtually, hardly any global brands have been developed in India. And for that, we need more than skills. We need people at the high end of expertise; that means our own indigenous research capacity, our own ability to train innovatively. And that’s what we need to do. A very common model has been: you do your bachelor’s here, then go outside. Take the example of Aravind Srinivas. He did IIT Madras, then did his PhD at Berkeley. I did mine at Ohio State. Then he worked for three companies: DeepMind, OpenAI, and Google. And then he did his own company.
But that was also in the U.S. We want that to be done here, right? The same ecosystem in which he got trained after leaving India, we want to provide in India. And there are, I think, a lot of things happening. As you know, there is a 40% decrease in Indians going to the United States for studies, and that will continue for a while now; most of you know the reasons. So, first and foremost, IRO is developing an environment to create high-end talent, innovators. And, by the way, IRO’s founders are professors who have graduated nearly 200 top-end PhDs.
So we know how to create that. Secondly, we have created a broad variety of collaborations with various universities, and we are starting to do that with industry. And we are creating significant infrastructure to support IP creation, to license it, or to work with the corporates and startups who will make the products. So the idea is that we’ll co-innovate: we’ll jointly work at IRO with the companies, the startups, the entrepreneurs. We have already lined up a large number of investors, angel, seed, as well as growth-stage; they are all hungry for deep-tech AI startups, and we will provide a comprehensive environment. Now, some of us founders have also done companies.
Three of the four companies I have done are AI companies licensing the research I did at my university. Ramesh Jain has done more companies than I have, and he’s also a co-founder. So we understand the entire pipeline it takes to go from lab to global products, and this is what we are going to do for India. That was it. Thank you.
Now, let me switch gears and go to Professor Antoine Petit. You have been the chairman and CEO of CNRS, France. CNRS, as you know, operates at a scale that most research organizations can only imagine. So, two questions. First, what structural shifts do national research and funding agencies need to make to support an interoperable scientific ecosystem that can sustain AI research beyond short-term pilots? And the added question: is there a need to build an AI-for-science platform, something like a mega-science facility?
Thanks for this invitation. Yes, two words about CNRS. CNRS in French means Centre National de la Recherche Scientifique, and you probably don't need an AI translator to understand that it means National Center for Scientific Research. And it's true that we're a big institution. We employ more than 35,000 people, among which 30,000 scientists, and we cover all fields of science. And clearly, AI has opened a new era in science, in some sense, because AI is not only an accelerator of existing techniques; it forces us to imagine new ways to do science. Just to illustrate this: if you look at materials science, roughly speaking, before, you would define new materials and then study the properties of these materials.
Now you say, I would like to have a material with such properties, and then, thanks to AI, you build the material, with a high probability that it will satisfy these properties. So in some sense, you see, it's not just a global acceleration; it is, in some sense, a reversal of the way we do science. And this opens a new era in which you really need talent, of course, but you also need cooperation between different sciences. And that's probably a challenge for an old institution, if I may, like CNRS. We were organized classically by discipline. We cover all sciences, including the humanities and social sciences. But you see that with AI, you really need new ways for scientists to cooperate.
And this means that, as usual, the key point is talent. And it means that we have to build ways to push people to interact. That's why we created, some years ago, a virtual center called AI for Science, Science for AI. We have to create some kind of virtuous loop between, in some sense, producers of AI, mathematicians and computer scientists, and consumers of AI, who can come from every discipline. But the trick is that these producers will not simply produce tools or software to be used by consumers; the consumers will, in turn, open up new ways to do research.
And that's clearly something we try to do. And, of course, in addition, we absolutely need computing facilities at the highest level, even if we also try, as many people do, to work on more frugal AI, in order not to have a carbon footprint that would stop the development of this AI. So that's clearly a challenge for a center like CNRS, but I know that it is a challenge all over the world. And probably a key point is to really start from scientific use cases in order, as I said, to rethink the way we do science. So do we need a platform for that? I don't know. We clearly need cooperation.
That's absolutely key. At CNRS, we have a long tradition of cooperation with India, and with DST in particular. And clearly, from my point of view, the very pragmatic way I feel India approaches AI can be an example for us. You really try to apply AI for your citizens. And in some sense, for science, I think the process should be the same: we should start from very pragmatic scientific questions in different fields and see, thanks once again to cooperation between data scientists, computer scientists, mathematicians, and colleagues from the other fields, how we can apply AI. But AI for science also carries some risk. In particular, you can produce a lot of papers thanks to AI.
And it's not clear whether these papers are right or not. In some sense, we could waste all our time producing false papers with AI and then refereeing these papers, also with AI. And that's a difficulty we all face. I think that none of us has a solution right today. But it's clearly an issue. Still, let us be optimistic and think that AI for science will, once again, allow us to make progress and to discover new results, but also new ways to access these results. In particular, there are right now fascinating applications of AI to mathematics, a bit frightening in some sense, because new results have been obtained in mathematics without the help of any human. Does it mean that AI will replace scientists?
Okay, so do you think AI will replace scientists, or will it act as a co-scientist, a hybrid scientist? On that note, let me introduce
Professor Joelle Pineau. You have an academic background, and you are now a chief AI officer, so you have worked in industry as well. So, just your take. … the properties of new crystals. And in this particular case, once you've done the ranking, you take your top-ranked candidates, and you still need to run them through a wet lab to verify the properties. Your mathematical model has some imperfections, some approximations, some errors. But by having the ability to rank the candidate solutions, you cut down the search time drastically. In the old days, you had to list the possible solutions and test them one by one in the lab, using your intuition for the order in which to test them.
But now you have a ranking algorithm that tells you in what order to test them. For those of you who remember the web before the PageRank algorithm, when the search to find a website of interest was incredibly long, a good ranking algorithm was a complete game-changer for retrieving information. And now it's a complete game-changer for finding candidate solutions to problems with AI. And this process that I described for this one case applies across all sorts of other areas, whether it's biology, whether it's mathematical theorems, and so on. So this is not magic. There is an organization to how you take the data, how you use it in a generative model, how you do the ranking, and then how you verify your solutions.
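The generate, rank, and verify loop described above can be sketched in a few lines. This is an editorial illustration, not any specific lab's pipeline: the surrogate scorer, the target property, and the candidate list are all invented placeholders standing in for a trained property predictor and real experiments.

```python
import heapq

def surrogate_score(candidate):
    # Hypothetical surrogate model: in practice this would be a trained
    # property predictor; here we score by distance to a target band gap.
    return -abs(candidate["predicted_bandgap"] - 1.5)  # target ~1.5 eV

def rank_candidates(candidates, top_k):
    # Rank all generated candidates by the surrogate score and keep only
    # the top_k for slow, expensive wet-lab verification.
    return heapq.nlargest(top_k, candidates, key=surrogate_score)

def verify_in_lab(candidate):
    # Placeholder for the real experiment; the surrogate is imperfect,
    # so verification remains essential.
    return abs(candidate["predicted_bandgap"] - 1.5) < 0.2

# Toy generated candidates (in practice, output of a generative model).
candidates = [{"name": f"mat-{i}", "predicted_bandgap": 0.5 + 0.25 * i}
              for i in range(10)]
shortlist = rank_candidates(candidates, top_k=3)
confirmed = [c for c in shortlist if verify_in_lab(c)]
```

The point of the sketch is the shape of the loop: a cheap ranking step cuts the number of candidates sent to the expensive verification step, exactly as described for the pre-PageRank search analogy.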
And the verification process changes depending on the domain. In some cases, the better your model of the data, and we hear a lot about world models, the better its ability to predict the properties of the system, the further you can accelerate discovery: you get a better ranking, and you have to take fewer solutions to the lab. That's just to give you a sense of how to use it in practice, to make this a little more concrete. Thank you. Now let me come to Dr. Irakli Beridze. Irakli leads the Center for AI at the United Nations Interregional Crime and Justice Research Institute, where he manages one of the first UN programs dedicated to AI research.
So, Irakli, what is your take on the risks versus benefits that AI for science can potentially pose, which other speakers have also raised?
Thank you very much. Thank you for the question, and thanks to the organizers for putting this together and inviting me to the panel. It's a real pleasure to share the panel with the distinguished speakers who spoke before me. I will give some reflections on what we are doing, how we are looking at the discoveries of science, including social science, and how that translates into policy developments in some of the United Nations streams. So I'm leading a center for artificial intelligence and robotics for one of the UN agencies, called UNICRI. And our mandate is anything related to AI: crime prevention, criminal justice, rule of law, human rights, and now AI literacy.
The center itself opened in 2017 in The Hague, in the Netherlands, and we have a global mandate supporting law enforcement agencies all over the world to use AI in a responsible way. We develop specialized toolkits and policy frameworks for that. We also support investigators in using AI to solve concrete crimes. And at the same time, we are assessing the risks of how criminals and malicious actors can use artificial intelligence, and how we can support global frameworks to ensure that AI is used in a beneficial way and its risks are mitigated properly. So that is the framework we work within. Now a couple of reflections, starting from the broad side, from the United Nations.
Obviously, the UN just approved a Scientific Advisory Board. This is an extremely positive development. Just an hour ago, there was a panel about the science behind AI governance and how crucial it is, especially for policymakers and the broader audience, to understand what we are actually trying to govern. What we are hoping is that the Scientific Advisory Board is going to do just that. Quoting the Secretary-General of the United Nations, who said that policy should be as smart as the technology it aims to guide: it is so true, and right now there are quite a lot of misconceptions and disconnects in that sense. Now a little bit about law enforcement and how we are looking at it.
There are a number of aspects that could be touched upon. Several years ago, when I started the center and we started our programs, especially on the responsible use of AI by law enforcement, most law enforcement agencies were not using AI. We are talking about back in 2018; they didn't even know what the tools were, and we had really a handful of examples here and there. Last summer, we conducted our regular global meeting, AI for Law Enforcement, this time hosted in Brazil, and we had so many use cases that we didn't know what to showcase.
On the one hand, this is a really good development. Law enforcement needs to use AI, and it needs AI to solve problems: right now, without AI tools, the vast amount of data that exists out there cannot be interpreted or put to use. But at the same time, it has to be done in a responsible way. So what we are doing is developing specialized toolkits for the responsible use of AI, and that involves multi-stakeholder dialogues. We bring together scientists, law enforcement agencies, governments, and academia to put those findings and frameworks together so that they can be applied directly in policy. India is one of the pilot countries right now.
We have five countries where this toolkit has been implemented: India, Kazakhstan, Nigeria, Oman, and Brazil. A couple of days ago we had a meeting at the Central Bureau of Investigation, and we understood that a lot of progress has already been made in the implementation of this particular project. At the same time, we have launched a scientific project on how to ensure that the public trusts the use of AI by law enforcement, and in a few weeks we are going to issue policy recommendations and the report that comes out of it, which is again a very crucial form of governance for this particular field where AI is being used.
AI is being used by law enforcement, but the public fears it and has a misunderstanding, or perhaps a correct understanding, of how it is being used and applied in reality. So all of this is happening there. Thank you.
Thank you, all the panelists. Before we open the floor, I had one quick question, not in any order, but for Dr. Pineau, since you made the very important point that AI should be looked at as an instrument. Now, there is this reproducibility crisis in science. So what do you think: do we need standards or methodologies so that AI-generated discoveries are considered as real, as reliable, as conventional ones?
I do appreciate the question. I've been quite concerned about reproducibility, more generally in the field of AI, for a number of years, starting around 2018, and have published quite a few papers specifically on this topic. I'll keep it very short. I do think this is an issue. I also think AI can be an instrument to accelerate the reproducibility of scientific findings, because in those cases the question is often already there, and there is a candidate methodology. That means we can apply the tools of AI, using reasoning methods and generative methods, to accelerate reproducibility. We've looked at doing that and at running reproducibility challenges; I've run an annual reproducibility challenge around some of the AI conferences. So I think there's a lot of opportunity there.
I would emphasize two ingredients that are necessary, and which are often associated with discussions of the responsible use of AI. The first is transparency: to facilitate reproducibility, it helps to have the artifacts of the scientific process be publicly available. The second is evaluation: just reproducing a method without being very specific about how you specify the criteria can be difficult. So by spending some time on transparency and evaluation, we can really facilitate this process.
Okay. Amit, your take?
Yeah, so I think we've gotten great things, like productivity gains and the other points mentioned earlier, out of using very large models trained on arbitrary data. But we plan to bring to India something very unique. From the very beginning, in fact, when I had a chance to talk to the Prime Minister, we said that India needs to make its mark in a new form of AI. And here I get the chance to explain exactly what we are doing. Instead of using a big model as an instrument or partner, we are developing models that are very specific. We call them compact custom neurosymbolic models, such that we solve a specific problem deeply.
IRO has taken healthcare, sustainability and environmental science, and pharma as initial domains. Recently in pharma, there is a company called BenevolentAI that had FDA approval of a new rheumatoid arthritis drug, developed using a knowledge graph and deep learning. So in our case, we want to create a specific model for specific problem-solving. And neurosymbolic means that we can make the models explainable, safe, aligned, grounded, with deeper reasoning options and planning, and so on. So I think this is an alternative model for AI that is likely to emerge and would solve specific problems deeply, with high value.
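One way to picture the "grounded" part of a neurosymbolic setup is to check a neural model's proposed facts against a symbolic knowledge graph before accepting them. The sketch below is an editorial illustration only, with an invented toy graph, a made-up `may_treat` rule, and fake entity names; it does not describe IRO's actual system.

```python
# Toy knowledge graph of (subject, relation, object) triples.
knowledge_graph = {
    ("drug_x", "inhibits", "kinase_a"),
    ("kinase_a", "implicated_in", "rheumatoid_arthritis"),
}

def grounded(triple, kg):
    # A proposed fact is accepted if it is already in the graph, or if it
    # follows from a simple one-hop rule: a drug that inhibits a protein
    # implicated in a disease yields a "may_treat" hypothesis worth keeping.
    if triple in kg:
        return True
    subj, rel, obj = triple
    if rel == "may_treat":
        entities = {t[0] for t in kg} | {t[2] for t in kg}
        return any((subj, "inhibits", p) in kg and
                   (p, "implicated_in", obj) in kg
                   for p in entities)
    return False

# Pretend these came from a neural model's free-form generation.
proposals = [
    ("drug_x", "may_treat", "rheumatoid_arthritis"),  # derivable from the graph
    ("drug_x", "may_treat", "influenza"),             # unsupported
]
kept = [t for t in proposals if grounded(t, knowledge_graph)]
```

The symbolic check gives each accepted fact an explicit derivation path through the graph, which is what makes the model's output explainable rather than a bare prediction.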
Okay. Just quickly, I wanted to ask you this: do you think AI for science can act as a bridge to solve problems in some of the priority sectors, like climate resilience, agriculture, or energy, particularly for countries that have limited experimental facilities?
I have two hours, right? No, no. Clearly, as I said before, AI will play a key role, in particular because it has this ability to treat huge amounts of data. I said before that we are also a consumer of AI. If I look at the domains that produce the largest amounts of data, it is not at all mathematics or computer science; it is particle physics and astronomy, and they need new techniques based on AI to treat this data properly. But coming back to North-South relations, as you said, I'm convinced that we need cooperation. We live in a period where sovereignty has become a buzzword. But sovereignty does not mean, from my point of view, isolation. We need to collaborate.
We need to share. We need to develop open science and open software. And clearly this is not in opposition to the will for sovereignty. To be brief, I think we need to start from use cases, either use cases coming from civil society or use cases coming from science. As you know, France has a particular history with Africa, and for a long time we tried to explain to African people what they need. Now we have understood, at least I hope, that the main point is to understand what they actually need, and to develop cooperation in order to fulfil those needs. So thank you.
Actually, you made an important point about responsible AI. What do you think about shared global ethics for AI, so that AI-driven scientific breakthroughs are governed by some kind of shared ethical framework?
Yes. Okay. Thanks a lot. There are many, many things happening at the moment in the world. On the one hand, we have the global digital divide, where a lot of countries are investing in the technology and advancing, including in education and scientific breakthroughs. And then you have quite a large portion of the world which is staying behind, or risks staying behind. For example, right now only half of the world has AI or digital strategies with governmental spending or allocations to match; the other half doesn't. So that digital divide is very dangerous, and there are numerous calls to minimize it. At the level of the United Nations, there are many streams for this, but I don't think it's enough, and I think a lot more has to be done.
And hopefully, through scientific breakthroughs in AI, and through shared platforms and shared collaboration, that divide can be bridged and everyone can benefit. When I see the title of this AI Impact Summit, I could not agree more: welfare for all, happiness for all. AI should certainly benefit all, and not a selected few. And I think that summits like this, and hosting a summit in the Global South, should give a renewed impetus for doing all of that. Thank you very much.
Thank you very much. Since we are running out of time, we have time for just two quick questions. We can take one from here. Yes, please, go ahead.
My question is for Dr. Pineau and Dr. Kashi. I work at the intersection of AI and synthetic biology. Google DeepMind released AlphaFold in the public domain, and then they announced AlphaFold 3, which is aimed at drug discovery and which they have chosen to keep private. So it's very interesting that the foundation model in fundamental science was released in the public domain, but the one with commercial applications in drug discovery, Google has chosen to keep private. My question is: do you see this as a trend, where scientific foundation models, insofar as they relate to fundamental science, will be released in open source, but if they are fine-tuned for commercial applications, they will be kept private?
Do you see this as a trend, and what do we do about that, Professor Sheth, in India?
Of course I can't speak to DeepMind's strategy; that belongs to them. I've been in deep disagreement with their open-sourcing strategy for many years, respectfully so. I do think that the circulation of scientific assets and ideas is absolutely for the benefit of all. And I will say it is possible to go against that trend. In 2023, I was responsible for a language model called Llama. At the time, the industry was against open-sourcing large language models. We went against that: we open-sourced the Llama 1 model, Llama 2, Llama 3. Today we're looking at over 3 billion downloads of this family of models. So it's possible to see disruptions to those trends, and I think specifically in the field of scientific research there's so much more to be gained by sharing assets and sharing ideas than by keeping them closed.
But that takes courage; it means going against the grain, and it takes vision.
I want to express deep admiration for that approach and for the trend you started in making open-source models. India has to develop its own models. We just had a whole day yesterday with the pharma industry; they are our partners, and with the access to information and data they can provide, we will develop our own model for drug discovery. We are ourselves developing a very large pharma knowledge graph; we have already developed a decent one, and we will be training our own model with deep pharma and drug-related knowledge, our own version. Thank you.
So, just one last question to finish. Please be brief, I think 30 seconds, and then I will have one of the panelists answer in another 40 seconds.
my question is
yeah go ahead
My question is: are there any government guidelines for responsible global AI?
Anyone want to answer this?
There are numerous guidelines on the responsible use of AI in many different domains. From our side, from the angle of the UN where I am working, we developed not only guidelines but a practical framework on the responsible use of AI in law enforcement, which is probably one of the most sensitive applications of artificial intelligence. That toolkit, that practical framework, has now been unveiled; it is working, and it has been tested in many countries. As I mentioned, India is one of the first countries implementing it, which is very admirable. Thank you.
Thank you very much. With this, our time is up and we have to close the session. I would like to thank all the panelists. Thank you all. I would now like to give away the mementos for the panel discussion. Thank you.
Estelle David
Speech speed
118 words per minute
Speech length
742 words
Speech time
374 seconds
Franco‑Indian AI collaboration and partnership outcomes
Explanation
Estelle highlights concrete bilateral partnerships that demonstrate the depth of France‑India AI cooperation, citing specific agreements that reinforce joint innovation and engineering automation.
Evidence
“Actually it’s as you with what I said you can see that was very intense that’s for sure but it’s not only intensity actually as you will see it’s also a lot of results achieved and results with real partnerships real signature and real commitments between our two countries.” [3]. “And the other one is the T -U -B, which is actually a partnership between the two.” [8]. “I would just name a few for the AI just maybe the first with that Dacia technology and GT solved where they signed a strategic partnership on Monday evening in Bangalore at the French consulate during the French AI night and that really shows strengthening of Franco -Indian cooperation and engineering automation in intelligence.” [26].
Major discussion point
Franco‑Indian AI collaboration and partnership outcomes
Topics
Artificial intelligence | The enabling environment for digital development
Julie Huguet
Speech speed
128 words per minute
Speech length
624 words
Speech time
291 seconds
AI summit strengthens France‑India ties and shares values of trust and sustainability
Explanation
Julie describes the AI Impact Summit as a bridge that deepens bilateral ties, builds a strong ecosystem, and showcases French innovation to Indian stakeholders.
Evidence
“The AI Impact Summit is an opportunity to create more bridges between France and India, and exactly one year ago, actually, we hosted the AI Summit in Paris.” [16]. “We are very proud of it and we are really sure that the AI summit helped us to build this strong ecosystem.” [21]. “And I’m convinced that the AI Impact Summit here in Delhi would be as valuable for India as it was for us.” [20].
Major discussion point
Franco‑Indian AI collaboration and partnership outcomes
Topics
Artificial intelligence | The enabling environment for digital development
Arun Sasheesh
Speech speed
124 words per minute
Speech length
652 words
Speech time
312 seconds
Trust as prerequisite for AI scaling and deployment
Explanation
Arun argues that trust is the essential condition for large organisations to adopt AI, positioning trust as the only path to achieve scale.
Evidence
“And only when these organizations adopt AI, we can really achieve scale.” [31]. “and present trust as the only way to scale.” [32]. “If you want the large corporations, the banks, the governments to adopt AI, they need to trust us.” [34]. “And when we trust things, scale is possible.” [35].
Major discussion point
Trust as prerequisite for AI scaling and deployment
Topics
Artificial intelligence | Building confidence and security in the use of ICTs
Neelakantan Venkataraman
Speech speed
156 words per minute
Speech length
1114 words
Speech time
428 seconds
Trust must be built into every layer of the AI stack
Explanation
Neelakantan explains that trust is architectural, requiring explainability, data lineage, governance, and zero‑trust networking across the entire AI infrastructure.
Evidence
“And from a technology point of view, as I said, trust is foundational, it is architectural whether you have explainability built in … you need to have a very clear data lineage, you need to have end to end governance … resiliency is also trust, it should not be broken … it will scale while you put it to production.” [39]. “We meant at every scale at the infra level, we build in some of the trust components, including, you know, zero trust networking, because, you know, networking is the invisible layer which carries data across AI platforms to the, you know, the software layer and the platform layer.” [44]. “So when we deploy these systems, the stack, and then when we deploy the use cases and the applications, you know, inherently, trust has to be foundational element.” [48]. “Every element of the architecture needs to have trust built in.” [45].
Major discussion point
Trust as prerequisite for AI scaling and deployment
Topics
Artificial intelligence | Building confidence and security in the use of ICTs
Mindset and ecosystem shift for democratising AI
Explanation
He calls for moving from siloed projects to an ecosystem where trust is embedded at every scale, encouraging collaborative partnerships.
Evidence
“And scaling means that trust has to be put on steroids.” [40]. “We need to work very closely to make… …make sure the trust is not broken.” [71]. “So it’s an ecosystem play.” [72].
Major discussion point
Mindset and ecosystem shift for democratising AI
Topics
Artificial intelligence | Capacity development
Valerian Giesz
Speech speed
132 words per minute
Speech length
541 words
Speech time
244 seconds
Trust pillars for quantum‑AI
Explanation
Valerian outlines specific trust attributes—trustability, verifiability, and predictability—required for quantum‑AI systems and urges breaking walls between communities.
Evidence
“Trustability because we need to trace the systems, the models, the data that we use for AI.” [49]. “Verifiability is the third one because we need to benchmark the performance.” [51]. “First, trust.” [43]. “It’s trust.” [12]. “So, yeah, my main topic is let’s break the walls and let’s share about what we learned in order to establish trust all together and build a common baseline, especially between France and India.” [25]. “Trust comes from benchmarking and reproducibility and not from one -off charts.” [37].
Major discussion point
Trust as prerequisite for AI scaling and deployment
Topics
Artificial intelligence | Building confidence and security in the use of ICTs
David Sadek
Speech speed
128 words per minute
Speech length
555 words
Speech time
258 seconds
Responsible AI governance and ethical frameworks
Explanation
David stresses that trust must be demonstrably proven through certification, explainability, and a dual focus on compliance and carbon‑aware design.
Evidence
“and so we are living in a world of certification, of regulation of mathematics proofs so trust has to be proved … you have to prove the trust … the combination between depth and scale between France and India is really the very challenge here.” [29]. “The third pillar is explainability of our system.” [61]. “and finally the fourth pillar which is last but not least is what we call responsibility … the first stream … compliance of ethics principles of laws of regulation … the second stream is about the … full carbon footprint and energy consuming … we have teams working on frugal ai to minimize the volume of data … we have also the complement of this is what we call AI for green…” [121].
Major discussion point
Responsible AI governance and ethical frameworks
Topics
Artificial intelligence | Human rights and the ethical dimensions of the information society
Sandeep Kumar Saxena
Speech speed
142 words per minute
Speech length
687 words
Speech time
289 seconds
Trust cultivated through leadership adoption and iterative learning
Explanation
Sandeep describes how AI adoption starts at the leadership level and how trust is built over time through continuous learning and demonstrable outcomes.
Evidence
“so I built all my systems today for growth markets too which is what I lead is built on AI … if you look at my business and every person in my sales team or my delivery teams is certified on AI … it starts from the top, starts from the leader and we talked about trust, it starts from you … you do not build trust just like that you build it over a period of time you have to be patient you have to learn … and then you build trust.” [46].
Major discussion point
Trust as prerequisite for AI scaling and deployment
Topics
Artificial intelligence | Capacity development
Tanuj Mittal
Speech speed
134 words per minute
Speech length
745 words
Speech time
332 seconds
Scale follows trust; UPI example shows mass adoption when trust is established
Explanation
Tanuj links trust to scale, using India’s UPI system as a concrete illustration of how widespread trust drives massive usage even among digitally illiterate users.
Evidence
“The scale is directly proportional to the trust we built in the system, for sure.” [41]. “with each other and today UPI is being used even by the most digitally illiterate person in India he doesn’t hesitate to put his trust in a system with his money so if you build the trust then the scale comes automatically” [42]. “That builds trust.” [4].
Major discussion point
Mindset and ecosystem shift for democratising AI
Topics
Artificial intelligence | Building confidence and security in the use of ICTs
Raj Reddy
Speech speed
113 words per minute
Speech length
950 words
Speech time
502 seconds
AI for scientific acceleration and societal impact
Explanation
Raj emphasizes the need for AI to be inclusive and reach the bottom of the pyramid, advocating for measurable, privacy‑preserving solutions that serve underserved communities.
Evidence
“But basically what we need… to do is essentially teach the kid learning to learn using AI, have a dialogue, learning to think, you have to teach them critical thinking.” [64]. “…people at the bottom of the pyramid… they simply … are not going to benefit from this whole technology so what we need to do is just like the agricultural revolution … we need to figure out a way how we get this technology to people at the bottom of the pyramid.” [80].
Major discussion point
AI for scientific acceleration and societal impact
Topics
Artificial intelligence | Social and economic development
Abhay Karandikar
Speech speed
123 words per minute
Speech length
858 words
Speech time
418 seconds
AI can compress decades of research into months but must be equitable and transparent
Explanation
Abhay highlights AI’s potential to accelerate scientific discovery while insisting on inclusive, transparent, and globally shared ethical frameworks.
Evidence
“You know, in today’s AI world, we are moving beyond traditional methods, where AI‑driven models and automated experimentations have a potential to compress the decades of research into months.” [93]. “but to steer it towards a more equitable, inclusive and transparent future.” [86]. “Actually, you made an important point of the responsible AI. What do you think… about the shared global ethics… AI‑driven scientific breakthroughs are governed by some kind of a shared ethical frame[work]?” [125].
Major discussion point
AI for scientific acceleration and societal impact
Topics
Artificial intelligence | Human rights and the ethical dimensions of the information society
Amit Sheth
Speech speed
130 words per minute
Speech length
1046 words
Speech time
480 seconds
National AI institute can create domain‑specific neurosymbolic models to bridge research‑to‑product gap
Explanation
Amit proposes building compact, custom neurosymbolic models that are explainable and safe, supported by broad university‑industry collaborations.
Evidence
“We call it compact custom neurosymbolic models such that we solve specific problem deeply.” [103]. “And trained, neurosymbolic means that we can make the models explainable, safe, aligned, grounded, with deeper reasoning options and planning …” [104]. “Secondly, we have created a broad variety of collaborations with various universities, and we are starting to do that in industry.” [11].
Major discussion point
AI for scientific acceleration and societal impact
Topics
Artificial intelligence | The enabling environment for digital development
Antoine Petit
Speech speed
135 words per minute
Speech length
1028 words
Speech time
456 seconds
Open‑source versus proprietary AI models
Explanation
Antoine stresses the necessity of cooperation and open science, noting industry resistance to open‑sourcing large models while advocating for shared virtual centers.
Evidence
“We clearly need to have cooperation.” [1]. “And that’s why we created, some years ago, a virtual center called AI for Science, Science for AI.” [112]. “The industry was against open sourcing large language models.” [142].
Major discussion point
Open‑source versus proprietary AI models
Topics
Artificial intelligence | Data governance
Joelle Pineau
Speech speed
171 words per minute
Speech length
836 words
Speech time
291 seconds
Open‑source scientific models accelerate progress and improve reproducibility
Explanation
Joelle argues that open sharing of scientific artifacts and transparent evaluation are key to reproducibility and broader benefit of AI research.
Evidence
“I do think AI can be an instrument to accelerate the reproducibility of scientific findings, because specifically in those cases, the question is already there often.” [94]. “So I think by spending some time on transparency and evaluation, we can really facilitate this process.” [95]. “I would emphasize there’s two ingredients that are necessary, which often are associated with discussions of responsible use of AI.” [120].
Major discussion point
Open‑source versus proprietary AI models
Topics
Artificial intelligence | Data governance
Irakli Beridze
Speech speed
162 words per minute
Speech length
1140 words
Speech time
421 seconds
Responsible AI governance and policy frameworks for law enforcement
Explanation
Irakli outlines the development of specialized toolkits and policy frameworks to ensure responsible AI use in sensitive domains such as law enforcement.
Evidence
“And right now, without AI tools, the vast amount of data which exists there cannot be interpreted, cannot be put in place, but at the same time, it has to be done in a responsible way.” [124]. “We develop specialized toolkits and policy frameworks for responsible use of AI in law enforcement … this toolkit … has been tested in many countries and … India is one of the first country which is implementing it.” [128].
Major discussion point
Responsible AI governance and ethical frameworks
Topics
Artificial intelligence | Human rights and the ethical dimensions of the information society
Audience
Speech speed
166 words per minute
Speech length
158 words
Speech time
56 seconds
Query on government guidelines for responsible global AI
Explanation
An audience member asks whether any governmental guidelines exist for the responsible use of AI at a global level.
Evidence
“my question is, is there any government guidelines for responsible global AI?” [119].
Major discussion point
Responsible AI governance and ethical frameworks
Topics
Artificial intelligence | Human rights and the ethical dimensions of the information society
Moderator
Speech speed
39 words per minute
Speech length
525 words
Speech time
805 seconds
Panel facilitation and framing of trusted AI discussion
Explanation
The moderator introduces the high‑level panel, setting the stage for a dialogue on accelerating trusted AI across sectors in both countries.
Evidence
“We will now move to our high-level panel discussion, where leaders from telecom, quantum, industrial AI, cloud infrastructure, and enterprise digital transformation will reflect on how our two countries can jointly accelerate trusted AI across sectors.” [75].
Major discussion point
Panel facilitation
Topics
Artificial intelligence | The development of the WSIS framework
Agreements
Agreement points
Trust must be foundational and architectural, not a bolt-on feature
Speakers
– Neelakantan Venkataraman
– David Sadek
– Valerian Giesz
Arguments
Trust means ‘I have your back and I will not fail you’ and must be foundational, not a bolt-on feature
Trust is gained drop by drop and lost by the bucket – it must be proved, not just declared
Trust requires explainability, predictability, verifiability, security, and accountability
Summary
All three speakers emphasized that trust cannot be added as an afterthought but must be built into AI systems from the ground up, with specific technical and architectural requirements
Topics
Artificial intelligence | Building confidence and security in the use of ICTs | Human rights and the ethical dimensions of the information society
Trust enables scale in AI adoption
Speakers
– Arun Sasheesh
– Tanuj Mittal
– Neelakantan Venkataraman
Arguments
Trust is the only way to achieve scale in AI adoption – large corporations, banks, and governments need to trust AI systems before they will adopt them at scale
Scale is directly proportional to trust built in systems, as demonstrated by UPI’s success in India
Trust requires an ecosystem approach with partnerships across the value chain
Summary
Speakers agreed that trust is the fundamental enabler of large-scale AI adoption, using UPI as a successful example of how trust leads to massive scale
Topics
Artificial intelligence | Building confidence and security in the use of ICTs | The digital economy
Need for international cooperation and open science approaches
Speakers
– Antoine Petit
– Joelle Pineau
– Julie Huguet
Arguments
Sovereignty doesn’t mean isolation – need cooperation, open science and shared global ethics
Open sourcing scientific models benefits all, as demonstrated by the Llama models with 3 billion downloads
Franco-Indian partnerships are key for innovation with real impact in healthcare, agriculture, and climate
Summary
Speakers advocated for collaborative, open approaches to AI development that benefit global communities rather than isolated national efforts
Topics
Artificial intelligence | The enabling environment for digital development | Human rights and the ethical dimensions of the information society
AI represents a paradigm shift requiring new methodologies
Speakers
– Antoine Petit
– Joelle Pineau
– Abhay Karandikar
Arguments
AI is not just an accelerator but forces new ways to do science, like reverse engineering materials with desired properties
AI acts as a ranking algorithm to cut down search times drastically in scientific discovery
AI for science represents a new research paradigm that can compress decades of research into months
Summary
Speakers agreed that AI fundamentally changes how scientific research is conducted, not just accelerating existing methods but enabling entirely new approaches
Topics
Artificial intelligence | Information and communication technologies for development
Need to address digital divides and ensure equitable access
Speakers
– Raj Reddy
– Irakli Beridze
– Abhay Karandikar
Arguments
Need multilingual AGI and 3T computers (teraflop, terabyte, terabit) to reach people at bottom of pyramid
Digital divide exists where only half the world has AI strategies and governmental allocations
Need to bridge digital divide and build genuinely reciprocal global scientific ecosystem
Summary
Speakers emphasized the importance of making AI accessible to underserved populations and addressing global inequalities in AI access and capabilities
Topics
Artificial intelligence | Closing all digital divides | Information and communication technologies for development
Similar viewpoints
Both emphasized the need for organizational transformation and capacity building, with leaders driving AI adoption and India moving beyond services to product innovation
Speakers
– Sandeep Kumar Saxena
– Amit Sheth
Arguments
Leaders must embrace AI first – entire sales teams certified on AI with voice-driven analytics
Building indigenous research capacity and high-end expertise for product innovation rather than just services
Topics
Artificial intelligence | Capacity development | The enabling environment for digital development
Both emphasized the need for comprehensive frameworks that include human oversight, explainability, and real-world testing for trustworthy AI in critical applications
Speakers
– David Sadek
– Tanuj Mittal
Arguments
Thales implements four pillars: robustness, cybersecurity, explainability, and responsibility including ethics compliance
People-in-the-loop governance and simulation in real-world environments build trust
Topics
Artificial intelligence | Building confidence and security in the use of ICTs | Human rights and the ethical dimensions of the information society
Both highlighted the strategic value of Franco-Indian cooperation, emphasizing complementary strengths and concrete business partnerships across multiple sectors
Speakers
– Estelle David
– Julie Huguet
Arguments
French AI delegation brought 100 companies across sectors like quantum, cybersecurity, and green tech to strengthen cooperation
France and India share complementary strengths – France has deep tech excellence, India has scale of 1.4 billion people
Topics
Artificial intelligence | The enabling environment for digital development | Social and economic development
Unexpected consensus
Privacy through edge computing and local models
Speakers
– Raj Reddy
– Amit Sheth
Arguments
Personal sovereign edge models required for privacy and security without cloud dependency
Need for compact custom neurosymbolic models that solve specific problems deeply rather than general large models
Explanation
Unexpected consensus between a veteran AI researcher and a startup founder on moving away from cloud-based large models toward specialized local models, representing a counter-trend to mainstream AI development
Topics
Artificial intelligence | Building confidence and security in the use of ICTs | Human rights and the ethical dimensions of the information society
Importance of reproducibility and transparency in AI research
Speakers
– Joelle Pineau
– Valerian Giesz
Arguments
Need for transparency and evaluation criteria to facilitate reproducibility in AI-driven science
Trust requires explainability, predictability, verifiability, security, and accountability
Explanation
Unexpected alignment between an industry AI leader and a quantum computing startup founder on the critical importance of reproducibility and benchmarking, suggesting broad recognition of this challenge across different AI domains
Topics
Artificial intelligence | Monitoring and measurement | Human rights and the ethical dimensions of the information society
Ecosystem approach to AI development
Speakers
– Neelakantan Venkataraman
– Antoine Petit
– Irakli Beridze
Arguments
Trust requires an ecosystem approach with partnerships across the value chain
AI for science requires cooperation between AI producers and consumers from different disciplines
UN developing frameworks for responsible AI use in law enforcement across multiple countries including India
Explanation
Unexpected consensus across telecom, research, and governance sectors on the need for collaborative ecosystem approaches rather than isolated development efforts
Topics
Artificial intelligence | The enabling environment for digital development | Building confidence and security in the use of ICTs
Overall assessment
Summary
Strong consensus emerged around trust as the foundational requirement for AI scale, the need for international cooperation over isolation, AI as a paradigm shift in methodology, and the importance of addressing digital divides. Speakers from different sectors and countries aligned on core principles of responsible AI development.
Consensus level
High level of consensus with significant implications for AI governance and development. The agreement across industry, academia, and government representatives suggests a mature understanding of AI challenges and a shared vision for addressing them through collaborative, trust-based approaches. This consensus could facilitate more effective international cooperation and policy coordination.
Differences
Different viewpoints
Open source vs proprietary AI models for commercial applications
Speakers
– Joelle Pineau
– Audience
Arguments
Open sourcing scientific models benefits all, as demonstrated by the Llama models with 3 billion downloads
Concern about selective open-sourcing where fundamental science models are public but commercial applications remain private
Summary
Pineau advocates for open sourcing AI models, particularly for scientific research, citing the success of the Llama models with 3 billion downloads. An audience member expressed concern about the trend where fundamental science models are released publicly while commercially applicable versions remain private, using Google’s AlphaFold strategy as an example.
Topics
Artificial intelligence | Data governance | The enabling environment for digital development
General large models vs specialized compact models for AI development
Speakers
– Amit Sheth
– Joelle Pineau
Arguments
Need for compact custom neurosymbolic models that solve specific problems deeply rather than general large models
AI acts as a ranking algorithm to cut down search times drastically in scientific discovery
Summary
Sheth advocates for developing compact, custom neurosymbolic models that solve specific problems deeply with explainability and safety, focusing on particular domains like healthcare and pharma. Pineau describes AI’s role as a ranking algorithm that works with large models to accelerate scientific discovery across broad applications.
Topics
Artificial intelligence | Information and communication technologies for development
Cloud-based vs edge-based AI systems for privacy and autonomy
Speakers
– Raj Reddy
– Neelakantan Venkataraman
Arguments
Personal sovereign edge models required for privacy and security without cloud dependency
Trust requires an ecosystem approach with partnerships across the value chain
Summary
Reddy argues for completely autonomous edge-based AI systems that are cut off from the cloud to maintain privacy, emphasizing personal sovereign models. Venkataraman advocates for an ecosystem approach that involves cloud and edge integration with partnerships across the value chain, suggesting that complete isolation is not practical.
Topics
Artificial intelligence | Building confidence and security in the use of ICTs | Human rights and the ethical dimensions of the information society
Unexpected differences
Role of human oversight in AI systems
Speakers
– Tanuj Mittal
– Raj Reddy
Arguments
People-in-the-loop governance and simulation in real-world environments build trust
Personal sovereign edge models required for privacy and security without cloud dependency
Explanation
While both speakers focus on trustworthy AI, Mittal emphasizes the continued need for human oversight and governance, stating that ‘100% trust only on machines is still a little far.’ In contrast, Reddy envisions fully autonomous AI companions that are ‘always on, always working, always learning’ without human intervention, suggesting a more automated future.
Topics
Artificial intelligence | Building confidence and security in the use of ICTs | Human rights and the ethical dimensions of the information society
Approach to AI model development and sharing
Speakers
– Amit Sheth
– Joelle Pineau
Arguments
Building indigenous research capacity and high-end expertise for product innovation rather than just services
Open sourcing scientific models benefits all, as demonstrated by the Llama models with 3 billion downloads
Explanation
Unexpectedly, these speakers represent different philosophies toward AI development. Sheth emphasizes building indigenous, self-reliant capabilities and developing India’s own models for specific domains, while Pineau advocates for open sharing and collaboration through open-source models. This reflects a tension between sovereignty/self-reliance and global collaboration approaches.
Topics
Artificial intelligence | The enabling environment for digital development | Capacity development
Overall assessment
Summary
The main areas of disagreement center around fundamental approaches to AI development: open vs proprietary models, general vs specialized AI systems, cloud vs edge computing, human oversight vs automation, and national self-reliance vs global collaboration. These disagreements reflect deeper tensions between different visions for AI’s future.
Disagreement level
Moderate level of disagreement with significant implications for AI development strategies. While speakers generally agree on the importance of trust, transparency, and beneficial AI, they differ substantially on implementation approaches, governance models, and the balance between collaboration and sovereignty. These disagreements could influence policy directions and international cooperation frameworks for AI development.
Partial agreements
Partial agreements
All speakers agree that trust is foundational and requires multiple technical pillars including explainability, security, and accountability. However, they differ in their specific frameworks – Sadek focuses on four pillars for critical systems, Giesz emphasizes five pillars including verifiability and predictability, while Venkataraman stresses ecosystem-wide governance and end-to-end visibility.
Speakers
– David Sadek
– Valerian Giesz
– Neelakantan Venkataraman
Arguments
Thales implements four pillars: robustness, cybersecurity, explainability, and responsibility including ethics compliance
Trust requires explainability, predictability, verifiability, security, and accountability
Trust means ‘I have your back and I will not fail you’ and must be foundational, not a bolt-on feature
Topics
Artificial intelligence | Building confidence and security in the use of ICTs | Human rights and the ethical dimensions of the information society
Both speakers agree on the need for international cooperation and addressing global inequalities in AI access. However, Petit focuses on maintaining open science and cooperation despite sovereignty concerns, while Beridze emphasizes the urgent need to address the digital divide where half the world lacks AI strategies and governmental support.
Speakers
– Antoine Petit
– Irakli Beridze
Arguments
Sovereignty doesn’t mean isolation – need cooperation, open science and shared global ethics
Digital divide exists where only half the world has AI strategies and governmental allocations
Topics
Artificial intelligence | Closing all digital divides | The enabling environment for digital development | Human rights and the ethical dimensions of the information society
Takeaways
Key takeaways
Trust is the fundamental enabler for AI scaling – without trust from corporations, banks, and governments, AI cannot achieve widespread adoption
Franco-Indian AI cooperation represents a powerful complementarity where France provides deep tech excellence and India offers scale (1.4 billion people)
AI for science is creating a paradigm shift from traditional research methods to AI-driven discovery that can compress decades of research into months
Trust in AI systems must be built foundationally across all layers (infrastructure, platform, application) rather than as an add-on feature
The digital divide remains a critical challenge with only half the world having AI strategies and governmental allocations
Open sourcing of scientific AI models benefits global progress, as demonstrated by successful examples like Llama with 3 billion downloads
AI requires an ecosystem approach with partnerships across the value chain rather than isolated development
Personal sovereign edge models are needed to ensure privacy and security without cloud dependency
Responsible AI governance frameworks are being developed and implemented globally, including UN guidelines for law enforcement use
Resolutions and action items
Strategic partnerships signed between French and Indian companies in AI, space, and healthcare sectors during the summit
India identified as pilot country for UN responsible AI toolkit implementation in law enforcement
IRO (Indian AI Research Organization) established to create high-end AI talent and indigenous research capacity in India
CNRS created virtual center called ‘AI for Science, Science for AI’ to foster cooperation between AI producers and consumers
Business France and partners committed to continued collaboration supporting French startups in India
Development of compact custom neurosymbolic models for specific domains like healthcare, sustainability, and pharma
Implementation of AI COE (Center of Excellence) within Tata Communications with pilots moving to production
Unresolved issues
How to effectively bridge the global digital divide where half the world lacks AI strategies
Risk of AI producing false scientific papers and the challenge of verification without clear solutions identified
Whether AI will replace scientists or act as co-scientists – the relationship remains undefined
Reproducibility crisis in AI-generated scientific discoveries lacks established standards or methodologies
Tension between open sourcing fundamental science models versus keeping commercially applicable models private
How to reach people at the bottom of the pyramid who have no knowledge of computers or AI
Challenge of finding sufficient high-end AI engineers and researchers (only 1 out of 100 people could identify 200-250 qualified engineers)
Need for shared global ethics framework for AI-driven scientific breakthroughs remains unaddressed
Suggested compromises
Sovereignty in AI development doesn’t require isolation – countries can maintain sovereignty while engaging in international cooperation and open science
Hybrid approach to AI model development where fundamental science models are open-sourced while commercial applications may remain private
People-in-the-loop governance as a middle ground between full automation and human control in AI systems
Ecosystem partnerships where no single entity tries to do everything – collaboration across the value chain for trust and scaling
Gradual transition from proof-of-concepts to production to scale, allowing trust to be built incrementally
Balance between deep tech excellence (France) and scale capabilities (India) through strategic partnerships rather than competition
Thought provoking comments
Trust is not a label. It’s not a promise. It’s a proof. Things have to be proved in our business.
Speaker
David Sadek (Thales VP Research)
Reason
This comment cuts through the abstract discussions about trust to establish a concrete, actionable definition. It shifts the conversation from philosophical concepts to practical implementation requirements, emphasizing that trust in AI systems must be demonstrable and verifiable rather than merely claimed.
Impact
This statement became a foundational principle that other panelists referenced and built upon. It elevated the discussion from general trust concepts to specific implementation strategies, with subsequent speakers addressing how to actually prove trustworthiness through explainability, auditability, and governance frameworks.
Trust has also evolved within AI system… it started off by, you know, because it was a POC pilot, so you’re not really exposing it to the end users in a big way… But now it’s moved to foundational, it’s more architectural in nature, right? Every element of the architecture needs to have trust built in.
Speaker
Neelakantan Venkataraman (Tata Communications)
Reason
This observation provides crucial historical context showing how trust requirements have fundamentally changed as AI moved from experimental to production systems. It highlights that trust is no longer an afterthought but must be embedded at the architectural level from the beginning.
Impact
This comment established the evolutionary framework for understanding trust in AI, helping other panelists contextualize their own experiences and solutions. It shifted the discussion from current challenges to understanding how we arrived at this point and what it means for future development.
The scale is directly proportional to the trust we built in the system… UPI, when it was launched in 2016, last year in December, it clocked some 21 billion transactions… if you build the trust then the scale comes automatically
Speaker
Tanuj Mittal (Dassault Systèmes)
Reason
This comment brilliantly connects the abstract concept of trust to concrete, measurable outcomes using India’s UPI system as a powerful real-world example. It demonstrates how trust directly enables mass adoption and provides a tangible model for AI scaling.
Impact
This insight reframed the entire discussion by positioning trust not as a constraint or compliance requirement, but as the primary enabler of scale. It connected the technical discussions to the summit’s broader theme of scaling AI for societal benefit, influencing how other speakers approached the relationship between trust and adoption.
AI opened a new era in science… before you define new materials and then you study the properties of these materials. Now you say, I would like to have a material with such properties. And then thanks to AI, you will build the material… it’s not that global acceleration. It’s a reverse, in some sense, of a way to do science.
Speaker
Antoine Petit (CNRS France CEO)
Reason
This comment reveals a fundamental paradigm shift in scientific methodology enabled by AI – moving from discovery-based to design-based science. It’s profound because it shows AI isn’t just accelerating existing processes but completely inverting the traditional scientific approach.
Impact
This observation shifted the AI for Science panel from discussing AI as a tool to recognizing it as a transformative force that changes the very nature of scientific inquiry. It influenced subsequent discussions about the need for new institutional structures and collaboration models to support this reversed scientific methodology.
We want that to be done here, right? So the same ecosystem in which he got trained after leaving India, we want to provide that in India… we have the understanding of that entire pipeline it takes from lab to global products.
Speaker
Amit Sheth (Indian AI Research Organization)
Reason
This comment addresses a critical gap in India’s innovation ecosystem – the ability to retain and nurture talent domestically rather than losing it to foreign ecosystems. It articulates a vision for creating indigenous innovation capacity that can compete globally.
Impact
This statement highlighted the strategic importance of building domestic research and innovation infrastructure, influencing the discussion toward practical solutions for talent retention and indigenous capability building. It connected individual career trajectories to national innovation strategy.
Policy should be as smart as the technology it aims to guide… right now there is quite a lot of sort of misconceptions and misconnects in that sense.
Speaker
Irakli Beridze (UNICRI)
Reason
This quote from the UN Secretary General, shared by Beridze, captures a fundamental challenge in AI governance – the gap between technological advancement and policy understanding. It highlights how governance frameworks often lag behind or misunderstand the technologies they attempt to regulate.
Impact
This comment introduced the governance perspective into the scientific discussion, emphasizing the need for better science-policy interfaces. It influenced the conversation toward considering how scientific breakthroughs in AI need to be accompanied by equally sophisticated governance frameworks.
Innovation only makes sense when it serves the greatest number… Franco-Indian partnerships are key for innovation with real impact.
Speaker
Julie Huguet (LaFrenchTech Director)
Reason
This comment establishes a philosophical foundation that innovation should be inclusive and serve broad societal needs rather than narrow commercial interests. It connects technological advancement to social responsibility and international cooperation.
Impact
This statement set the tone for the entire summit by establishing that the goal isn’t just technological advancement but equitable impact. It influenced how subsequent speakers framed their contributions in terms of societal benefit and international collaboration.
Overall assessment
These key comments fundamentally shaped the discussion by establishing several critical frameworks: trust as a provable, architectural requirement rather than a promise; the recognition that AI is not just accelerating science but reversing traditional methodologies; the understanding that scale and trust are directly proportional; and the emphasis that innovation must serve broad societal needs. The comments created a progression from abstract concepts to concrete implementation strategies, while consistently connecting technical discussions to broader themes of societal impact, international cooperation, and equitable development. The most impactful insight was the reframing of trust from a constraint to an enabler of scale, which influenced how all subsequent speakers approached the relationship between technical excellence and mass adoption.
Follow-up questions
How do we create quantitative, measurable metrics for achieving multilingual AGI goals?
Speaker
Raj Reddy
Explanation
Raj Reddy emphasized that ‘if you can’t measure it, you can’t improve it’ and stressed the need for measurable progress in creating multilingual artificial general intelligence that can serve people in villages who don’t know where to begin with technology.
How do we get AI technology to people at the bottom of the pyramid who have no knowledge of computers or AI?
Speaker
Raj Reddy
Explanation
Raj Reddy highlighted that most discussions assume people are AI-enabled, but people in villages have no knowledge of computers or AI and won’t benefit from the technology without specific solutions to reach them.
How do we develop personal sovereign edge models that are private and secure without going to the cloud?
Speaker
Raj Reddy
Explanation
Raj Reddy pointed out that current AI systems require cloud access which compromises privacy, and there’s a need for systems that are personal, autonomous, and can work as cognitive assistants without grid connectivity.
If AI is going to teach me and knows everything, why should I go to school?
Speaker
Raj Reddy (quoting a child’s question)
Explanation
This represents a fundamental question about the role of education in an AI-driven world that Raj Reddy acknowledged would take longer to answer but is crucial for understanding how education needs to evolve.
Why don’t we use AI to develop humane weapons that disable rather than destroy?
Speaker
Raj Reddy
Explanation
Raj Reddy suggested that instead of autonomous weapons that destroy, AI could be used to create weapons that deflect missiles from hospitals/schools or disable soldiers rather than kill them, raising important ethical questions about AI in warfare.
How can we find 200-250 AI engineers and researchers needed to build systems like DeepSeek in India?
Speaker
Amit Sheth
Explanation
Amit Sheth highlighted the talent gap in India by noting that when he asked an audience of 100 people whether they could find the engineering talent DeepSeek had access to, only three raised their hands, indicating a critical need for high-end AI talent development.
How do we ensure AI-generated discoveries are as reliable as traditional scientific discoveries?
Speaker
Abhay Karandikar
Explanation
This addresses the reproducibility crisis in science and the need for standards or methodologies to validate AI-generated scientific discoveries, which is crucial for maintaining scientific integrity.
Will AI replace scientists or act as co-scientists?
Speaker
Antoine Petit
Explanation
Antoine Petit raised concerns about AI producing mathematical results without human help, questioning whether AI will replace scientists entirely or work alongside them, which has fundamental implications for the future of scientific research.
How do we prevent AI from producing false papers that are then peer-reviewed by AI, creating a cycle of misinformation?
Speaker
Antoine Petit
Explanation
Antoine Petit identified a risk where AI could generate numerous papers of questionable validity, and if these are also reviewed by AI systems, it could create a dangerous cycle of false scientific information.
Do we need a mega science facility or AI for science platform?
Speaker
Abhay Karandikar
Explanation
This question addresses whether there’s a need for large-scale infrastructure specifically designed to support AI for science research, similar to other mega science facilities.
How do we ensure public trust in AI use by law enforcement?
Speaker
Irakli Beridze
Explanation
Irakli Beridze mentioned launching a scientific project on ensuring public trust in AI use by law enforcement, indicating this is an active area requiring further research and policy development.
Will scientific foundation models be open source while commercial applications remain private?
Speaker
Audience member
Explanation
An audience member questioned whether there is a trend in which fundamental science AI models are released publicly while commercial applications are kept private, citing Google DeepMind’s AlphaFold as an example, which has implications for scientific collaboration and access.
What government guidelines exist for responsible global AI?
Speaker
Audience member
Explanation
This question seeks clarification on existing governmental frameworks for responsible AI development and deployment on a global scale, indicating a need for better understanding of current regulatory landscapes.
Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.