Inclusive AI: Why Linguistic Diversity Matters
20 Feb 2026 15:00h - 16:00h
Session at a glance
Summary
This discussion centered on the development and launch of an innovative open-source AI hardware device that is multilingual, privacy-preserving, and operates offline, created through a collaboration between Bhashini and Current AI. The session began with a demonstration of the prototype device, which can process multiple AI models locally and supports over 22 languages, enabling users to ask questions in their native language and receive responses without internet connectivity.
Sushant Kumar introduced the concept as part of making “AI work for everyone,” emphasizing the importance of personal, local, and multilingual AI solutions. Ayah Bdeir, CEO of Current AI, explained that her organization was born from the AI Action Summit in Paris as a public-private partnership focused on creating AI for public interest, motivated by concerns about dominant tech companies controlling AI development. She shared personal experiences of losing connection to her native Arabic language due to inadequate technology support.
Amitabh Nag from Bhashini discussed the origins of their multilingual AI initiative, which began in 2023 with the goal of preserving linguistic diversity and ensuring no language is left behind. The technical demonstration showed the device successfully processing speech recognition, neural machine translation, and text-to-speech across multiple languages while running entirely offline.
The conversation expanded to broader themes of cultural preservation, data sovereignty, and the balance between open-source development and community rights over cultural data. Participants discussed the importance of including diverse cultural contexts in AI training data and ensuring communities benefit from sharing their cultural information. The session concluded with the announcement of the India AI Innovation Challenge, inviting developers to build upon this open-source platform to create solutions for their own communities and languages.
Keypoints
Major Discussion Points:
– Demonstration of Open Source Multilingual AI Device: The session showcased a collaborative prototype between Bhashini and Current AI – a handheld, offline AI device that supports 22+ languages and can perform complex AI tasks like speech recognition, translation, and text-to-speech without internet connectivity.
– Cultural Preservation and Linguistic Diversity in AI: Extensive discussion about preventing AI from creating cultural monocultures, with emphasis on preserving local languages, traditions, and cultural contexts in AI systems rather than defaulting to Western/English-dominant models.
– AI Sovereignty and Data Governance: Deep exploration of what sovereignty means in the AI context – from national control over AI infrastructure stacks to community rights over cultural data, and the balance between open-source collaboration and maintaining control over sensitive cultural information.
– Democratization and Accessibility of AI: Focus on making AI accessible to underserved populations, particularly those in remote areas with limited connectivity or those who only speak local languages, with the goal of ensuring “no person is left behind.”
– Launch of India AI Innovation Challenge: Announcement of an open-source innovation challenge inviting developers worldwide to build upon the demonstrated prototype, with significant prize money and technical support from both Bhashini and Current AI.
Overall Purpose:
The discussion aimed to present and promote a vision of “personal, local, and multilingual AI” as an alternative to centralized, Western-dominated AI systems. The session served to demonstrate practical solutions for inclusive AI, discuss policy frameworks for cultural preservation in AI, and launch collaborative initiatives to expand access to AI technology globally.
Overall Tone:
The discussion maintained a consistently optimistic and collaborative tone throughout. It began with excitement around the technical demonstration, evolved into thoughtful policy discussions about sovereignty and cultural preservation, and concluded with enthusiastic announcements about future partnerships. The tone was professional yet passionate, with speakers clearly motivated by a shared vision of democratizing AI technology while preserving cultural diversity.
Speakers
Speakers from the provided list:
– Sushant Kumar – Session moderator/host
– Amitabh Nag – CEO of Bhashini
– Announcer – Event announcer/moderator
– Andrew Tergis – Lead engineer on the AI device project from Current AI team
– Martin Tisne – Leads the AI Collaborative, Chair of Current AI, works on building AI grounded in democratic values and principles
– Device – AI device providing automated responses during demonstration
– Shalindra Pal Singh – General manager at Bhashini, worked on integrating Bhashini models into the device
– Anne Bouverot – Special envoy to the president (France)
– Abhishek Singh – Master and orchestrator of the AI summit
– Ayah Bdeir – CEO of Current AI, engineer and entrepreneur with 20 years of experience building open source technology infrastructure
Additional speakers:
None identified beyond the provided speaker names list.
Full session report
This discussion showcased the development and launch of an open-source AI hardware device representing a shift towards personal, local, and multilingual artificial intelligence. The session, orchestrated by Kalpa Impact through a collaboration between Bhashini and Current AI, demonstrated how AI can be democratised to serve diverse global communities rather than reinforcing existing technological monopolies.
Technical Innovation and Demonstration
The session featured a live demonstration of a handheld AI device running entirely offline while supporting over 22 languages. Andrew Tergis, the lead engineer from Current AI, explained that this device, built on the NVIDIA Jetson processing platform, was conceived as an open platform enabling anyone to connect, write applications, and run AI inference locally. This represented Current AI’s first collaborative build.
The demonstration showcased the “Hear the World” application for vision-impaired users, where participants could press a button, ask questions in their native language, and receive responses in the same language. During the demo, the device successfully identified candy bars on the table (Twix, Milky Way, KitKat) and responded to questions in Hindi, demonstrating its practical multilingual capabilities.
Shalindra Pal Singh from Bhashini highlighted a crucial technical breakthrough: the team had successfully quantised multiple AI models—including automatic speech recognition (ASR), neural machine translation, large language models (LLM), and text-to-speech (TTS)—to fit on the device without accuracy loss. All processing occurs locally, ensuring privacy while enabling functionality in areas with poor connectivity or during emergencies.
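The chain Singh describes, with quantised ASR, translation, LLM, and TTS models run in sequence entirely on-device, can be sketched roughly as follows. This is a minimal structural sketch only: the function names and signatures are illustrative placeholders, not Bhashini’s or Current AI’s actual APIs, and a real build would load quantised model engines on the Jetson in place of these stubs.

```python
# Illustrative sketch of the offline "Hear the World" pipeline:
# ASR -> NMT (to English) -> vision LLM -> NMT (back) -> TTS.
# Every function here is a stand-in for a locally loaded, quantized model.

def asr(audio, lang):
    # Stand-in for automatic speech recognition: audio -> native-language text.
    return f"[{lang} text from audio]"

def translate(text, src, dst):
    # Stand-in for neural machine translation between two languages.
    return f"[{text} translated {src}->{dst}]"

def vision_llm(question_en, image):
    # Stand-in for the local vision-language model answering about an image.
    return f"[answer to '{question_en}' about {image}]"

def tts(text, lang):
    # Stand-in for text-to-speech: text -> spoken audio.
    return f"<{lang} audio: {text}>"

def hear_the_world(audio, image, lang):
    """Run the full offline chain on-device; no network calls anywhere."""
    native_text = asr(audio, lang)
    question_en = translate(native_text, src=lang, dst="en")
    answer_en = vision_llm(question_en, image)
    answer_native = translate(answer_en, src="en", dst=lang)
    return tts(answer_native, lang)

print(hear_the_world("mic.wav", "cam.jpg", "hi"))
```

The point of the structure is that every stage stays local, which is what makes the privacy and zero-connectivity claims possible; quantisation matters because all four models must fit in the device’s memory at once.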
Personal Motivations and Cultural Preservation
The speakers shared deeply personal motivations driving this innovation. Amitabh Nag from Bhashini described his experience as a Bengali speaker navigating India’s three-language formula, including the challenge of translating concepts like “chol khawe” (Bengali for eating) and the social stigma faced when thinking in one’s mother tongue. This illustrated the broader mission of ensuring “no language is left behind.”
Ayah Bdeir, CEO of Current AI and an engineer with 20 years of experience building open-source technology infrastructure, described how technology had contributed to Arabic language erosion in her family’s communications. Despite Arabic being their native tongue, her family predominantly uses English on digital platforms due to inadequate voice recognition and language support.
A powerful example came from Singh’s anecdote about tribal women in Jharkhand performing data annotation work. When a young woman correctly identified that a particular insect was beneficial rather than harmful to plants based on traditional ecological knowledge, it demonstrated how Western-centric training data can systematically misclassify reality for other cultures. This became a compelling argument for why multicultural AI isn’t just socially desirable but technically necessary for accuracy.
Organisational Vision and Rapid Development
Current AI emerged from the AI Action Summit in Paris as a public-private partnership with a mission to create AI for public interest at scale. Bdeir explained that the organisation was founded on the premise that dominant AI companies operate at such scales that meaningful alternatives require equally ambitious collaborative efforts.
Bhashini’s journey, which began in 2023, has seen rapid growth: the platform now supports 15 million daily inferences on a 200-GPU system. Nag described the challenges of building AI models with insufficient digital data, requiring extensive fieldwork with translators to create digital corpora for underrepresented languages, including digitising the Bheeli tribal language, which has no script.
Sovereignty and International Collaboration
The discussion addressed sophisticated questions of AI sovereignty, with Abhishek Singh providing a framework encompassing energy, data centres, infrastructure, chips, models, and applications. Singh acknowledged that no country maintains complete control over all layers, making international collaboration essential. He argued that practical sovereignty means having choices about which technologies to use rather than complete independence.
Anne Bouverot articulated the challenge of balancing individual rights with collective cultural representation needs, noting that cultural preservation must operate at community levels—not just national AI, but representation for specific regional communities. Martin Tisne, who leads the AI Collaborative and chairs Current AI, raised questions about indigenous data sovereignty, referencing approaches where communities view cultural data as inherently part of their patrimony.
Addressing Technological Lock-in Concerns
Bdeir raised critical concerns about emerging embodied AI devices from major technology companies entering personal spaces—such as Meta’s glasses with facial recognition capabilities—without users understanding their training or data collection practices. She warned about hardware as the initial point of technological lock-in, similar to how smartphones created ecosystem dependencies.
The open-source hardware approach demonstrated represents a direct challenge to this model, enabling infinite innovation possibilities from agricultural applications helping farmers identify crop diseases to educational tools that don’t transmit private data to corporate servers.
Innovation Challenge and Future Directions
The session concluded with the announcement of the India AI Innovation Challenge, inviting developers worldwide to build upon the demonstrated prototype. Bhashini committed $110,000 in funding and ongoing technical support, including quantisation mechanisms and model enrichment. The challenge aims to expand imagination about personal, multilingual AI solving community-specific problems.
The France-India partnership exemplified how countries with complementary strengths can collaborate to enhance rather than compromise sovereignty by increasing options and reducing dependencies on dominant technology providers. The partnership encompasses multiple levels—government, research institutions, universities, and businesses.
Implications
The successful demonstration challenges assumptions about cloud-based AI necessity, opening possibilities for deployment in previously impossible contexts due to connectivity or privacy constraints. The emphasis on cultural preservation represents a fundamental shift from viewing diversity as a technical challenge to recognising it as essential for accuracy and relevance.
Rather than pursuing impossible technological independence, the collaboration demonstrates how countries can enhance their agency through strategic partnerships. The session presented a vision of AI development that serves human flourishing over corporate dominance, showing that alternative approaches can achieve remarkable technical sophistication while maintaining commitment to equity, diversity, and community empowerment.
The informal, collaborative atmosphere of the event—including technical interruptions, photo sessions, and candy distribution—underscored the genuine partnership approach driving this initiative, suggesting a new model for AI development that prioritises community needs and cultural preservation.
Session transcript
And therefore, how do we develop and support a paradigm that can make AI work for everyone? And that’s what we are here for today. The session today is very aptly called: The case for personal, local and multilingual AI. Through a collaboration between Bhashini and Current AI, orchestrated by Kalpa Impact, we are proud to present to you today a seminal open source AI hardware device, one that is multilingual, handheld, privacy preserving and works in zero connectivity settings. So what we are going to do today is we are going to talk about the concept of AI. What we are going to show you after this will be a video that presents the imagination of what such a device could lead to.
in terms of making AI work for everyone. And once we have done that, there’s a special treat for all of you. The maker of the device and the collaborators at Bhashini are there in the room and they will demonstrate the product to you. So why don’t I begin with playing this video, which takes some creative liberties and captures our imagination of what this product would look like. Audio, please. Thank you. India’s real journey is no longer about pilots or promises. It’s about population reach, clear use cases, last mile delivery. This is real world impact. And a connected vision for AI, not one that’s governed by any one country or one company.
I think all countries have a huge amount to bring to the table and a big belief in the power of collaboration. I was ready, the cup is open, now we need you. Come innovate AI for your own language, for your own community. We want to work with as diverse a group as possible. We can’t wait to see what you do. Yes, we’re back on. And for the next segment, I would like to invite Ayah Bdeir, the CEO of Current AI, to take us through the product demonstration. Ayah is an engineer and an entrepreneur with 20 years of experience building open source technology infrastructure that works at global scale. Ayah, over to you.
I have a quick interruption. I have to ask everybody to come here to take a picture so that the picture can be ready by the end of the panel. You have 90 seconds free to speak amongst yourselves. Thank you. All right. All right. Thank you so much for coming, everyone. I’d like to introduce Andrew Tergis, who was the lead engineer on this project from the Current AI team, who’s going to take us through a demo. Oh, there you are. And also Shalindra Pal Singh, who is a general manager at Bhashini, who was Andrew’s collaborator and worked very closely to integrate Bhashini models into the device. And I just want to say a couple of things.
This project was undertaken in a six-week period, I think maybe closer to five weeks, actually. I just joined Current AI in January of this year. When I came in, the partnership with Bhashini had already been in discussion, and I was very inspired by Bhashini’s work on linguistic diversity and the 250 models. And we thought this was an opportunity for us to go all the way to the user and create something where really people can create AI that works for themselves, for their communities, and for their languages. So this prototype is the beginning of a journey and also a platform to imagine infinite things that are possible. And so you’ll see how it works.
But as it’s working, I also would like you to imagine what you could do with it and where you could take it. And from my perspective, I’ll just say for Current AI, this is an example of how we’d like to work with partners: we learn more about their interests and their focus areas and their priorities, and we zero in on a collaboration that we can develop together. We build it together, and then we release it as a public good. So in this case, it’s a piece of hardware and a development platform. In another case, it could be something else. But we’re really proud that this collaboration with Bhashini is our first collaborative build, and you get to see it kind of firsthand as you’re sitting here.
So, Andrew, Shalindra, please. Please join me on stage, and I’ll let you take us away for the demo.
All right. Perfect. Hello. I’m so pleased to be able to show you this prototype that we’ve created. Yes. Oh, thank you. In front of the table. Wonderful. So this is our prototype open AI inference device. Unlike some other products you might have seen at this conference, which might be designed for one very specific user or one very specific use case, this device is designed to be used by any number of users for any number of use cases. The hope is that anyone could feel empowered to connect up to this device, write their own application, pull any number of models onto the device and run inference locally in their hand. We have one flagship application that we’ve developed in concert with Bhashini.
It demonstrates the models that they’ve been developing over so much time. This sample application we call Hear the World: an application where a vision-impaired user can press a button, ask a question in their native language about their surroundings, and have the device read back the response, again in their native language, leveraging Bhashini’s 22-plus languages. In particular, we’re leveraging an ASR, an automatic speech recognition module, to convert the audio into text in their native language. We’ll be leveraging an NMT, a neural machine translation module, to convert that text into English. We’re running it through a large language model with the image data to answer the question, and then we’ll be converting it back into their native language using, again, the NMT model, and finally a TTS module to convert it back into audio.
So this device is able to run all of those modules in concert. So without further ado, let’s try and give it a test query. Shalindra, do you think you can help me out here? I guess you’ll take the photo, and then I’ll spin it around quickly so the audience can see what’s happening. We’ll ask in Hindi. Let me just triple check. Yep, you’re all good. All right.
What it has done is it has taken the image, and then the automatic speech recognition model kicks in, and then neural machine translation happens, and then the response comes from the LLM that we have embedded, and the translation happens, and the text-to-speech is spoken out. We have quantized the models in such a way that they fit on the device. Usually when we do quantization there is always a trade-off, a hit on the accuracy, but we have reached a point where there is no hit on the accuracy front.
That was a truly huge effort from your team, and we wouldn’t have been able to fit such a high-fidelity LLM on this if you didn’t do that great optimization work. So let’s see. Let’s ask another question. We have a couple of candy bars on this desk here, which we can show you. Let’s see. Let’s try it. I’m going to put this in English. What is on this table?
The table has candy wrappers of Twix, Milky Way, and KitKat.
All right. All right. It actually got the brands. And we have one more question of grave importance. But I’ll ask it in Hindi. That’s right. I got it. This is the best candy bar in the world. There we go. Would anyone like a candy bar? Anyone? Anyone? There you go. So just very briefly while we’re handing this out: this is currently based on the NVIDIA Jetson processing platform, but it’s designed to support other platforms as well, because the processing that we’re doing does not depend on that. That just happens to be the platform we’ve chosen at the moment. And, yeah, we’re working on the ability to deploy any model that you could dream of onto this device.
Thank you.
Thank you very much. How did everyone feel about that demonstration and the things that can be done? Thank you. Thank you. And kudos to the Bhashini team, which worked tirelessly, and, of course, Andrew and the Current AI team, which worked tirelessly to make sure the hardware, software, all of that was integrated. We had to get a device through customs as well. So that took some time, but eventually it’s here, and it’s working, which is amazing. And the best part is that the device is offline. All those queries, all the AI processing was happening on the device. And there are four models operational on that particular device, no mean feat. I salute the engineers who have worked on this, and there’s more to come.
And we know we have to get in a lot in a short period of time. So I will invite Ayah Bdeir, the CEO of Current AI, and Shri Amitabh Nag ji, the CEO of Bhashini, to join me for a fireside chat. And we’ll try and understand what it is about personal, local, multilingual AI that they are passionate about. So this is also about their motivations. Why don’t we start with you, Amitabh ji. We all know a lot about Bhashini, we have heard about it, and you know it’s a superstar at this point in time in terms of what you have achieved. Tell us about the origins, tell us about how this all started, and why this is personal to you.
Hey, thank you. See, we all are born with our mother tongue, right? We learn our mother tongues for a good 4-5 years before we land up in a school, and when we land up in a school, it’s a three-language formula. So I am a Bengali, and in Bengali everything is eaten, so chol khawe is the right phrase. So when you go to the school and you have to do Hindi and English, you know how it could be: for the first 6 months, people will be laughing at you when you are translating and speaking, because that’s the first way of speaking. You are not a native language speaker, so you will be translating and speaking.
That’s the linguistic nuance that you went after. So, you know, over a period of time, of course, we grew up. We were told that you have to learn English to succeed in life. So that’s another given which was there. And obviously, this opportunity came up. You know, there was already a concept which was there. And obviously, we started with, you know, a one-room office and a first employee.
When was this? Which year?
This was in 2023.
Okay. That’s recent. That’s recent.
And then obviously, we started growing as a team, looking at various use cases. Initially, the first question which used to come up was: what’s the accuracy? Our models were built up in difficult conditions, because we didn’t have digital data to build the AI models, so we collected the data through brute force. We went across to multiple places with translators who actually created the corpus, a digital corpus, and then we built up the models. We still had deficient data, but we went ahead to build the models and deploy them. And in deployment, we had challenges which obviously came up from all aspects.
And today, when we have actually deployed the use cases, learned from them, improved them, we are in a situation where we are running about 15 million inferences a day on a 200-GPU system, all with dashboards which give you every inference’s timeliness, how much time it takes, et cetera. So we are able to monitor in real time what is happening in our system, who our customers are, and how they are using it.
Fantastic. It’s wonderful to hear about your personal motivations. And I’ll move to you. How many languages do you speak?
My native tongue is Arabic, and then I speak French and English, and I’m learning Spanish.
So, very apt to move to this personal and multilingual theme. I have two questions for you. One, tell us a little bit about Current AI: why this interest in open hardware and the partnership with Bhashini, and how does this tie back to Current AI’s strategy? And second, why is this personal to you?
So, Current AI was actually born out of the AI Action Summit last year in Paris. It’s a public-private partnership with a mission to create AI for the public interest. And so it’s a partnership between philanthropy, government and the private sector to really say: we’re going to tackle public interest AI at scale. And the reason we’re going to do that is because the dominant companies that are governing our lives in AI operate at a scale, a financial scale, operate at an ambition level, that if we don’t match it, we don’t really have a chance to be a real alternative. And so Current AI was born out of that desire. The goal is to rally a global community, collaboratively and collectively, to build a public stack for open AI that’s completely vertically integrated.
And so the way we work is we work with partners, because the core premise is collaboration. We work with partners where we’ll identify an area of common interest, a priority, and a gap in technology, and then we’ll zero in on that gap, work on it together, and then develop a piece of tech and release it as a public good. So we encourage this collaboration, this creation of technology that is put back into the public good, as well as grant making under our fund pillar, in order to support people already doing this work. And this topic is important to me, has been important to me for many years. I’m from Lebanon, from Beirut, and like I said, my native tongue is Arabic.
For the past many years, you know, with our use of WhatsApp and mobile and social and everything, a lot of us in the Arab world lost the use of Arabic. My mom and my sisters and I speak in English to each other online all day. We speak on WhatsApp in English. The voice recognition is never good enough in Arabic; you spend more time correcting it than you do doing anything else. It’s improved a little bit now. But really, technology has had an effect on the way we communicate with each other. And so for many years, it’s been a real concern for me that, you know, if technology is not made by us, it’s not for us.
And so when I joined Current AI early this year, multilingual diversity was already a topic. And I was very happy about that. And sort of really… I really wanted to expand it into this idea of not just… language diversity, but cultural diversity and cultural preservation as a whole. And so this sort of idea came about and you can tell more about it.
Fantastic. What a story of genesis. And of course, Silicon Valley making AI devices is never going to be as effective for local use cases as putting power in the hands of people. So on inclusivity, Amitabh ji, one of the visions of Bhashini is to expand access. So when you think of this partnership with Current AI, what is the future you envision in terms of expanding access and creating inclusion with Bhashini as the linchpin?
So a few things. You know, when you look at the size of the device, we have almost reached a form factor which is quite significant. It’s small, right? And it can be carried to the last mile. And since it works offline, you are in a position to actually use it anywhere. So that’s the first part of inclusivity. We obviously have plans to look at smaller form factors as we go forward. The second thing is to look at the language coverage. We currently cover 22 languages. In our system, we already have 16 languages, and 14 more languages on text, a total of 36 languages. And we would like to increase that breadth.
And recently we have digitized one of the tribal languages, Bheeli, which doesn’t have a script. So that also gets added. So that is about the breadth of languages, which will be continuously added to. So first we are talking about form factor; second, we are talking about offline; third, we are talking about creating a breadth of languages so that no language is left behind, and hence no person is left behind, including the tribal languages. The fourth factor is about how we enrich the models, which is a continuous activity which Bhashini takes on. There are multiple areas where the models still have to be enriched.
I mean, India has got, we were talking to the Survey of India, and they have about 16 lakh place names which are still to be digitized and put into the system. So those are glossaries which we are building. There are contextualization efforts which are happening. So over the period of time, language enrichment as far as depth is concerned is another thing which we are looking at. So we’re looking at breadth, depth, offline, and form factor as the four things which will move forward in this.
Fantastic. I can certainly see the open hardware playing a big role in that as well. I have a question for you on how you look at the future. What gives you the most hope and the most concern about the future of language? You started talking about how you feel like Arabic and its nuances are getting lost. So what gives you most hope or most concern about the future of language in an AI-driven world? Could you talk about that?
So I’ll start with the concern. I’m concerned about this new frontier of embodied AI. Over the past year or so, every big tech company has released their version of an embodied AI device that wants to enter your home, that wants to be close to your body, wants to enter your personal space. Whether it’s Meta’s glasses, or bots and robots, or Amazon Alexa. And we’re not in full control of these devices, and we don’t know how they’re developed, and we don’t know how they’re trained. You know, last week or the week before, Meta announced that the glasses are going to start doing facial recognition on every person you encounter in the street.
So now, unknowingly, you’re walking down the street, and if somebody is wearing Meta glasses, you are being recorded and facially recognized. So we have these devices. We don’t know how they work. They’re continuously recording our data, sending it out to the cloud. We also don’t know how they’re trained, and oftentimes they’re trained on Western languages. And so hardware is where the lock-in first starts. It’s how the iPhone locked in a lot of technology innovation, because what happens is these companies will then develop, give us APIs into their devices. Startups will start forming and building on top of these devices, and then the startups start building a dependency on the device, and you start to build a whole stack on
a core piece of hardware that you do not control. So it’s really kind of like a core building block that we have to crack before we let them own the entire stack or the supply chain. I spent 15 years, before Current AI, in open source hardware. I’ve seen how powerful it is when you develop on an open platform and people do what they want with it. It’s the same power that you get from something like Linux. And so that’s sort of a big area of concern. The area of hope for me is, you know, there are many trajectories for us to improve from here. On one side, you can improve the device itself.
You lower its cost, you improve its battery life, you shrink its size, you make it more beautiful. So that’s one axis. Then there’s another axis you can develop along: you can have multiple of these devices together, connect them in a mesh network, and now you have distributed inference that you can run something larger on. You can have a larger, stationary version of this device; it can be like a micro data center. You can put a solar panel on it, and suddenly it doesn’t need a battery. So you can infinitely innovate on the possibilities of this core building block. And then the third track is what you do with it.
You make a device for a farmer to identify how to deal with their crops. You make a device for a parent who wants to give their kid a toy but doesn’t want the toy communicating their private data back to the cloud. You create some sort of tourism device that you can put around your neck that helps you move around. Various sorts of things; the opportunities are infinite.
Fantastic. And I wish we had more time to just continue going; we’re just scratching the surface. But we’re at time. I thank you, Amitabhji, for the great work that you and your team are doing, and I wish you all the best and all the luck in making that vision a reality. Thank you very much. And we move into our next segment, which is another fireside chat. For that I would now hand the floor to a long-time friend and colleague, Martin Tisné. Martin leads the AI Collaborative, an organization working on building AI grounded in democratic values and principles, and he is also the chair of Current AI. Martin, over to you.
Thanks very much. My first task is to welcome Abhishek Singh, who everyone knows, the master and orchestrator of this entire summit. Congratulations, Abhishek, and I’m amazed you’re still standing. And welcome to Anne, the orchestrator of the Paris summit and special envoy to the President. Thank you very much. So, as Sushant was saying, and as Ayah said (I’m extraordinarily excited by Ayah’s leadership of Current AI), this work really turns the question of linguistic diversity into the question of cultural preservation. It seems to me that ensuring AI isn’t squashing all of these incredible cultures that make up the beauty of the world into a monoculture, or into a small number of monocultures, is one of the most important questions we have today. So my first question to both of you, maybe starting with you, Abhishek, and then to Anne, is the same question: what is your vision?
What is the world that you would like us to live in when it comes to this intersection of AI and culture? If we get it right, what does it look like, whether five years or ten years from now?
languages. He knows only his local language. He does not even know how to key in or how to navigate a captcha, or he gets lost with the hashtags and the Amazon. So for such people, if they are able to talk to the device, put in their query without worrying about internet or bandwidth or connectivity, and get a reply back, that will be empowering. And that, I think, is the ultimate objective of this summit also: democratizing the use of AI and ultimately making AI work for all. Thanks.
Thank you very much, Abhishek. Anne, what is your vision?
So, of course, I share a lot of what Abhishek said. We use AI through our phones, and one way to say this is that when I get online on my phone: I love San Francisco, I love Shanghai, but I’d like to have a wider choice. I don’t necessarily want to be transported to Silicon Valley, or to be transported to Shanghai, when I get into AI. And that’s a little bit of a joke, but if all the cultural representation, all the legal background, all the customs are taken as just the de facto way you interact with people, if that’s the only choice, well, that’s just such a reduction of cultural diversity.
And I think it’s just not okay. It’s not just about being able to have access to a French AI or an Indian AI. It’s even more than that. If I’m interested in music and if I come from a particular area in France, well, I’d like to be able to have that community and its culture represented there. So I think that’s part of my vision.
Thank you. And if I can stay with you just a second, Anne: from a French perspective, from France’s point of view, how do you see culture and AI playing together? What does it look like? When I was a kid growing up in France, from a cultural perspective, there was a law that mandated a certain percentage of music on the radio to be sung in French, and a law that mandated a certain amount of movie productions to be in French. In retrospect, I actually think it was a good idea, but you’ll tell us what you think. And that ended up, it seems to me, creating a certain amount of cultural patrimoine, as we say.
So from a policy perspective in France, when it comes to artificial intelligence and culture, do you think that at some point there needs to be a set norm, like we did in movies and radio? What do you think?
That’s a good question. I don’t know whether we need a set norm, but yes, there are mechanisms to encourage creation in France and in Europe, and that’s quite important. With every movie ticket that you buy, for a film which can be from any country, there’s a certain tax, and a certain amount of money goes to a fund that then helps French creators prepare whatever they want as their next film. And that mechanism doesn’t make it hegemonic. I mean, of course, we love culture from all over the world, but it helps ensure that there’s an element of French cultural creation.
And that’s what we definitely want to continue to have. We want people to have the ability to see that in France, but also all over the world, just like we love to see Indian movies or listen to Indian music. So that diversity needs to be maintained and ensured, including through some mechanisms to fund it. Yes.
Thank you very much, Anne. Abhishek, a similar question to you.
I think if AI has to cover all aspects, then it has to be rooted in data sets that are diverse, and in any cultural context data sets will include not only languages but also the culture, the heritage, the music, the movies, the songs and lots of folklore. In fact, if you look across India, if you go to the rural areas, there are lots of traditions which are not even documented well. Those things are not even available in a digital format; they are known to people. In fact, recently I was watching a documentary on Netflix called Humans in the Loop.
It’s set in Jharkhand, a state of India with a large tribal population, and there are these tribal women doing data annotation for an American firm. It shows that they are looking at leaves and pests and have to mark whether something is a pest or not. So this young girl sees an image of a pest and marks it as not a pest. Her manager comes down heavily on her and says, this is obviously a pest, how are you saying it’s not a pest? She says, this tree grows in my local forest, around where I live, and I know that this worm eats only leaves which are dying.
In a way, it helps the plants; it’s not a pest. So having this traditional knowledge built into the corpus of data sets on which we train AI models will be very, very vital if we want to ensure that AI doesn’t falter and hallucinate, if AI is to become near to what a human is. So it becomes very important to capture this cultural context from all across the world, from all communities, all cultures, all traditions. Only then will we be able to build something that is truly human-like. A purely technological pursuit of AGI will not solve the problems that we are living with.
That’s a great example, thank you. Maybe staying with you, Abhishek, for a second, and I’ll come back to you, Anne, on the question of reciprocity. You talked about the data sets: communities and cultures in all their diversity are sharing their data, and we want them to be sharing their data with different AI models. What does it look like from the community perspective? Do you think they should be involved in it? Should they have rights over the data? How do you think about it?
It is a very interesting question, because when it is about sharing of data across companies, across industry, we have to have frameworks which allow data to be used for public purposes, that is, data shared in a way which does not violate the privacy or the personal identity of the person who owns the data, the person to whom the data belongs, the data principal per se. So when data is being shared, the community will need to be involved. If you don’t do that, in the interest of business and in the interest of commercial requirements, the possibility of misusing the data goes up. So it’s very important to have standards, not only technical standards but community standards which are rooted in the culture and the belief systems of the place the data is coming from, in order to ensure that the models and the applications reflect them.
Thank you. If I can go a little bit further on that question: there’s the question about the rights of the individual and the rights of communities over the data. Do you think, in the way that you’re working, there is also a reciprocity, in the sense that if data about them is used for a particular purpose, the community should benefit from it, whether it’s a translation or another device? How do you think about that?
So you need to think about how different use cases may have different applications. For example, take data about agriculture: if I have aggregate data about a particular area, and it is used to generally advise farmers about what they should sow for maximum benefit and at what time they should sow it, then that data should be shareable, and that’s to the benefit of everyone. But take, for example, health data: there, the individual might not want to share that data with the larger ecosystem. So I think it will be context specific, and we cannot have general rules about sharing of data and reciprocity principles across different sectors.
Thank you very much. Anne I have a similar question for you on this question of reciprocity. What’s your take?
I think that’s a very profound question. Part of the reason why you want to share cultural data is so that cultures are preserved and you don’t end up with one or two or three cultures in the world, but with something that is more diverse. So it is in the interest of a cultural group, of a civilization, that in the world of AI this culture is represented. And from that perspective, you have a very natural reciprocity loop. But at the same time, creators are saying, I don’t want my data to be used if I don’t have a mode of being compensated or recognized, or a way to oppose it. And so you have this tension with artists, for example, who say, well, I want my rights to be maintained and I want some type of compensation.
If this is being used to feed AI models, and then for people to earn money out of it. But on a collective basis, you do want that culture to be represented. So I’m not sure I have a solution, but I very clearly see the tension. One of the ways we can navigate it is to have a right of opposition for specific artists, so that they can say, no, my data, my creations are not going to be used. And at the same time, you can certainly have historical information, and things that are not so subject to remuneration for living artists, be part of the general cultural data that you use to train AI. But beyond these two obvious things, I’m not really sure.
So we need to continue to work on this.
Thank you. And again, just to go a bit deeper on the question: it really is a fascinating question, because from the perspective of the communities whose data it is, it’s data about them. As you say, you want people to know about your culture, you want the culture to be preserved, and at the same time you want a certain degree of agency over how the data is used. In an earlier panel we were talking about indigenous data sovereignty, and about the Maori community in New Zealand and the degree to which, as I understand it, in Maori culture any data, any information that pertains to Maori culture is effectively part of Maori culture. So there’s a real question of agency.
My question is this: in the run-up to the Paris summit, when we were working together, we talked quite a bit about the relation between open source AI on the one hand and the governance of the data on the other, with the data then controlled in different ways. So how do you think about this balance? It strikes me that getting the balance right between the open source components on the one hand, and a more controlled approach around data governance on the other, is the special sauce. What do you think? And I’ll come to the same question for you in a second.
Yeah, I completely agree. And maybe, as Abhishek was saying, the example of health data is a good one here, because for cultural data you want the general benefit and you want to preserve artists’ rights; I think those are the two dynamics. For health data, as an individual, as a patient, if you’re asked, do you want to protect your personal data, the answer is yes. If you’re asked, are you willing to share your data with other people who have, or are at risk of, a similar illness so that it can help them, the answer is also yes. So how do you balance the two? You need to find ways to share data in a platform, or in a way, that you have trust in.
And so it needs to be privacy-preserving. It needs to be held by an actor you trust: even if you don’t go and look at all the terms and conditions, you need to understand that it’s an institution or a third party that you can trust. And then you want to be able to rely on that third party to make the right decisions: yes, sharing the data to enable research and find new cures, but maybe not sharing it with insurance companies so that you can be charged a different rate depending on your personal situation. And then, when you get into sovereignty, maybe you’re happy for this to be shared with innovative startups in your country or your region that will develop cures and new foods, but maybe not with some other actors.
So you get to a number of different levels and questions. And for that, having trusted third parties that can make the right decisions on your behalf is, I think, very important.
Thank you. Let me take the same question to you. We’ve talked a lot about open source AI over the course of the week. How do you see the balance between open source on the one hand, and the question of the cultural data that we’ve been talking about on the other?
Again, ultimately I’ll go back to the end objectives. What is the purpose for which we are sharing the data? Is it serving public interest, or is it serving private interest? Is there a benefit for the user to whom the data belongs? Take health data, for example. We are past COVID, but we keep hearing about outbreaks of flu and other ailments. If aggregate data about the incidence of such diseases, and its linkages with other factors, environmental factors, weather factors, rain factors, is shared, then people can think of devising AI-enabled solutions that integrate various data sets and try to see why, in a particular geography, in a particular locality, something is happening.
That is the public interest. So we will have to define on a case-by-case basis, whenever data is being shared, whether open source or in a proprietary solution, what the end objective is, what problem I am going to solve, and whether it is serving the larger interest of the community. Is it serving the larger public interest, or is it being done to benefit a few corporations? Take the example she gave about insurance companies: if data about health consumption leads to an increase in my insurance premiums, that is not fair, because they are linking that data back to the individuals to whom it belongs. So we will need to think of privacy preservation techniques and anonymization techniques, so that in no way is the data principal, the person to whom the data belongs, harmed.
So we will have to do this in a very nuanced manner; there is no one-size-fits-all solution. If we do that, we will end the risk of the risks dominating the narrative, and we will move towards the positives that AI can bring.
Thank you very much. Then there’s a question I can’t resist asking you, which is: what’s your definition of sovereignty? You mentioned the term, and we’ve talked a lot about this this week, and in the context of this conversation it’s really interesting. There’s the question of sovereignty from a nation’s perspective; there’s the question of sovereignty from a community’s perspective, as in the Maori example I mentioned, indigenous data sovereignty; and then, since you both have been talking about health, there’s the question, at an individual level, of the sovereignty I have over data about me. So with your experience, coming towards the end of the summit, and with the experience that India has, when you think about sovereignty and AI, what do you think of?
I feel that sovereignty is traditionally a political science concept, wherein nations which are sovereign need to have complete control over what they do and how they do it, with entire control over their decisions. When you apply that to technology, and to AI specifically, the same concepts apply with regard to what I want to do, with whom I want to do it, and how I want to do it; nobody else should make decisions on my behalf. So ideally, a complete sovereign AI stack will mean that we have complete control over all the five layers of AI: the energy layer, the data center infrastructure, chips, models, and applications and use cases.
We should have complete control over it. The technology is evolving right now, and in fact I don’t think any country has complete control over the entire AI stack; every country depends on others for something. In the context of India, we are there on energy sufficiency, we have the data centers, we have our models and our applications, but we don’t have the complete stack. We have the capability to distribute compute. In three to five years we will design our own chips, and in five to ten years we will be able to have a fab as well. In the short term, if I can decide which chip I want to use, how I want to use it, and how I procure it, rather than being subject to conditionalities that others force on us, that will be sovereignty.
So we should apply the same concept of sovereignty that we apply in political science, wherein complete control over decisions lies with the sovereign government. That should be the way we look at sovereignty in AI as well.
Thank you very much. Just as we’re ending, and we’re now at time, feel free to weave in the questions of sovereignty. In the wake of President Macron’s state visit and the bilateral relationship between France and India, what do you both see, starting with Anne and then finishing with you, Abhishek, as opportunities for France and India to jointly work on these global norms and global approaches, for a more contextual, culturally inclusive approach to artificial intelligence?
Well, I’ll try to be short, but this is the year of joint innovation between India and France. There are many areas where we’re collaborating and will continue to collaborate. Clearly, Current AI and this work on multilingual AI is one. Working on AI that is resilient and sustainable by design, as we were just discussing earlier with Abhishek, is clearly another priority, as is joint research. And then, I can’t resist weaving in the work on sovereignty. No one actually has full sovereignty, not even the U.S.: they don’t have all the chips. Nobody can do everything alone. I believe sovereignty means having a choice and building alternative solutions. And I really think we can, and we will, jointly build alternative solutions between France and India.
I echo her, and in fact the partnership between India and France has been there for quite some time. Last year we co-chaired the AI Action Summit with France, and the partnership has continued this year. This year, of course, as you know, we have launched a year of innovation, and many more activities were announced by President Macron and our Prime Minister last week; we are looking forward to joining you at World Tech in the next few months. There are many more activities: partnership at the university level, at the research level, at the business level, at the government level. So I strongly believe that, working jointly with a trusted partner like France, with our complementary strengths, India and France can present an approach to building solutions that can become an example for the whole world.
Thank you very much. It was an honour to launch Current AI in Paris, and a pleasure to launch this partnership in India. Thank you. Thank you.
Hello. Thanks, Martin. Abhishek Singh sir, I request you to stay on stage. And Ayah, we’d love to have you on to launch the Global Innovation Challenge, in the spirit of what Anne said. And Amitabh Nag sir as well. Please.
Am I going first? Okay, great. So, great session, great thoughts, great demo. All of us have seen the demo of the reference device, which has been built in partnership between Bhashini and Current AI. In fact, I must mention that it was just a few weeks back that we had this discussion, because I had been discussing with Martin, after the discussions and announcement on public interest AI, what Current AI would do with the 400 million euros that they have raised. And I was saying, let’s do something that can really make an impact, and if we can do something at the impact summit, it will be worthwhile.
But kudos to the teams: they have built this collaborative design, created by engineers from Bhashini with Current AI’s support, in such a way that it’s a platform, a prototype on which we can innovate. It’s completely open source, it’s hackable, it’s privacy preserving, it’s multilingual. And with on-device AI, this prototype is capable of functioning in remote locations, not only in India but anywhere else in the world where connectivity is a challenge, or where, for any reason, such as an earthquake or another natural calamity, we can’t have connectivity. That can be really transformational for people to access services.
And in partnership with Current AI and Bhashini, it is in fact my honor and privilege to announce the India AI Innovation Challenge, which will give researchers, engineers, developers and entrepreneurs an opportunity to build on this prototype. The prototype will be available in an open source manner for everyone to hack: you can make it smaller, you can make it sleeker, you can solve individual use cases for different sectors. It’s based on an open source software and hardware design, and the kinds of use cases one can think of are limitless. So there will not be one but multiple solutions that can be built on it. We are opening it today, and the date here says that submissions will open on 25th Feb.
On 25th Feb we will launch the challenge on our website, where applications can be submitted, and there is some time to build the actual device. Those who win will get a very handsome reward, funded both by Bhashini and Current AI, and together we will try to ensure that we are able to build a product that the whole world can use.
So we will continue to support this effort, including through our quantization mechanism, and technical support will be available with respect to model enrichment and so on. This will be a joint effort: people are expected to put in the effort and come back to us on the challenges, and we will work on them together.
Let me just add, for Amitabh, because maybe he was trying to say it: Bhashini is offering, I think, a $110,000 prize to the winners. Maybe if people make a demand the number will increase; on your way out, please make a request, everyone, for the number to go up. There’s also a big part of this for participants, to make sure that they have support while they’re developing their hardware and software, and to showcase the work online to inspire many other people. Really the point of it is to expand imagination and start this conversation about making your own AI, about AI being personal and multilingual and solving communities’ and individuals’ own problems. Today it’s a piece of hardware; tomorrow it could be something in the software; the day after, it can be in data. So really this is the beginning of the journey. Thank you so much for coming, everyone. Thank you for being such good partners. Thank you, Amitabh and the Bhashini team; thank you to the Current AI team; thank you, Martin, for bringing us together. Have a great rest of the week, and hopefully some rest for you. Bye.
Sushant Kumar
Speech speed
109 words per minute
Speech length
999 words
Speech time
546 seconds
Vision for Personal, Local, Multilingual AI
Explanation
Sushant frames the session around the need for AI that is personal, local and multilingual, emphasizing that AI should work for everyone and be adaptable to individual languages and communities.
Evidence
“The session today is very aptly called: The case for personal, local and multilingual AI” [1]. “And we’ll try and understand what about the personal, local, multilingual AI is what they are passionate about” [3]. “And therefore, how do we develop and support a paradigm that can make AI work for everyone?” [4]. “Come innovate AI for your own language, for your own community” [5].
Major discussion point
Vision for Personal, Local, Multilingual AI
Topics
Closing all digital divides | Artificial intelligence
Collaboration as Public‑Good Open‑Source Initiative
Explanation
He highlights the partnership between Bhashini, Current AI and Kalpa Impact that produced an open‑source, multilingual hardware device, positioning it as a public‑interest effort rather than a proprietary product.
Evidence
“Through a collaboration between Bhashani and Current AI, orchestrated by Kalpa Impact, we are proud to present to you today a seminal open source AI hardware device, one that is multilingual, handheld, privacy preserving and works in zero connectivity settings” [33]. “and connected vision for AI, not one that’s governed by any one country or one company” [15].
Major discussion point
Collaboration as Public‑Good Open‑Source Initiative
Topics
The enabling environment for digital development | Artificial intelligence
Technical Demonstration: Device Capabilities & Offline Operation
Explanation
Sushant points out that the prototype runs multiple models locally, works offline, and therefore can be used in last‑mile, connectivity‑challenged settings.
Evidence
“And the best part is that the device is offline” [46]. “Four models operational on that particular device, no mean feat” [55]. “This is real world impact” [65]. “It’s about populations’ reach, clear use cases, last mile delivery” [75].
Major discussion point
Technical Demonstration: Device Capabilities & Open Hardware
Topics
Artificial intelligence | Closing all digital divides
Inclusivity and Language Diversity Commitment
Explanation
He stresses the intent to involve diverse groups and languages, ensuring no community is left behind in AI adoption.
Evidence
“We want to work with as diverse a group as possible” [81]. “And of course, Silicon Valley making devices on AI for local use cases is going to be as effective as giving power in the hands of people” [14].
Major discussion point
Inclusivity, Language Coverage, and Cultural Preservation
Topics
Closing all digital divides | Social and economic development
Ayah Bdeir
Speech speed
163 words per minute
Speech length
1650 words
Speech time
606 seconds
Personal, Local, Multilingual AI Vision
Explanation
Ayah describes the motivation to let users create AI that serves their own communities and languages, linking it to her early involvement with Current AI.
Evidence
“And we thought this was an opportunity for us to go all the way, say, to the user and create something where really people can create AI that works for themselves, for their communities, and for their languages” [8]. “And so when I joined Current AI early this year, multilingual diversity was already a topic” [12].
Major discussion point
Vision for Personal, Local, Multilingual AI
Topics
Closing all digital divides | Artificial intelligence
Collaboration as Public‑Interest Partnership
Explanation
She frames Current AI as a public‑private partnership aimed at delivering AI for the public interest, co‑creating technology with partners.
Evidence
“Current AI was actually born out of the AI Action Summit last year in Paris It’s a public‑private partnership with a mission to create AI for the public interest” [27]. “And from my perspective, I’ll just say for Current AI, this is an example of how we’d like to work with partners where we learn more about their interests and their focus areas and our priorities, and we zero in on a collaboration that we can develop together” [28]. “Work with partners where we’ll identify an area of common interest and a priority and a gap in technology, and then we’ll zero in on that gap, work on it together, and then develop a piece of tech and release it as a public good” [30].
Major discussion point
Collaboration as Public‑Good Open‑Source Initiative
Topics
The enabling environment for digital development | Artificial intelligence
Concern about Embodied AI and Privacy
Explanation
Ayah voices worry that embodied AI devices could lock users into proprietary stacks and raise privacy issues, while also expressing hope for open‑source trajectories.
Evidence
“I’m concerned about this new frontier of embodied AI” [21]. “So over the past, you know, year or so, every big tech company has released their version of an embodied AI device that wants to enter your home, wants to enter, that wants to be close to your body, wants to, you know, enter your personal space” [84]. “And so for many years, it’s been a real concern for me that, you know, technology, if it’s not made by us, it’s not for us” [85]. “The area of hope for me is, you know, there are many trajectories for us to kind of improve from here” [94].
Major discussion point
Concerns and Hopes for the Future of AI
Topics
Human rights and the ethical dimensions of the information society | Building confidence and security in the use of ICTs
Cultural Preservation Beyond Language
Explanation
She expands the AI ambition to protect cultural nuance and heritage, not just linguistic diversity.
Evidence
“I really wanted to expand it into this idea of not just… language diversity, but cultural diversity and cultural preservation as a whole” [79].
Major discussion point
Inclusivity, Language Coverage, and Cultural Preservation
Topics
Social and economic development | Human rights and the ethical dimensions of the information society
Future Hope: Mesh Networking and Distributed Inference
Explanation
Ayah envisions multiple devices linked together in a mesh network to provide distributed, offline inference capabilities.
Evidence
“You can have multiple of these devices together, connect them in a mesh network, now you have a distributed inference that you can use” [53].
Major discussion point
Technical Demonstration: Device Capabilities & Open Hardware
Topics
Artificial intelligence | Closing all digital divides
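The mesh‑networking idea above is stated as a future hope, and the session does not describe an implementation. As a minimal illustration of what "distributed inference" across linked devices could mean, here is a toy pipeline split in Python: each hypothetical node holds a contiguous slice of a small MLP's layers and only activations travel between nodes. All names are illustrative assumptions, not part of the project described in the session.

```python
import numpy as np

class Device:
    """One node in a hypothetical mesh, holding a slice of model layers."""
    def __init__(self, layers):
        self.layers = layers  # list of (weight, bias) pairs

    def forward(self, x: np.ndarray) -> np.ndarray:
        # Run this node's slice of the network (toy ReLU MLP layers).
        for w, b in self.layers:
            x = np.maximum(x @ w + b, 0.0)
        return x

rng = np.random.default_rng(0)
dims = [8, 16, 16, 16, 4]
layers = [(rng.normal(size=(i, o)) * 0.1, np.zeros(o))
          for i, o in zip(dims[:-1], dims[1:])]

# Split the four layers across two devices; the only traffic between
# them is the intermediate activation tensor.
mesh = [Device(layers[:2]), Device(layers[2:])]

x0 = rng.normal(size=(1, 8))
x = x0
for node in mesh:
    x = node.forward(x)  # hand the activation to the next node

# The pipelined result matches running all layers on a single device.
single = Device(layers).forward(x0)
print(np.allclose(x, single))  # True
```

In a real mesh, the hand‑off between nodes would go over a local network rather than a Python loop, but the division of labor is the same: no single device needs to hold the whole model.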
Andrew Tergis
Speech speed
161 words per minute
Speech length
546 words
Speech time
202 seconds
Prototype Open‑AI Inference Device
Explanation
Andrew introduces the handheld prototype, emphasizing its openness, ability to run any model locally, and offline operation.
Evidence
“So this is our prototype open AI inference device” [44]. “The hope is that anyone could feel empowered to connect up to this device, write their own application, pull any number of models onto the device and run inference locally in their hand” [45]. “And the best part is that the device is offline” [46]. “And, yeah, we’re working on the ability to deploy any model that you could dream of onto this device” [49]. “This device is designed to be used by any number of users for any number of use cases” [57]. “This is thanks to a truly huge effort from your team, and we wouldn’t have been able to fit such a high‑fidelity LLM on this if you didn’t do that great optimization work” [58]. “So this device is able to run all of those modules in concert” [64].
Major discussion point
Technical Demonstration: Device Capabilities & Open Hardware
Topics
Artificial intelligence | Closing all digital divides
Accessibility for Vision‑Impaired Users
Explanation
He demonstrates a use‑case where a vision‑impaired person can query the device in their native language and receive spoken feedback, showcasing multilingual offline capability.
Evidence
“an application where a vision‑impaired user can press a button, ask a question in their native language about their surrounding, and have the device read back their response again in their native language, leveraging Bhashini’s 22‑plus languages” [17].
Major discussion point
Technical Demonstration: Device Capabilities & Open Hardware
Topics
Social and economic development | Closing all digital divides
Shalindra Pal Singh
Speech speed
136 words per minute
Speech length
108 words
Speech time
47 seconds
Quantization Without Accuracy Loss
Explanation
Shalindra explains that the model quantization technique used enables a high‑fidelity LLM to fit on‑device without sacrificing accuracy.
Evidence
“Usually when we do the quantization there is always a trade‑off that there is a hit on the accuracy but we have reached to a point where there is no hit on the accuracy fronts” [59]. “We have quantized the model in such a way that it is fit in” [60].
Major discussion point
Technical Demonstration: Device Capabilities & Open Hardware
Topics
Artificial intelligence | Capacity development
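The session does not disclose which quantization scheme Bhashini used. As a minimal sketch of the trade‑off Shalindra describes, here is symmetric per‑tensor int8 weight quantization in Python: weights are stored at a quarter of their float32 size, and the reconstruction error is bounded by half a quantization step. The function names are illustrative, not from the project.

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Symmetric per-tensor int8 quantization: map floats into [-127, 127]."""
    scale = np.abs(weights).max() / 127.0
    q = np.round(weights / scale).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an approximation of the original float weights."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(size=(256, 256)).astype(np.float32)

q, s = quantize_int8(w)
w_hat = dequantize(q, s)

# int8 storage is 4x smaller than float32; per-element error is at most
# half a quantization step (0.5 * scale).
print(f"max abs error: {np.abs(w - w_hat).max():.6f}  (step = {s:.6f})")
```

Production schemes (per‑channel scales, calibration on activation statistics, quantization‑aware fine‑tuning) are what typically close the accuracy gap the speaker refers to; this sketch only shows the basic mechanism.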
Amitabh Nag
Speech speed
164 words per minute
Speech length
814 words
Speech time
297 seconds
Multilingual Coverage and Tribal Language Inclusion
Explanation
Amitabh outlines the current language portfolio (22‑plus languages, aiming for 36) and stresses the commitment to add tribal languages so that no language is left behind.
Evidence
“We currently cover 22 languages” [73]. “In our system, we already have 16 languages, 14 more languages on text, a total of 36 languages” [74]. “So that is about breadth of languages which is there, which will be continuously added” [76]. “Third, we are talking about creating a breadth of languages so that no language is left behind” [77]. “Hence, no person is left behind, including the tribal languages” [80].
Major discussion point
Inclusivity, Language Coverage, and Cultural Preservation
Topics
Closing all digital divides | Social and economic development
Offline Capability and Model Enrichment Support
Explanation
He confirms that the device works offline everywhere and that the team will continue to support quantization and model enrichment for developers.
Evidence
“And since it works offline, you are in a position to actually use it anywhere or more” [52]. “So when we are talking about form factor, second, we are talking about offline” [56]. “so we will continue to you know support this effort to our quantization mechanism and also the technical support will be available with respect to the model enrichment etc so this will be a joint effort” [155]. “So how do we, you know, enrich the models which are there, which is a continuous activity which Bhashini takes over” [158].
Major discussion point
Technical Demonstration: Device Capabilities & Open Hardware
Topics
Artificial intelligence | Capacity development
Martin Tisne
Speech speed
221 words per minute
Speech length
1206 words
Speech time
326 seconds
Data Governance, Reciprocity and Community Rights
Explanation
Martin probes the need for community involvement, standards rooted in cultural contexts, and reciprocity when data from communities is used to train AI.
Evidence
“what is the world that you would like us to live in when it comes to this intersection of AI and culture if we get it right what does it look like?” [96]. “How do you see the balance between, on the one hand, open source, on the other hand, the question of the cultural data that we’ve been talking about?” [103]. “Do you think, in the way that you’re working, is there also a reciprocity in terms of if data about them is used for a particular purpose, that then the community should benefit from it?” [104]. “it really is, it’s a fascinating question because from the perspective of the communities I would imagine whose data it is it’s data about them, as you say you want the culture to be preserved and you want a certain degree of agency over how the data is used” [105].
Major discussion point
Data Governance, Sovereignty, and Reciprocity
Topics
Data governance | Human rights and the ethical dimensions of the information society
International Cooperation and Norm‑Setting (France‑India)
Explanation
Martin highlights the bilateral dialogue on AI and culture, asking how France and India can jointly shape global norms that respect cultural diversity.
Evidence
“Anne, curious, in the wake of President Macron state visit and the bilateral relationship between France and India what do you both see as opportunities for France and India to jointly work on these global norms, global approaches for a more contextual approach to artificial intelligence, for a culturally inclusive approach to AI?” [144].
Major discussion point
International Cooperation and Norm‑Setting (France‑India)
Topics
International cooperation (covered under Artificial intelligence) | Data governance
Abhishek Singh
Speech speed
182 words per minute
Speech length
2010 words
Speech time
659 seconds
Democratizing AI for All Languages and Cultures
Explanation
Abhishek stresses that AI must be hackable, privacy‑preserving, multilingual and that democratizing AI empowers users lacking language‑tech tools.
Evidence
“Democratizing use of AI and ultimately making AI work for all” [6]. “It’s hackable, it’s privacy preserving, it’s multilingual” [11]. “I think if AI has to be like covering all aspects, then it has to be rooted into data sets that are diverse and data sets when you talk about in any cultural context, it will include not only languages but it will also include the culture, the heritage, the music, the movies, the songs and lots of folklore” [9].
Major discussion point
Vision for Personal, Local, Multilingual AI
Topics
Closing all digital divides | Social and economic development
Open‑Source Challenge Announcement and Collaborative Build
Explanation
He announces the India AI Innovation Challenge, describing it as an open‑source hardware competition with substantial prizes and joint support from Bhashini and Current AI.
Evidence
“And in partnership with Current AI and Bhashini, in fact it’s my honor and privilege to announce the India AI Innovation Challenge” [32]. “The 25th Feb on our website will launch the challenge on which applications can be submitted and there is some time to build the actual device and those who will win will get a very handsome reward that will be funded both by Bhashini and Current AI” [41]. “And this prototype will be available in an open source manner for everyone to hack it and make it smaller, you can make it more sleeker, you can solve individual use cases for different sectors and it’s based on an open source software and hardware design” [51].
Major discussion point
Launch of the India AI Innovation Challenge
Topics
Financial mechanisms | Artificial intelligence
Data Sovereignty and Full Stack Control
Explanation
Abhishek argues that true AI sovereignty means having control over the entire stack—from chips to applications—so that nations and communities are not dependent on external providers.
Evidence
“AI sovereignty means full control over the entire stack—from chips to applications” [11] (a paraphrase, supported by the following): “So maybe in the short term, it’s a very interesting question because when it was about sharing of data… we should have complete control over the entire AI stack” [116]. “We should have complete control over it” [119]. “I feel like sovereignty of course traditionally it’s a science concept wherein it seems that nations which are sovereign need to have complete control over what they do how they do with the entire control of the decisions” [117].
Major discussion point
Data Governance, Sovereignty, and Reciprocity
Topics
Data governance | Artificial intelligence
International Partnership with France for Sustainable AI
Explanation
He highlights the longstanding India‑France partnership, noting joint research, business, and policy collaboration to build culturally inclusive AI solutions.
Evidence
“I kind of echo her and in fact the partnership between India and France have been there for quite some time… we have launched a year of innovation and many more activities have been announced by President Macron and our Prime Minister” [145]. “we are looking forward to joining you at the World Tech… partnership at the university level, partnership at the research level, partnership at the business level, partnership at the government level” [145].
Major discussion point
International Cooperation and Norm‑Setting (France‑India)
Topics
Artificial intelligence | The enabling environment for digital development
Anne Bouverot
Speech speed
157 words per minute
Speech length
1107 words
Speech time
422 seconds
Cultural Representation in AI
Explanation
Anne stresses that AI must reflect many cultures rather than a single tech‑centric monoculture, and that preserving cultural data safeguards diversity.
Evidence
“It’s not just about being able to have access to a French AI or an Indian AI” [7]. “So it is in the interest of a cultural group, of a civilization, that in the world of AI this culture is represented” [23]. “Part of the reason why you want to share cultural data is so that cultures are preserved and you don’t end up with one or two or three cultures in the world, but something that is more diverse” [97].
Major discussion point
Vision of AI that reflects many cultures rather than a single tech‑centric monoculture
Topics
Social and economic development | Human rights and the ethical dimensions of the information society
Data Governance, Artists’ Rights and Reciprocity
Explanation
She discusses the need for mechanisms that protect artists’ rights, allow opt‑out, and ensure that data sharing benefits creators while preserving privacy.
Evidence
“And so you need to find some ways to share data in a platform or in a way that you have trust into” [108]. “And maybe that’s, as Abhishek was saying, maybe the example of health data is a good one there because for cultural data, you want the general benefit and you want to preserve artists’ rights” [110]. “And so you have this tension between artists, for example, who say, well, I want my rights to be maintained and I want some type of compensation” [111]. “And at the same time, you can certainly have historical information and things that are not so subject to maybe having remuneration for living artists be part of the general cultural data that you use to train AI” [112]. “And many ways we can navigate that is to have a right of opposition by specific artists so that they can say, no, my data, my creations are not going to be used” [113].
Major discussion point
Data Governance, Sovereignty, and Reciprocity
Topics
Data governance | Human rights and the ethical dimensions of the information society
France‑India Joint Innovation for Sustainable AI
Explanation
Anne highlights the joint year of innovation between France and India, aiming to build alternative, sustainable AI solutions that are culturally inclusive.
Evidence
“Well, I’ll try to be short, but this is the year of joint innovation between India and France” [142]. “I really think we can and we will jointly build alternative solutions between France and India” [143]. “Working on AI that is resilient and sustainable by design… is clearly a priority” [146].
Major discussion point
International Cooperation and Norm‑Setting (France‑India)
Topics
Artificial intelligence | The enabling environment for digital development
Announcer
Speech speed
152 words per minute
Speech length
40 words
Speech time
15 seconds
Launch of the India AI Innovation Challenge
Explanation
The announcer cues the audience for the opening of the challenge, emphasizing its open‑source nature and the call for community imagination.
Evidence
“Please” [140]. “And Aya, we’d love to have you on to launch the Global Innovation Challenge in the spirit of what Anne said” [153]. “the point of it is to kind of like expand imagination and start this conversation about making your own AI and start the conversation about AI being personal and multilingual and solving communities and individuals own problems… this is the beginning of the journey” [154].
Major discussion point
Launch of the India AI Innovation Challenge
Topics
Financial mechanisms | Artificial intelligence
Device
Speech speed
113 words per minute
Speech length
11 words
Speech time
5 seconds
Informal Presentation Environment
Explanation
The presence of candy wrappers on the table during the device showcase indicates a relaxed, everyday setting that makes the technology feel approachable. This informal context supports the broader goal of presenting AI hardware as accessible and user‑friendly for diverse audiences.
Evidence
“The table has candy wrappers of Twix, Milky Way, and KitKat.” [1].
Major discussion point
Device Presentation Context
Topics
Closing all digital divides | Artificial intelligence
Agreements
Agreement points
Open source approach to AI development enables democratization and community innovation
Speakers
– Sushant Kumar
– Andrew Tergis
– Ayah Bdeir
– Abhishek Singh
Arguments
Collaboration between Bhashini and Current AI created an open source multilingual handheld device that works offline with zero connectivity
Device designed as open platform for any user to connect, write applications, and run inference locally
Current AI was born from Paris AI Action Summit as public-private partnership to create AI for public interest at scale
Launch of India AI Innovation Challenge offering open source hardware/software platform for developers to build solutions with substantial prizes
Summary
All speakers strongly support open source AI development as a means to democratize technology access and enable community-driven innovation, contrasting with proprietary big tech approaches
Topics
Artificial intelligence | The enabling environment for digital development | Capacity development
Multilingual AI is essential for preserving cultural diversity and ensuring no language is left behind
Speakers
– Amitabh Nag
– Ayah Bdeir
– Anne Bouverot
– Abhishek Singh
Arguments
Personal experience with linguistic challenges in school motivated work on preserving mother tongues and reducing language barriers
Technology has caused loss of native language use, with Arabic speakers communicating in English online due to poor voice recognition
AI should preserve cultural diversity rather than forcing users into Silicon Valley or Shanghai cultural frameworks
Ultimate goal is enabling farmers who only know local languages to access internet services through voice interaction
Summary
There is unanimous agreement that AI must support linguistic diversity to preserve cultures and ensure equitable access to technology for all language communities
Topics
Closing all digital divides | Social and economic development | Human rights and the ethical dimensions of the information society
Local, offline AI processing is crucial for accessibility and privacy
Speakers
– Sushant Kumar
– Andrew Tergis
– Amitabh Nag
– Ayah Bdeir
Arguments
Collaboration between Bhashini and Current AI created an open source multilingual handheld device that works offline with zero connectivity
Device runs multiple AI models locally including ASR, neural machine translation, LLM, and text-to-speech across 22+ languages
Focus on smaller form factors, offline capability, broader language coverage, and continuous model enrichment for depth and breadth
Major concern about embodied AI devices from big tech companies entering personal spaces with unknown training and data practices
Summary
All speakers agree that local processing capabilities are essential for ensuring privacy, accessibility in areas with poor connectivity, and user control over their data
Topics
Artificial intelligence | Building confidence and security in the use of ICTs | Closing all digital divides
International collaboration is necessary for developing alternative AI solutions
Speakers
– Anne Bouverot
– Abhishek Singh
– Ayah Bdeir
Arguments
Collaboration includes joint research on resilient, sustainable AI and multilingual AI development
France and India have complementary strengths and can jointly build alternative AI solutions as trusted partners
Current AI was born from Paris AI Action Summit as public-private partnership to create AI for public interest at scale
Summary
Speakers agree that no single country can achieve AI sovereignty alone and that international partnerships are essential for creating viable alternatives to dominant tech companies
Topics
Artificial intelligence | The enabling environment for digital development | Information and communication technologies for development
Similar viewpoints
Both speakers share personal experiences of how inadequate language support in technology has negatively impacted their ability to use their native languages, driving their commitment to multilingual AI
Speakers
– Amitabh Nag
– Ayah Bdeir
Arguments
Personal experience with linguistic challenges in school motivated work on preserving mother tongues and reducing language barriers
Technology has caused loss of native language use, with Arabic speakers communicating in English online due to poor voice recognition
Topics
Closing all digital divides | Human rights and the ethical dimensions of the information society
Both speakers advocate for nuanced approaches to data governance that distinguish between public benefit and commercial exploitation, emphasizing the need for trusted intermediaries and context-specific decisions
Speakers
– Abhishek Singh
– Anne Bouverot
Arguments
Data sharing decisions should be context-specific based on whether they serve public interest versus private commercial interests
Need for privacy-preserving techniques and trusted third parties to balance individual rights with collective cultural preservation benefits
Topics
Data governance | Human rights and the ethical dimensions of the information society
Both speakers recognize the critical importance of controlling foundational technology layers to prevent monopolistic control while balancing openness with appropriate governance
Speakers
– Ayah Bdeir
– Martin Tisne
Arguments
Hardware represents the first point of technological lock-in, similar to how iPhone controlled innovation through APIs and developer dependency
Balance needed between open source AI development and controlled data governance approaches
Topics
Artificial intelligence | The enabling environment for digital development | Building confidence and security in the use of ICTs
Unexpected consensus
No country has complete AI sovereignty
Speakers
– Abhishek Singh
– Anne Bouverot
Arguments
AI sovereignty requires complete control over energy, data centers, infrastructure, chips, models, and applications – no country currently has full control
Collaboration includes joint research on resilient, sustainable AI and multilingual AI development
Explanation
It’s unexpected that government representatives would openly acknowledge their countries’ limitations in AI sovereignty, but this honest assessment enables more realistic collaborative approaches
Topics
Artificial intelligence | The enabling environment for digital development
Technical achievements in model quantization without accuracy loss
Speakers
– Shalindra Pal Singh
– Andrew Tergis
Arguments
Models were quantized to fit on device without accuracy loss, representing significant technical achievement
Device runs multiple AI models locally including ASR, neural machine translation, LLM, and text-to-speech across 22+ languages
Explanation
The consensus on achieving quantization without accuracy loss is unexpected given the typical trade-offs in AI model compression, representing a significant technical breakthrough
Topics
Artificial intelligence | Information and communication technologies for development
Rapid development timeline for complex multilingual AI device
Speakers
– Ayah Bdeir
– Sushant Kumar
Arguments
Challenge aims to expand imagination about personal, multilingual AI solving community-specific problems
Collaboration between Bhashini and Current AI created an open source multilingual handheld device that works offline with zero connectivity
Explanation
The consensus that a complex multilingual AI device could be developed in just 5-6 weeks through international collaboration is unexpected and demonstrates the potential for rapid innovation when organizations align on shared goals
Topics
Artificial intelligence | Capacity development | The enabling environment for digital development
Overall assessment
Summary
There is remarkably strong consensus among all speakers on the need for open, multilingual, locally-processing AI that serves public interest rather than commercial monopolies. Key areas of agreement include the importance of preserving linguistic and cultural diversity, the necessity of offline capabilities for true accessibility, the value of open source approaches for democratization, and the need for international collaboration to create viable alternatives to big tech dominance.
Consensus level
Very high consensus with no significant disagreements identified. This strong alignment suggests a mature understanding of the challenges and a shared vision for solutions. The implications are positive for advancing collaborative, inclusive AI development that serves diverse global communities rather than reinforcing existing digital divides and cultural homogenization.
Differences
Different viewpoints
Approach to data sharing governance – universal rules vs context-specific decisions
Speakers
– Abhishek Singh
– Anne Bouverot
Arguments
Data sharing decisions should be context-specific based on whether they serve public interest versus private commercial interests
Need for privacy-preserving techniques and trusted third parties to balance individual rights with collective cultural preservation benefits
Summary
Singh advocates for case-by-case evaluation based on public vs private interest, while Bouverot emphasizes the need for trusted intermediaries and systematic privacy-preserving mechanisms
Topics
Data governance | Human rights and the ethical dimensions of the information society
Definition and scope of AI sovereignty
Speakers
– Abhishek Singh
– Anne Bouverot
Arguments
AI sovereignty requires complete control over energy, data centers, infrastructure, chips, models, and applications – no country currently has full control
Collaboration includes joint research on resilient, sustainable AI and multilingual AI development
Summary
Singh defines sovereignty as complete control over all AI stack layers, while Bouverot emphasizes collaborative approaches and building alternative solutions through partnerships
Topics
Artificial intelligence | The enabling environment for digital development | Building confidence and security in the use of ICTs
Unexpected differences
Role of regulation vs market mechanisms in cultural preservation
Speakers
– Anne Bouverot
– Martin Tisne
Arguments
AI should preserve cultural diversity rather than forcing users into Silicon Valley or Shanghai cultural frameworks
Ensuring AI doesn’t squash cultural diversity into monocultures is one of the most important questions today
Explanation
While both agree on the importance of cultural preservation, Tisne questions whether regulatory mechanisms (like French quotas for music/films) should apply to AI, while Bouverot is more cautious about mandating specific norms, preferring funding mechanisms
Topics
Social and economic development | Human rights and the ethical dimensions of the information society | The enabling environment for digital development
Overall assessment
Summary
The discussion revealed subtle but important differences in approach to AI governance, data sovereignty, and cultural preservation. While all speakers shared common goals of democratizing AI and preserving cultural diversity, they differed on implementation strategies.
Disagreement level
Low to moderate disagreement level. The speakers largely aligned on objectives but showed different philosophical approaches – some favoring more structured/regulatory approaches while others emphasized collaborative and context-specific solutions. These differences reflect broader tensions in global AI governance between sovereignty and collaboration, individual rights and collective benefits, and open vs controlled development models.
Partial agreements
Partial agreements
Both agree on the need to balance individual data rights with collective benefits, but Bouverot focuses on trusted intermediaries while Tisne emphasizes the tension between open source and controlled governance
Speakers
– Anne Bouverot
– Martin Tisne
Arguments
Need for privacy-preserving techniques and trusted third parties to balance individual rights with collective cultural preservation benefits
Balance needed between open source AI development and controlled data governance approaches
Topics
Data governance | Human rights and the ethical dimensions of the information society
Both are concerned about big tech control and want to serve public interest, but Bdeir focuses on hardware lock-in while Singh focuses on data governance frameworks
Speakers
– Ayah Bdeir
– Abhishek Singh
Arguments
Major concern about embodied AI devices from big tech companies entering personal spaces with unknown training and data practices
Data sharing decisions should be context-specific based on whether they serve public interest versus private commercial interests
Topics
Artificial intelligence | Building confidence and security in the use of ICTs | Data governance
Similar viewpoints
Both speakers share personal experiences of how inadequate language support in technology has negatively impacted their ability to use their native languages, driving their commitment to multilingual AI
Speakers
– Amitabh Nag
– Ayah Bdeir
Arguments
Personal experience with linguistic challenges in school motivated work on preserving mother tongues and reducing language barriers
Technology has caused loss of native language use, with Arabic speakers communicating in English online due to poor voice recognition
Topics
Closing all digital divides | Human rights and the ethical dimensions of the information society
Both speakers advocate for nuanced approaches to data governance that distinguish between public benefit and commercial exploitation, emphasizing the need for trusted intermediaries and context-specific decisions
Speakers
– Abhishek Singh
– Anne Bouverot
Arguments
Data sharing decisions should be context-specific based on whether they serve public interest versus private commercial interests
Need for privacy-preserving techniques and trusted third parties to balance individual rights with collective cultural preservation benefits
Topics
Data governance | Human rights and the ethical dimensions of the information society
Both speakers recognize the critical importance of controlling foundational technology layers to prevent monopolistic control while balancing openness with appropriate governance
Speakers
– Ayah Bdeir
– Martin Tisne
Arguments
Hardware represents the first point of technological lock-in, similar to how iPhone controlled innovation through APIs and developer dependency
Balance needed between open source AI development and controlled data governance approaches
Topics
Artificial intelligence | The enabling environment for digital development | Building confidence and security in the use of ICTs
Takeaways
Key takeaways
Successfully demonstrated first open-source multilingual AI hardware device that works offline, running multiple AI models locally across 22+ languages without connectivity requirements
AI democratization requires moving beyond Silicon Valley/big tech monocultures to preserve linguistic and cultural diversity globally
No country currently has complete AI sovereignty across all five layers (energy, data centers, infrastructure, chips, models, applications), making international collaboration essential
Data sharing and sovereignty decisions must be context-specific, balancing individual privacy rights with collective benefits for cultural preservation and public interest
Hardware represents the critical first point of technological lock-in, making open-source hardware platforms essential to prevent big tech monopolization of AI infrastructure
Cultural preservation in AI requires incorporating traditional knowledge, folklore, and diverse datasets beyond just language translation
France-India partnership demonstrates how countries with complementary strengths can jointly build alternative AI solutions as trusted partners
Resolutions and action items
Launch India AI Innovation Challenge with submissions opening February 25th, offering substantial prizes funded by both Bhashini and Current AI
Make the multilingual AI device prototype available as open-source hardware and software platform for developers to hack and build upon
Bhashini committed to provide ongoing technical support including quantization mechanisms and model enrichment for challenge participants
Continue expanding language coverage beyond current 22 languages, including digitizing tribal languages like Bheeli that lack scripts
Develop smaller form factors, improve battery life, and create mesh network capabilities for distributed inference
France and India to continue joint collaboration on AI research, innovation, and building alternative solutions through multiple partnership levels
Unresolved issues
How to balance individual artist/creator rights and compensation with collective cultural preservation needs when using cultural data for AI training
Specific mechanisms for ensuring communities have agency over how their cultural data is used while still enabling cultural preservation
Technical challenges of achieving complete AI sovereignty given current global supply chain dependencies, particularly in chip manufacturing
Scalability questions about moving from prototype to mass production while maintaining affordability and accessibility
Governance frameworks for managing embodied AI devices entering personal spaces, particularly regarding facial recognition and continuous data collection
Long-term sustainability models for maintaining and updating open-source AI hardware platforms
Suggested compromises
Implement right of opposition for individual artists to opt out of AI training while allowing historical and collective cultural data to be used
Use privacy-preserving techniques and trusted third-party institutions to manage data sharing decisions on behalf of communities
Focus on achieving practical sovereignty through choice and control over procurement and usage rather than complete technological independence
Develop context-specific data sharing frameworks that distinguish between public interest uses (like health research) versus commercial exploitation (like insurance discrimination)
Create tiered approach to cultural data preservation using both individual consent mechanisms and collective community governance structures
Thought provoking comments
So for such people, if they are able to talk to the device, put their query without internet or bandwidth or connectivity and get a reply back, that will be empowering. And that’s, I think, the ultimate objective of this summit also: democratizing the use of AI and ultimately making AI work for all.
Speaker
Abhishek Singh
Reason
This comment reframes the entire AI discussion from a technical achievement to a fundamental question of human dignity and access. Singh moves beyond the usual metrics of AI success (accuracy, speed, efficiency) to focus on empowerment of marginalized communities who are typically excluded from technological advancement.
Impact
This comment established the moral and philosophical foundation for the entire discussion, shifting the conversation from ‘how can we build better AI?’ to ‘how can we ensure AI serves those who need it most?’ It influenced subsequent speakers to consistently return to themes of inclusion, cultural preservation, and community benefit rather than just technical capabilities.
I’m concerned about this new frontier of embodied AI… we have these devices. We don’t know how they work. They’re continuously recording our data, sending it out to the cloud… hardware is where the lock-in first starts… you start to build a whole stack on a core piece of hardware that you do not control.
Speaker
Ayah Bdeir
Reason
This is a profound strategic insight that connects hardware control to cultural sovereignty. Bdeir identifies that the battle for AI independence isn’t just about software or models—it starts at the hardware level. This comment reveals how tech companies use hardware as a trojan horse to capture entire ecosystems.
Impact
This comment fundamentally reframed the discussion about open-source AI, showing that without open hardware, open software is meaningless. It elevated the conversation from technical specifications to geopolitical strategy, influencing the subsequent discussion about sovereignty and the need for alternative infrastructure.
She says that this tree grows in my local forest around where I live and I know that this worm eats only leaves which are dying. In a way, it helps the plants. It’s not a pest… having this traditional knowledge built into the corpus of data sets on which we train AI models will be very, very vital if we have to ensure that AI doesn’t hassle and hallucinate.
Speaker
Abhishek Singh
Reason
This anecdote brilliantly illustrates the epistemological crisis in AI—how Western-centric training data can literally misclassify reality for other cultures. It shows that ‘accuracy’ is not universal but culturally situated, and that AI trained without diverse knowledge systems will systematically misunderstand the world.
Impact
This story became a powerful metaphor that anchored the entire discussion about cultural data and local knowledge. It provided concrete evidence for why multilingual and multicultural AI isn’t just nice-to-have but essential for accuracy, shifting the conversation from cultural preservation as a social good to cultural inclusion as a technical necessity.
If I’m interested in music and if I come from a particular area in France, well, I’d like to be able to have that community and its culture represented there… It’s not just about being able to have access to a French AI or an Indian AI. It’s even more than that.
Speaker
Anne Bouverot
Reason
Bouverot articulates a sophisticated understanding of cultural granularity that goes beyond national boundaries to community-level representation. This challenges the typical nation-state framework for thinking about AI sovereignty and pushes toward a more nuanced, community-based approach to cultural preservation.
Impact
This comment deepened the discussion by introducing the complexity of sub-national cultural identities, leading to more sophisticated conversations about data sovereignty, community rights, and the tension between individual artist rights and collective cultural preservation.
Part of the reason why you want to share cultural data is so that cultures are preserved and you don’t end up with one or two or three cultures in the world… But at the same time, creators are saying, I don’t want my data to be used if I don’t have a mode of being compensated or recognized… So I’m not sure I have a solution, but I very clearly see the tension.
Speaker
Anne Bouverot
Reason
This comment honestly acknowledges one of the most complex challenges in AI governance—the tension between collective cultural preservation and individual creator rights. Rather than offering false solutions, Bouverot maps the genuine complexity of the problem, which is more valuable than simplistic answers.
Impact
This admission of complexity elevated the sophistication of the entire discussion, moving it away from easy answers toward a more mature engagement with the genuine trade-offs involved in cultural AI. It influenced subsequent speakers to also acknowledge uncertainties and focus on frameworks rather than definitive solutions.
Nobody else should decide to make decisions on my behalf… In the short term, if I can decide which chip I want to use, how I want to use it, how I procure it, rather than being subject to conditionality or being forced into something, that will be sovereignty.
Speaker
Abhishek Singh
Reason
Singh provides a pragmatic definition of sovereignty that moves beyond abstract concepts to concrete decision-making power. He acknowledges that complete technological independence may be impossible while defining sovereignty as having genuine choices and agency in technological decisions.
Impact
This redefinition of sovereignty as ‘choice rather than complete independence’ provided a practical framework that influenced the final discussion about France-India collaboration, showing how partnerships can enhance rather than compromise sovereignty when they increase rather than decrease options.
Overall assessment
These key comments transformed what could have been a routine tech product launch into a profound discussion about power, culture, and human agency in the AI age. The speakers consistently elevated technical discussions to questions of justice, sovereignty, and cultural survival. The conversation moved through several sophisticated layers—from individual empowerment to community preservation to geopolitical strategy—with each insight building on previous ones. The honest acknowledgment of complexity and trade-offs, rather than offering false certainties, gave the discussion intellectual integrity and practical relevance. The result was a conversation that connected immediate technical choices to long-term civilizational questions, showing how hardware design decisions today will shape cultural diversity and human agency for generations.
Follow-up questions
How can we develop smaller form factors for the AI device while maintaining functionality?
Speaker
Amitabh Nag
Explanation
This is important for improving portability and accessibility, especially for last-mile delivery and use in remote locations where smaller devices would be more practical.
How can we digitize and incorporate the 16 lakh (1.6 million) place names identified by the Survey of India into the language models?
Speaker
Amitabh Nag
Explanation
This is crucial for improving the depth and accuracy of language models, particularly for location-specific queries and cultural context preservation.
How can we capture and digitize undocumented traditional knowledge and folklore from rural and tribal communities?
Speaker
Abhishek Singh
Explanation
This is essential for creating comprehensive AI datasets that include diverse cultural contexts and traditional knowledge systems that are currently not available in digital formats.
What are the optimal privacy-preserving techniques and anonymization methods for sharing cultural and health data?
Speaker
Abhishek Singh
Explanation
This is critical for balancing the need to share data for public benefit while protecting individual privacy and preventing misuse by commercial entities.
How can we establish community standards rooted in local culture and belief systems for data sharing?
Speaker
Abhishek Singh
Explanation
This is important to ensure that data sharing practices respect cultural values and community consent, preventing exploitation while enabling beneficial AI development.
How can we balance artists’ rights and compensation with the need for cultural representation in AI models?
Speaker
Anne Bouverot
Explanation
This addresses the tension between protecting creators’ intellectual property rights and ensuring diverse cultural representation in AI systems.
What mechanisms can be developed to give communities agency over how their cultural data is used in AI systems?
Speaker
Martin Tisne
Explanation
This is crucial for implementing indigenous data sovereignty principles and ensuring communities have control over their cultural heritage in AI applications.
How can India develop its own chip manufacturing capabilities to achieve complete AI stack sovereignty?
Speaker
Abhishek Singh
Explanation
This is important for India’s technological independence and reducing dependence on foreign chip suppliers for AI infrastructure.
What are the specific technical requirements and standards for the India AI Innovation Challenge submissions?
Speaker
Implied from challenge announcement
Explanation
This is necessary for participants to understand the criteria and technical specifications for developing solutions based on the open-source prototype.
How can mesh networks of multiple AI devices be implemented for distributed inference?
Speaker
Ayah Bdeir
Explanation
This could enable more powerful AI capabilities by connecting multiple devices together, expanding the potential applications and processing power.
Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.