Inclusive AI: Why Linguistic Diversity Matters
20 Feb 2026 15:00h - 16:00h
Summary
The session opened with Sushant Kumar framing the discussion around “personal, local and multilingual AI” and announcing a joint effort between Bhashini and Current AI to showcase an open-source, multilingual, handheld device that preserves privacy and works without connectivity [1-4]. After a short video, Ayah Bdeir introduced the demo team, noting that the prototype was built in about five weeks through a close partnership between Current AI engineers and Bhashini’s model team [34-38]. Andrew Tergis explained that the device can run a full pipeline (automatic speech recognition, neural machine translation, a large language model, and text-to-speech) entirely on-device, demonstrated with a Hindi query answered in the user’s language [53-62]. Shalindra Pal Singh added that the models were heavily quantized to fit on the hardware without sacrificing accuracy, and that four to five models are currently operational offline on a Jetson platform [70-72][88-90][98-101].
Amitabh Nag described Bhashini’s origin in 2023, motivated by the difficulty of using non-native languages in school and the need to preserve linguistic nuance, leading to the creation of a corpus for many Indian languages [108-110][118-119]. He noted that the team now serves about 15 million inferences daily on a 200-GPU system and is expanding language coverage to 36 languages, including recently digitised tribal languages such as Bheeli [121-128][170-176]. Looking ahead, Nag highlighted four pillars for inclusive AI: a small offline form factor, broader language breadth, deeper model enrichment (e.g., adding place-name glossaries), and continuous contextualisation of data [163-176][180-186].
Ayah Bdeir expressed concern that embodied AI devices from large tech firms are opaque, often trained on Western languages, and could lock users into proprietary stacks, whereas open-source hardware can democratise innovation in the way Linux did [194-210]. She outlined hopeful trajectories, including cheaper, smaller devices, mesh networking for distributed inference, and specialised applications such as agricultural assistants or privacy-preserving toys [215-224][229-232]. The panel also debated cultural data sovereignty, with participants stressing the need for community involvement, consent, and reciprocity when data is used for AI, especially for health or tribal knowledge [311-318][326-332].
Anne Bouverot suggested that policy mechanisms, like France’s quotas for French content in media, could fund and protect cultural creation while supporting AI development [263-270]. The discussion concluded with the announcement of the India AI Innovation Challenge, an open-source competition to build on the prototype, offering prize funding from both Bhashini and Current AI and encouraging global collaboration, including with France [409-422][391-398]. Sushant’s closing remarks underscored that open, offline, multilingual AI hardware, coupled with collaborative governance of data and cultural assets, can empower diverse communities and drive inclusive AI innovation [233-236].
Keypoints
Major discussion points
– Launch of a personal, local, multilingual AI hardware prototype – The session introduced an open-source, handheld device that runs AI offline, preserves privacy and supports many languages, followed by a live demo showing speech-to-text, translation, LLM inference and text-to-speech all on-device [4-9][52-58][88-91].
– Collaboration model between Bhashini and Current AI – The project was built in a six-week partnership that emphasizes co-creation, open-source release as a public good, and a repeatable “identify-gap → co-develop → open-source” workflow [34-42][135-142].
– Multilingual and cultural inclusion as a core goal – Speakers highlighted the need to serve mother tongues and tribal languages (e.g., Bheeli), to avoid linguistic bias, and to preserve cultural nuance; the device currently covers 22 spoken languages and aims to expand to 36 [108-112][163-176]. Concerns were raised about “embodied AI” that is centrally controlled and trained on Western languages, while hope was expressed that open hardware can democratise access and enable diverse use-cases [194-206][214-222].
– Future visions and application scenarios – The prototype can power vision-impaired assistants, agricultural tools, tourism guides, and can be networked in a mesh or scaled to a micro-data-center; the hardware is platform-agnostic (Jetson-based now but portable to other chips) and is intended to spark endless community-driven innovations [58-62][215-232][89-90].
– Data sovereignty, reciprocity and governance – The panel debated who owns the data used to train models, the need for community-level standards, artist compensation, and the broader concept of AI sovereignty at individual, community and national levels, arguing for privacy-preserving, trusted third-party mechanisms and a “complete sovereign AI stack” [299-307][314-324][368-387].
Overall purpose / goal
The discussion aimed to showcase a tangible open-source AI device that makes advanced, multilingual AI accessible offline, to illustrate how cross-sector collaboration (Bhashini, Current AI, Kalpa Impact) can accelerate such public-good technology, and to launch the India AI Innovation Challenge that invites developers to build on the prototype for real-world, culturally-relevant solutions.
Overall tone
The conversation began with enthusiastic optimism about “making AI work for everyone” and celebrating the prototype’s capabilities. It shifted to reflective, personal anecdotes about language loss, then to cautious concern over centralized, embodied AI and data ownership. Throughout, the tone remained collaborative and hopeful, ending on a forward-looking, rally-the-community call-to-action as the speakers announced the innovation challenge and future partnerships.
Speakers
– Sushant Kumar
– Areas of expertise: AI moderation, multilingual AI, public-interest AI
– Role: Session moderator / host
– Title: –
– Sources: [S1]
– Announcer
– Areas of expertise: –
– Role: Event announcer / moderator
– Title: –
– Anne Bouverot
– Areas of expertise: AI policy, digital diplomacy, telecommunications
– Role: Special Envoy for Artificial Intelligence, France; Chair of the board of École Normale Supérieure
– Title: Former Director General of the GSMA
– Sources: [S6]
– Ayah Bdeir
– Areas of expertise: Open-source hardware, multilingual AI, entrepreneurship
– Role: CEO of Current AI; Engineer & entrepreneur with 20 years experience building open-source tech infrastructure
– Title: –
– Sources: [S8]
– Amitabh Nag
– Areas of expertise: Linguistic AI, multilingual language models, large-scale inference systems
– Role: CEO of Bhashini
– Title: –
– Sources: [S9]
– Martin Tisne
– Areas of expertise: AI governance, democratic AI values, collaborative AI development
– Role: Chair of Current AI; Lead of the AI Collaborative organization
– Title: –
– Shalindra Pal Singh
– Areas of expertise: Integration of multilingual models, AI hardware-software co-design
– Role: General Manager at Bhashini; collaborator on device integration
– Title: –
– Sources: [S15]
– Abhishek Singh
– Areas of expertise: Public-interest AI policy, digital sovereignty, government-industry collaboration
– Role: Under-Secretary, Ministry of Electronics and Information Technology (India)
– Title: –
– Sources: [S16]
– Device
– Areas of expertise: On-device AI inference, multimodal processing (ASR, MMT, LLM, TTS)
– Role: AI hardware prototype that responded to queries
– Title: –
– Sources: – (information from transcript)
– Andrew Tergis
– Areas of expertise: Embedded AI engineering, hardware prototyping, model quantization
– Role: Lead engineer on the Current AI side of the project
– Title: –
– Sources: [S22]
The session opened with Sushant Kumar asking how a paradigm can be built so that artificial intelligence works for everyone, stating that this was the purpose of the gathering [1-2]. He introduced the theme “The case for personal, local and multilingual AI” and announced a joint effort between Bhashini and Current AI, coordinated by Kalpa Impact, to showcase a “seminal open-source AI hardware device” that is multilingual, handheld, privacy-preserving and capable of operating without connectivity [3-5]. After a brief outline of the agenda, he promised a video that would “capture our imagination of what this product would look like” and a live demonstration by the makers [6-12]. The video underscored that India’s AI journey has moved beyond pilots to “population reach, clear use cases, last-mile delivery” and a vision of AI that is not governed by any single country or corporation [13-20].
Following the video, Sushant invited Ayah Bdeir, CEO of Current AI, to lead the product demonstration [22-24]. Ayah briefly paused the session to organise a group photo and then introduced the demo team: Andrew Tergis, the lead engineer from Current AI, and Shalindra Pal Singh, a general manager at Bhashini who had worked closely on integrating Bhashini’s models [30-33]. She highlighted that the prototype had been built in a remarkably short six-week (actually closer to five-week) sprint, a timeline made possible by the pre-existing partnership discussions and her admiration for Bhashini’s work on linguistic diversity and its 250 models [34-38]. Ayah framed the collaboration as a model of “identify-gap → co-develop → open-source” that results in a public-good, open AI stack, emphasizing that Current AI seeks to learn partners’ priorities and release jointly built technology openly [70-78].
Andrew then described the prototype as an “open AI inference device” that differs from other conference products by being deliberately general-purpose: any user can connect, upload models and run inference locally on a handheld unit [53-56]. He demonstrated a flagship application co-created with Bosch for vision-impaired users, where a button press triggers a spoken question in the user’s native language; the device captures audio, transcribes it via automatic speech recognition (ASR), translates it to English, feeds the text and an image to a large language model (LLM), translates the answer back, and finally synthesises speech in the original language [58-66]. The entire pipeline of ASR, neural machine translation (MMT), LLM inference and text-to-speech (TTS) runs on-device, illustrating the feasibility of full-stack offline AI [55-62].
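The flow Andrew describes (ASR, translation, LLM, back-translation, TTS, all local) can be sketched as a short chain of calls. A minimal Python sketch, in which the four model objects and their method names are illustrative placeholders, not Bhashini’s actual APIs:

```python
# Sketch of the on-device assistant pipeline: ASR -> MT -> LLM -> MT -> TTS,
# every stage running locally with no cloud connection. The model objects are
# duck-typed stand-ins; only the data flow is meant to mirror the demo.

def answer_query(audio, image, asr, mt, llm, tts, lang="hi"):
    """Speech in, speech out, with all inference on the device itself."""
    text_native = asr.transcribe(audio, lang=lang)                # audio -> native text
    text_en = mt.translate(text_native, src=lang, tgt="en")       # native -> English
    answer_en = llm.generate(prompt=text_en, image=image)         # multimodal inference
    answer_native = mt.translate(answer_en, src="en", tgt=lang)   # English -> native
    return tts.synthesize(answer_native, lang=lang)               # native text -> audio
```

Pivoting through English lets a single LLM serve every supported input language, at the cost of a second translation pass, which matches the ASR, MMT, LLM, MMT, TTS ordering described in the demo.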
During the live test, a Hindi query was processed: the ASR model converted the spoken input to text, the MMT translated it, the embedded LLM generated a response, and the TTS module vocalised the answer, all without any cloud connection [70-72][79-82]. In a second demonstration, an English query “What is on this table?” returned the answer “The table has candy wrappers of Twix, Milky Way, and KitKat,” showing the device’s ability to recognise objects and produce brand-level details [84-86]. Shalindra explained that the models had been heavily quantised to fit the limited hardware, yet the optimisation “reached a point where there is no hit on the accuracy fronts” [71-72]. The prototype currently runs on an NVIDIA Jetson platform but is designed to be processor-agnostic, allowing future deployments on alternative chips while supporting the deployment of any model the community wishes to use [88-91].
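Quantisation of the kind Shalindra describes maps float32 weights onto a coarser integer grid. A toy sketch of symmetric per-tensor int8 post-training quantisation follows; real deployments such as Bhashini’s use more sophisticated per-channel and calibration-based schemes, so this only illustrates the core size/accuracy trade-off:

```python
import numpy as np

def quantize_int8(w: np.ndarray):
    """Symmetric per-tensor quantization: float32 weights -> int8 plus one scale."""
    scale = float(np.abs(w).max()) / 127.0   # largest magnitude maps to +/-127
    q = np.round(w / scale).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float32 weights from the int8 tensor."""
    return q.astype(np.float32) * scale

w = np.random.randn(256, 256).astype(np.float32)
q, scale = quantize_int8(w)

# int8 storage is 4x smaller than float32, and the worst-case rounding error
# of any single weight is bounded by half the quantization step.
print(w.nbytes // q.nbytes)                                         # 4
print(np.abs(w - dequantize(q, scale)).max() <= scale / 2 + 1e-6)   # True
```

The demo’s claim of “no hit on accuracy” corresponds to keeping this per-weight error small enough that end-task quality is unchanged, typically verified empirically after quantisation.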
Sushant praised the demonstration, noting the logistical challenge of clearing the device through customs and the significance of its offline operation, which meant that “all those queries, all the AI processing was happening on the device” [92-100]. He highlighted that four or five models are already operational on that particular device, a notable achievement for edge hardware [100-102].
Amitabh Nag then provided the backstory of Bhashini, explaining that it was founded in 2023 after he experienced the difficulty of learning in non-native languages at school, which motivated a drive to preserve linguistic nuance and create a corpus for Indian languages [108-110][118-120]. He described the early technical hurdles of building models without existing digital data, the reliance on “brute-force” data collection and collaboration with translators to create a digital corpus, and the subsequent scaling to a system that now handles roughly 15 million inferences per day on a 200-GPU cluster, monitored through real-time dashboards [121-131].
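As a back-of-the-envelope check on those figures (the daily total and GPU count are from the talk; spreading load evenly across GPUs is our simplifying assumption):

```python
# Figures quoted in the session; even division across GPUs is an assumption.
inferences_per_day = 15_000_000
gpus = 200
seconds_per_day = 24 * 60 * 60   # 86,400

per_gpu_per_day = inferences_per_day // gpus        # inferences per GPU per day
per_gpu_per_second = per_gpu_per_day / seconds_per_day
print(per_gpu_per_day, round(per_gpu_per_second, 2))   # 75000 0.87
```

That is under one inference per GPU per second on average, a sustainable rate that is consistent with per-inference latency being tracked on real-time dashboards.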
Regarding language coverage, Amitabh reported that the current system supports 22 spoken languages and aims to expand to 36, having already digitised the tribal Bheeli language, which previously lacked a script [170-176]. He outlined four pillars for inclusive AI: (1) a small, offline-first form factor that can reach the last mile; (2) expanding the breadth of language coverage so that no language is left behind; (3) deepening model enrichment, for example by adding place-name glossaries from the Survey of India; and (4) continuous contextualisation of data to improve relevance [163-168][180-186].
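Glossary-based enrichment of the kind described, such as forcing canonical place names into model output, can be prototyped as a simple post-editing pass. A minimal sketch with an invented two-entry glossary (`GLOSSARY` and its variant spellings are illustrative examples, not Survey of India data):

```python
import re

# Hypothetical glossary mapping variant spellings a model might emit to the
# canonical place name; a real glossary would hold lakhs of entries.
GLOSSARY = {
    "bangalore": "Bengaluru",
    "benares": "Varanasi",
}

def apply_glossary(text: str, glossary: dict) -> str:
    """Replace whole-word variants with canonical names, case-insensitively."""
    for variant, canonical in glossary.items():
        text = re.sub(rf"\b{re.escape(variant)}\b", canonical, text,
                      flags=re.IGNORECASE)
    return text

print(apply_glossary("Trains from Bangalore to Benares run daily.", GLOSSARY))
# Trains from Bengaluru to Varanasi run daily.
```

Production systems would more likely inject glossary terms during decoding rather than post-edit, but the dictionary itself is the asset being built.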
Ayah shifted the discussion to broader concerns, warning that the emerging wave of “embodied AI” – glasses, robots, voice assistants – often records continuously, sends data to the cloud, and is trained predominantly on Western languages, thereby creating a hardware lock-in similar to the iPhone’s ecosystem [194-206][210-213]. She argued that open-source hardware can break this lock-in, likening its potential impact to that of Linux, which provides a neutral foundation for community-driven innovation [210-213].
She then outlined hopeful trajectories for the platform: reducing cost, improving battery life, shrinking size, enabling mesh networking of multiple units, scaling to stationary micro-data-centres powered by solar panels, and developing specialised applications such as agricultural assistants, privacy-preserving toys for children, or tourism guides [215-232]. These possibilities are “infinite” once the hardware platform is open and modular [215-226][227-232].
Fireside chat – panel discussion
Martin Tisne opened the panel by introducing Abhishek Singh as “the master and orchestrator of this entire summit” and announcing the launch of Panteo, a framework for culturally-aware data sharing [300-310]. He then set the stage for a conversation on data sovereignty and reciprocity.
Abhishek Singh argued that communities must retain rights over data derived from them and should receive tangible benefits, especially in sectors like agriculture where aggregated data can improve advice, while recognising that health data may require stricter privacy controls [299-307][306-310]. He illustrated the point with a Netflix documentary about tribal women annotating pest data, highlighting how local knowledge can dramatically improve AI outcomes [260-270]. Anne Bouverot echoed the need for trusted third-party institutions that can manage privacy-preserving data sharing, ensuring that cultural creators can opt-out or be compensated, and that data use balances public-interest research with protection against misuse [318-324][330-342]. Martin Tisne highlighted the tension between open-source AI development and the need for controlled governance of cultural datasets, prompting a nuanced debate on reconciling openness with cultural rights [329-332][318-324][330-342].
On AI sovereignty, Abhishek defined it as complete national control over the five layers of the AI stack (energy, data centres, chips, models and applications), asserting that no country should be dependent on external providers for any of these layers [368-373][374-382]. He noted that India already possesses energy sufficiency, data centres, models and applications, and is progressing toward domestic chip design and eventual fabrication, aiming for full-stack independence within the next 5-10 years [383-387].
The conversation then turned to Indo-French cooperation. Anne highlighted existing joint research on resilient, multilingual AI and suggested that France’s policy mechanisms, such as cultural quotas that fund local creators, could be adapted to AI to ensure cultural representation and funding [263-270][271-276]. Amitabh added that the partnership between India and France, reinforced by recent high-level engagements, offers complementary strengths for building alternative, sovereign AI solutions and shaping global norms [391-398][401-403].
India AI Innovation Challenge
Abhishek announced the India AI Innovation Challenge, an open-source competition that invites researchers, developers and entrepreneurs to hack the Bhashini-Current AI prototype. Submissions open on 25 February, with prize funding from both organisations; Bhashini will provide quantisation expertise and technical support, while Current AI will continue to release the hardware and software as public-good resources [409-424][419-424]. Ayah mentioned a possible prize pool of about $110,000, though the exact amount was not confirmed [420-422].
Sushant concluded by reaffirming that the combination of offline, multilingual, open-source hardware and collaborative governance of data and cultural assets can empower diverse communities, drive inclusive AI innovation and, ultimately, make AI work for everyone [233-236]. The session thus combined a concrete technical achievement, a shared vision for culturally aware AI, and a direct call-to-action through the innovation challenge, aligning with the overarching theme of personal, local, multilingual AI for everyone.
And therefore, how do we develop and support a paradigm that can make AI work for everyone? And that’s why we are here today. The session today is very aptly called: The case for personal, local and multilingual AI. Through a collaboration between Bhashini and Current AI, orchestrated by Kalpa Impact, we are proud to present to you today a seminal open source AI hardware device, one that is multilingual, handheld, privacy preserving and works in zero connectivity settings. So what we are going to do today is we are going to talk about the concept of AI. What we are going to show you after this will be a video that presents the imagination of what such a device could lead to.
in terms of making AI work for everyone. And once we have done that, there’s a special treat for all of you. The makers of the device and the collaborators at Bhashini are here in the room and they will demonstrate the product to you. So why don’t I begin with playing this video, which takes some creative liberties and captures our imagination of what this product would look like. Audio, please. Thank you. India’s AI journey is no longer about pilots or promises. It’s about population reach, clear use cases, last mile delivery. This is real world impact. A connected vision for AI, not one that’s governed by any one country or one company.
I think all countries have a huge amount to bring to the table and a big belief in the power of collaboration. I was ready, the cup is open, now we need you. Come innovate AI for your own language, for your own community. We want to work with as diverse a group as possible. We can’t wait to see what you do. Yes, we’re back on. And for the next segment, I would like to invite Ayah Bdeir, the CEO of Current AI, to take us through the product demonstration. Ayah is an engineer and an entrepreneur with 20 years of experience building open source technology infrastructure that works at global scale. Ayah, over to you.
I have a quick interruption. I have to ask everybody to come here to take a picture so that the picture can be ready by the end of the panel. You have 90 seconds free to speak amongst yourselves. Thank you. All right. All right. Thank you so much for coming, everyone. I’d like to introduce Andrew Tergis, who was the lead engineer on this project from the Current AI team, who’s going to take us through a demo. Oh, there you are. And also Shalindra Pal Singh, who is a general manager at Bhashini, who was Andrew’s collaborator and worked very closely to integrate Bhashini models into the device. And I just want to say a couple of things. This project was undertaken in a six-week period, I think maybe closer to five weeks, actually.
So I just joined Current AI in January of this year. When I came in, the partnership with Bhashini had already been in discussion, and I was very inspired by Bhashini’s work on linguistic diversity and the 250 models. And we thought this was an opportunity for us to go all the way, say, to the user and create something where really people can create AI that works for themselves, for their communities, and for their languages. So this prototype is the beginning of a journey and also a platform to imagine infinite things that are possible. And so you’ll see how it works. But as it’s working, I also would like you to imagine what you could do with it and where you could take it.
And from my perspective, I’ll just say for Current AI, this is an example of how we’d like to work with partners where we learn more about their interests and their focus areas and their priorities, and we zero in on a collaboration that we can develop together. We build it together, and then we release it as a public good. So in this case, it’s a piece of hardware and a development platform. In another case, it could be something else. But we’re really proud that this collaboration with Bhashini is our first collaborative build, and you get to see it kind of firsthand as you’re sitting here. So, Andrew, Shalindra, please join me on stage, and I’ll let you take us away for the demo.
All right. Perfect. Hello. I’m so pleased to be able to show you this prototype that we’ve created. Yes. Oh, thank you. In front of the table. Wonderful. So this is our prototype open AI inference device. Unlike some other products you might have seen at this conference, which might be designed for one very specific user or one very specific use case, this device is designed to be used by any number of users for any number of use cases. The hope is that anyone could feel empowered to connect up to this device, write their own application, pull any number of models onto the device and run inference locally in their hand. We have one flagship application that we’ve developed in concert with Bosch.
That demonstrates the models that they’ve been developing over so much time. This sample application, which we call Hear the World, is an application where a vision-impaired user can press a button, ask a question in their native language about their surroundings, and have the device read back the response, again in their native language, leveraging Bhashini’s 22-plus languages. In particular, we’re leveraging an ASR, an automatic speech recognition module, to convert the audio into text in their native language. We’ll be leveraging an MMT, a neural machine translation module, to convert that text into English. We’re running it through a large language model with the image data to answer the question, and then we’ll be converting it back into their native language using, again, the MMT model, and finally a TTS module to convert it back into audio.
So this device is able to run all of those modules in concert. So without further ado, let’s try and give it a test query. Shalindra, do you think you can help me out here? I guess you’ll take the photo, and then I’ll spin it around quickly so the audience can see what’s happening. We’ll ask in Hindi. Let me just triple-check. Yep, you’re all good. All right.
What it has done is it has taken the image, and then the automatic speech recognition model kicks in, and then neural machine translation happens, and then the response comes from the LLM that we have embedded, and the translation happens, and the text-to-speech is spoken out. We have quantized the models in such a way that they fit on the device. Usually when we do quantization there is always a trade-off, a hit on the accuracy, but we have reached a point where there is no hit on the accuracy front.
This is thanks to a truly huge effort from your team, and we wouldn’t have been able to fit such a high-fidelity LLM on this if you hadn’t done that great optimization work. So let’s see. Let’s ask another question. We have a couple of candy bars on this desk here, which we can show you. Let’s try it. I’m going to ask this one in English. What is on this table?
The table has candy wrappers of Twix, Milky Way, and KitKat.
All right. All right. It actually got the brands. And we have one more question of grave importance. But I’ll ask it in Hindi. That’s right. I got it. This is the best candy bar in the world. There we go. Would anyone like a candy bar? Anyone? Anyone? There you go. So just very briefly while we’re handing this out: this is currently based on the NVIDIA Jetson processing platform, but we’ve built it to support other platforms as well, because the processing that we’re doing does not depend on that. That just happens to be the platform we’ve chosen at the moment. And, yeah, we’re working on the ability to deploy any model that you could dream of onto this device.
Thank you.
Thank you very much. How did everyone feel about that demonstration and the things that can be done? Thank you. Thank you. And kudos to the Bhashini team, which worked tirelessly, and, of course, Andrew and the Current AI team, which worked tirelessly to make sure the hardware, software, all of that was integrated. We had to get a device through customs as well. So that took some time, but eventually it’s here and it’s working, which is amazing. And the best part is that the device is offline. All those queries, all the AI processing was happening on the device. And there are four or five models operational on that particular device, no mean feat. I salute the engineers who have worked on this, and there’s more to come.
And we know we have to get in a lot in a short period of time. So I will invite Ayah Bdeir, the CEO of Current AI, and Shri Amitabh Nag ji, the CEO of Bhashini, to join me for a fireside chat. And we’ll try and understand what it is about personal, local, multilingual AI that they are passionate about. So this is also about their motivations. So why don’t we start with you, Amitabhji. We all know a lot about Bhashini, we have heard about it, and it’s a superstar at this point in time in terms of what you have achieved. Tell us about the origins, tell us about how this all started and why this is personal to you.
Thank you. See, we are all born with our mother tongue, right? We learn our mother tongues for a good 4-5 years before we land up in a school, and when we land up in a school it’s a three-language formula. So I am a Bengali, and in Bengali, you know, everything is eaten, so “chol khawe” is the right word. So when you go to the school and you have to do Hindi and English, you know how it could be: for the first 6 months people will be laughing at you when you are translating and speaking, because that’s the first way of speaking. You are not a native language speaker, so you will be translating and speaking.
That’s the linguistic nuance that we went after. So, you know, over a period of time, of course, we grew up. We were told that you have to learn English to succeed in life. So that’s another given which was there. And obviously, this opportunity came up. There was already a concept which was there. And obviously, we started with a one-room office and a first employee.
When was this? Which year?
This was in 2023.
Okay. That’s recent. That’s recent.
And then obviously we started growing as a team, looking at various use cases. Initially the first question which used to come up was about accuracy. Our models were built in difficult conditions, because we didn’t have digital data to build the AI models, so we collected the data through brute force. We went across to multiple places with translators who actually created the corpus, a digital corpus. We still had deficient data, but we went ahead to build the models and deploy them. And during deployment we had challenges which obviously came up from all aspects.
And today, when we have actually deployed the use cases, learned from them, improved them, we are now in a situation where we are running about 15 million inferences a day on a 200-GPU system, with dashboards which actually give you every inference’s timeliness, how much time it takes, et cetera. So we are able to monitor in real time what is happening in our system, who our customers are, and how they are using it.
Fantastic. It’s wonderful to hear about your personal motivations. And I’ll move to you. How many languages do you speak?
My native tongue is Arabic, and then I speak French and English, and I’m learning Spanish.
So it’s very apt to move to this personal and multilingual theme. I have two questions for you. One, tell us a little bit about Current AI and why this interest in open hardware and the partnership with Bhashini; how does this tie back to Current AI’s strategy? And second, why is this personal to you?
So, Current AI was actually born out of the AI Action Summit last year in Paris. It’s a public-private partnership with a mission to create AI for the public interest. And so it’s a partnership between philanthropy, government and the private sector to really say, we’re going to tackle public interest AI at scale. And the reason we’re going to do that is because the dominant companies that are governing our lives in AI operate at a scale, a financial scale, and at an ambition level, that if we don’t match it, we don’t really have a chance to be a real alternative. And so Current AI was born out of that desire. The goal is to rally a global community, collaboratively and collectively, to build a public stack for open AI that’s completely vertically integrated.
And so the way we work is we work with partners, because the core premise is collaboration. We work with partners where we’ll identify an area of common interest, a priority, and a gap in technology, and then we’ll zero in on that gap, work on it together, and then develop a piece of tech and release it as a public good. And so we encourage this collaboration, this creation of technology that is put back into the public good, as well as having grant-making under our fund pillar in order to encourage people already doing this work. And this topic is important to me, has been important to me for many years. I’m from Lebanon, from Beirut, and like I said, my native tongue is Arabic.
For the past many years, you know, our use of WhatsApp and mobile and social and everything, a lot of us in the Arab world lost use of Arabic. You know, my family and I, my sisters, my mom and my sisters and I speak in English to each other online all day. We speak on WhatsApp in English. The voice recognition is never good enough in Arabic. You spend more time correcting it than you do doing anything else. And so now it’s improved a little bit. But really, you know, technology has had an effect on the way we communicate with each other. And so for many years, it’s been a real concern for me that, you know, technology, if it’s not made by us, it’s not for us.
And so when I joined Current AI early this year, multilingual diversity was already a topic. And I was very happy about that. And sort of really… I really wanted to expand it into this idea of not just… language diversity, but cultural diversity and cultural preservation as a whole. And so this sort of idea came about and you can tell more about it.
Fantastic. What a story of genesis. And of course, Silicon Valley making AI devices for local use cases is not going to be as effective as putting power in the hands of people. So on inclusivity, Amitabhji, one of the visions of Bhashini is to expand access. When you think of this partnership with Current AI, what is the future you envision in terms of expanding access and creating inclusion with Bhashini as the linchpin?
So a few things. When you look at the size of the device, we have almost reached a form factor which is quite significant. It's small, and it can be carried to the last mile. And since it works offline, you are in a position to use it anywhere. So that's the first part of inclusivity. We obviously have plans to look at smaller form factors as we go forward. The second thing is language coverage. We currently cover 22 languages; in our system we already have 16 languages, with 14 more languages on text, for a total of 36 languages. And we would like to increase that breadth.
And recently we have digitized one of the tribal languages, Bheeli, which doesn't have a script, so that also gets added. So that is the breadth of languages, which will be continuously added to. So first we are talking about form factor; second, we are talking about offline; third, we are talking about creating a breadth of languages so that no language is left behind, and hence no person is left behind, including speakers of tribal languages. The fourth factor is how we enrich the models, which is a continuous activity that Bhashini takes on. There are multiple areas where the models still have to be enriched.
For instance, we were talking to the Survey of India, and they have about 16 lakh place names which are still to be digitized and put into the system. Those are glossaries which we are building. There are contextualization efforts happening as well. So over a period of time, language enrichment in terms of depth is another thing we are looking at. So we are looking at breadth, depth, offline capability and form factor as the four things which will move forward.
Fantastic. I can certainly see open hardware playing a big role in that as well. I have a question for you on how you look at the future. What gives you the most hope, and the most concern, about the future of language? You started talking about how you feel that Arabic and its nuances are getting lost. So what gives you the most hope or most concern about the future of language in an AI-driven world? Could you talk about that?
So I'll start with the concern. I'm concerned about this new frontier of embodied AI. Over the past year or so, every big tech company has released their version of an embodied AI device that wants to enter your home, that wants to be close to your body, that wants to enter your personal space. Whether it's Meta's glasses, or bots and robots, or Amazon Alexa. And we're not in control of these devices; we don't know how they're developed, and we don't know how they're trained. Last week or the week before, Meta announced that the glasses are going to start doing facial recognition on every person you encounter in the street.
So now, unknowingly, you're walking down the street, and if somebody is wearing Meta glasses, you are being recorded and facially recognized. So we have these devices; we don't know how they work; they're continuously recording our data and sending it out to the cloud. We also don't know how they're trained, and oftentimes they're trained on Western languages. And so hardware is where the lock-in first starts. It's how the iPhone locked up a lot of technology innovation: these companies will give us APIs into their devices, startups will start forming and building on top of those devices, the startups build a dependency on the device, and you start to build a whole stack on
a core piece of hardware that you do not control. So it's really a core building block that we have to crack before we let them own the entire stack or the supply chain. I spent 15 years before Current AI in open-source hardware. I've seen how powerful it is when you develop on an open platform and people do what they want with it. It's the same power that you get from something like Linux. So that's a big area of concern. The area of hope for me is that there are many trajectories for us to improve from here. On one side, you can improve the device itself.
You lower its cost. You improve its battery life. You shrink its size. You make it more beautiful. So that's one axis. Then there's another axis you can develop: you can connect multiple of these devices together in a mesh network, and now you have distributed inference that you can run something larger on. You can have a larger version of this device that's stationary; it can be like a micro data center. You can put a solar panel on it, and suddenly it doesn't need a battery. So you can infinitely innovate on the possibilities of this core building block. And then the third track is what you do with it.
You make a device for a farmer to identify how to deal with their crops. You make a device for a parent who wants to give their kid a toy but doesn't want the toy communicating their private data back to the cloud. You create some sort of tourism device that you can put around your neck and that helps you move around. Various sorts of things; the opportunities are infinite.
Fantastic. I wish we had more time to just continue; we're just scratching the surface, but we're at time. Thank you, Amitabhji, for the great work that you and your team are doing, and I wish you all the best and all the luck in making that vision a reality. Thank you very much. And we move into our next segment, which is another fireside chat. For that I would now hand the floor to a long-time friend and colleague, Martin Tisné. Martin leads the AI Collaborative, an organization working on building AI grounded in democratic values and principles, and he's also the chair of Current AI. Martin, over to you.
Thanks very much. My first task is to welcome Abhishek Singh, who everyone knows, who is the master and orchestrator of this entire summit. Congratulations, Abhishek, and I'm amazed you're still standing. Welcome also to the orchestrator of the Paris summit, special envoy to the President. Thank you very much. As Sushant was saying, I'm extraordinarily excited by Aya's leadership when it comes to Current AI, and the work in turning this work around linguistic diversity towards the question of cultural preservation. It seems to me that ensuring that AI isn't squashing all of these incredible cultures that make up the beauty of the world into a monoculture, or into a small number of monocultures, is one of the most important questions that we have today. So my first question to both of you, maybe starting with you, Abhishek, and then to Anne: what is your vision?
What is the world that you would like us to live in when it comes to this intersection of AI and culture? If we get it right, whether it's five or ten years from now, what does it look like?
languages. He knows only his local language. He does not even know how to key in or how to navigate a captcha, or he gets lost with the hashtags and Amazon. For such people, if they are able to talk to the device, put in their query without internet or bandwidth or connectivity, and get a reply back, that will be empowering. And that, I think, is the ultimate objective of this summit also: democratizing the use of AI and ultimately making AI work for all. Thanks.
Thank you very much, Abhishek. Anne, what is your vision?
So, of course, I share a lot of what Abhishek said. I also think about using AI through our phones. One way to say this is that when I get onto my phone, I mean, I love San Francisco, I love Shanghai, but I'd like to have a wider choice. I don't necessarily want to be transported to Silicon Valley or to Shanghai when I get into AI. And that's a little bit of a joke, but if all the cultural representation, all the legal background, all the customs that are taken as just the de facto way you interact with people, if that's the only choice, well, that's just such a reduction of cultural diversity.
And I think that's just not okay. It's not just about being able to have access to a French AI or an Indian AI. It's even more than that. If I'm interested in music and I come from a particular area in France, I'd like to have that community and its culture represented there. So I think that's part of my vision.
Thank you. And if I can stay with you just a second, Anne: from a French perspective, from France's point of view, how do you see culture and AI playing together? What does it look like? When I was a kid growing up in France, there was a law that mandated a certain percentage of music on the radio to be sung in French, and a law that mandated a certain amount of movie productions to be in French. I actually think it was a good idea in retrospect, and you'll tell us what you think. That ended up, it seems to me, allowing a certain amount of cultural patrimoine, as we say, to exist.
So from a policy perspective in France, when it comes to artificial intelligence and culture, do you think that at some point there needs to be a set norm, like we had in movies and radio? What do you think?
That's a good question. I don't know whether we need a set norm, but yes, there are mechanisms to encourage creation in France and in Europe, and that's quite important. With every movie that you go and see, which can be from any country, there's a certain tax, and a certain amount of money goes to a fund that then helps French creators prepare whatever they want as their next film. And I think that's a good thing. That mechanism doesn't make it hegemonic; of course, we love culture from all over the world, but it helps ensure that there's an element of French cultural creation.
And that's what we definitely want to continue to have. We want people to have the ability to see that in France, but also all over the world, just as we love to see Indian movies or listen to Indian music or some symphony. So that diversity needs to be maintained and ensured, including through mechanisms to fund it. Yes.
Thank you very much, Anne. Abhishek, a similar question to you.
I think if AI has to cover all aspects, then it has to be rooted in data sets that are diverse, and data sets in any cultural context will include not only languages but also the culture, the heritage, the music, the movies, the songs and lots of folklore. In fact, if you look across India, if you go to the rural areas, there are lots of traditions which are not even documented well, so those things are not available in a digital format; they are known to people. In fact, recently I was watching a documentary on Netflix called Humans in the Loop.
It's set in an Indian state, Jharkhand, with a large tribal population, and there are these tribal women who are doing data annotation for an American firm. It shows that they are looking at leaves and pests and have to mark whether something is a pest or not. So this young girl sees an image of a worm and marks it as not a pest. Her manager comes down heavily on her and says, this is obviously a pest, how can you say it's not a pest? She says, this tree grows in my local forest around where I live, and I know that this worm eats only leaves which are dying.
In a way, it helps the plants; it's not a pest. So having this traditional knowledge built into the corpus of data sets on which we train AI models will be very, very vital if we have to ensure that AI doesn't hallucinate, and if AI is to come near to what a human is. So it becomes very important to capture this cultural context from all across the world, from all communities, all cultures, all traditions; only then will we be able to build something which is truly human-like. A purely technological pursuit of AGI will not solve the problems that we are living with.
That's a great example, thank you. Maybe staying with you, Abhishek, for a second, and I'll come back to you on the question of reciprocity. You talked about the data sets. Communities and cultures, in all their diversity, are sharing their data; we want them to be sharing their data with different AI models. What does it look like from the community perspective? Should they be involved in it? Should they have rights over the data? How do you think about it?
It is a very interesting question, because when it is about sharing of data across companies and across industry, we have to have frameworks which allow data to be used for public purposes, in a way that does not violate the privacy or the personal identity of the person who owns the data, the person to whom the data belongs, the data principal per se. So when it is data sharing, the community will need to be involved. If you don't do that, in the interest of business and commercial requirements, the possibility of misusing the data goes up. So it's very important to have standards, not only technical standards, but community standards which are rooted in the culture and the belief systems of the place from where the data is coming, in order to ensure that the models and the applications respect that.
Thank you. If I can go a little further on that question: there's the question of the rights of the individual and the rights of communities over the data. Do you think, in the way that you're working, there is also reciprocity, in the sense that if data about a community is used for a particular purpose, the community should benefit from it, whether through a translation or another device? How do you think about that?
You need to think about how different use cases may have different applications. For example, if it's data about, say, agriculture, and I have aggregate data about a particular area that is used to advise farmers on what they should sow for maximum benefit and at what time they should sow, then that data should be shareable, and that's to the benefit of everyone. But if it is, for example, health data, then the individual might not want to share that data with the larger ecosystem. So I think it will be context-specific, and we cannot have general rules about sharing of data and reciprocity principles across different sectors.
Thank you very much. Anne, I have a similar question for you on this question of reciprocity. What's your take?
I think that's a very profound question. Part of the reason why you want to share cultural data is so that cultures are preserved and you don't end up with one or two or three cultures in the world, but something that is more diverse. So it is in the interest of a cultural group, of a civilization, that in the world of AI this culture is represented. And from that perspective, you have a very natural reciprocity loop. But at the same time, creators are saying, I don't want my data to be used if I don't have a way of being compensated or recognized, or a way to object. And so you have this tension with artists, for example, who say, well, I want my rights to be maintained and I want some type of compensation
if this is being used to feed AI models and for people to earn money out of it. But on a collective basis, you do want that culture to be represented. So I'm not sure I have a solution, but I very clearly see the tension. One way we can navigate that is to have a right of opposition for specific artists, so that they can say, no, my creations are not going to be used. And at the same time, you can certainly have historical information, and things that are not so subject to remuneration for living artists, be part of the general cultural data that you use to train AI. But beyond these two obvious things, I'm not really sure.
So we need to continue to work on this.
Thank you. And again, just to go a bit deeper on the question: it really is a fascinating question, because from the perspective of the communities whose data it is, it's data about them. As you say, you want people to know about your culture, you want the culture to be preserved, and at the same time you want a certain degree of agency over how the data is used. In an earlier panel we were talking about indigenous data sovereignty and the Maori community in New Zealand, and the degree to which, as I understand it, in Maori culture any data, any information that pertains to Maori culture is effectively part of Maori culture. So there's a real question of agency.
My question is this: in the run-up to the Paris summit, when we were working together, we talked quite a bit about the relation between open-source AI on one hand, and the governance of data on the other, with the data then controlled in different ways. So how do you think about this balance? It strikes me that getting the balance right between the open-source components on one hand, and a more controlled approach to data governance on the other, is the special sauce. What do you think? And I'll come to the same question to you in a second.
Yeah, I completely agree. And maybe, as Abhishek was saying, the example of health data is a good one here, because for cultural data you want the general benefit and you want to preserve artists' rights; I think those are the two dynamics. For health data, as an individual, as a patient, if you're asked, do you want to protect your personal data, the answer is yes. If you're asked, are you willing to share your data with other people who have, or are at risk of, a similar illness so that it can help them, the answer is also yes. So how do you balance the two? You need to find ways to share data on a platform, or in a way, that you have trust in.
So it needs to be privacy-preserving. It needs to be held by an actor you trust: even if you don't go and read all the terms and conditions, you need to know that it's an institution or a third party that you can trust. And then you want to be able to rely on that third party to make the right decisions, like, yes, sharing the data to enable research and find new cures, but not sharing it with insurance companies so that you can be charged a different rate depending on your personal situation. And then when you get into sovereignty, maybe you're happy for this to be shared with innovative startups in your country or your region that will develop cures and new foods, but maybe not with some other actors.
So you get to a number of different levels and questions. For that, having trusted third parties that can act for you and make the right decisions is, I think, very important.
Thank you. Let me take the same question to you. We've talked a lot about open-source AI over the course of the week. How do you see the balance between open source on the one hand, and the question of the cultural data that we've been talking about on the other?
Again, ultimately I'll go back to the end objectives. What is the purpose for which we are sharing the data? Is it serving public interest or private interest? Is there a benefit for the user to whom the data belongs? Take health data, for example. We are over COVID, but we keep hearing about outbreaks of flu and other ailments. If aggregate data about the incidence of such diseases, and linkages with other factors, environmental factors, weather factors, rain factors, is shared so that people can think of devising data- and AI-enabled solutions, integrating various data sets and trying to see why in a particular geography, in a particular locality, something is happening,
that is in the public interest. So we will have to define, on a case-by-case basis, whenever data is being shared, whether open source or in a proprietary solution, what the end objective is, what problem we are going to solve, and whether it serves the larger interest of the community, the larger public interest, or whether it is being done to benefit a few corporations. Take the example she gave about insurance companies: if data about health consumption leads to an increase in my insurance premiums, that is not fair, because they are linking that data back to the individuals to whom it belongs. So we will need to think of privacy-preservation techniques and anonymization techniques, so that in no way is the data principal, to whom the data belongs, harmed in an adverse manner.
So we will have to do this in a nuanced, case-by-case manner; there is no one-size-fits-all solution. If we do that, we will avoid the risks dominating the narrative, and we will move towards the positives.
Thank you very much. There's a question I can't resist asking you, which is: what's your definition of sovereignty? You mentioned the term, and we've talked a lot about it this week. In the context of this conversation it's really interesting, because there's a question of sovereignty from a nation's perspective; there's a question of sovereignty, as in the Maori example of indigenous data sovereignty, from a community's perspective; and since both of you have been talking about health, there's a question, at the individual level, of the sovereignty I have over data about me. So with your experience, coming towards the end of the summit, and the experience that India has, when you think about sovereignty and AI, what do you think of?
Traditionally, sovereignty is a political science concept, wherein nations which are sovereign have complete control over what they do and how they do it, with full control over their decisions. When you apply it to technology, and to AI specifically, the same concepts apply with regard to what I want to do, with whom I want to do it, and how I want to do it; nobody else should make decisions on my behalf. So ideally, a complete sovereign AI stack would mean having complete control over all five layers of AI: the energy layer, the data centers and infrastructure, chips, models, and applications and use cases.
We should have complete control over it. But the technology is evolving right now, and I don't think any country has complete control over the entire AI stack. In the context of India, we are there on energy sufficiency; we have the data centers; we have our models and our applications; but we don't have the complete stack. We have the capability for distributed compute; in three to five years we will design our own chip, and in five to ten years we will be able to have a fab as well. In the short term, if I can decide which chip I want to use, how I want to use it, and how I procure it, rather than being subject to conditionalities or being forced into something, that will be sovereignty.
So the same concept of sovereignty that we apply in political science, wherein complete control of decisions lies with the sovereign government, should be the way we look at sovereignty in AI as well.
Thank you very much. As we come to the end of our time, feel free to weave in the question of sovereignty. In the wake of President Macron's state visit and the bilateral relationship between France and India, what do you both see, starting with Anne and finishing with you, Abhishek, as opportunities for France and India to jointly work on global norms and global approaches for a more contextual, culturally inclusive approach to artificial intelligence?
Well, I'll try to be short, but this is the year of joint innovation between India and France. There are many areas where we're collaborating and will continue to collaborate. Clearly, Current AI and this work on multilingual AI is one. Working on AI that is resilient and sustainable by design, as we were discussing earlier with Abhishek, is another priority, along with joint research. And I can't resist weaving in the work on sovereignty. No one, actually, not even the U.S., has everything; they don't have all the chips. Nobody can do everything alone. I believe sovereignty means having a choice and building alternative solutions, and I really think we can and will jointly build alternative solutions between France and India.
I echo her, and in fact the partnership between India and France has been there for quite some time. Last year we co-chaired the AI Action Summit with France, and the partnership has continued this year. As you know, we have launched a year of innovation, and many more activities were announced by President Macron and our Prime Minister last week. We are looking forward to joining you at the World Tech in the next few months. There are many more activities: partnership at the university level, the research level, the business level, the government level. So I strongly believe that, working jointly with a trusted partner like France, India and France have complementary strengths, and we can present an approach to building solutions that can become an example for the whole world.
Thank you very much. It was an honour to launch in Paris, and a pleasure to launch this partnership in India. Thank you. Thank you.
Hello. Thanks, Martin. Abhishek Singh sir, I request you to stay on stage. And Aya, we'd love to have you on to launch the Global Innovation Challenge in the spirit of what Anne said. And Amitabh Nag sir as well. Please.
Am I going first? Okay, great. So, a great session, great thoughts, a great demo. All of us have seen the demo of the reference device, which has been built in partnership between Bhashini and Current AI. In fact, I must mention that it was just a few weeks back that we had this discussion, because I had been discussing with Martin, after the discussions and announcements at Public Interest AI, what Current AI would do with the 400 million dollars that they have raised. And I was saying, let's do something which can really make an impact; if we can do something at the Impact Summit, it will be worthwhile. Kudos to the teams: they have built this collaboratively, designed by engineers from both Bhashini and Current AI, in such a way that it's a platform, a prototype on which we can innovate.
It's completely open source. It's hackable, it's privacy-preserving, it's multilingual. And with on-device AI, this prototype is capable of functioning in remote locations, not only in India but anywhere else in the world where connectivity is a challenge, or where, because of an earthquake or some other natural calamity, we can't have connectivity. That can be really transformational for people to access services. In partnership with Current AI and Bhashini, it is my honor and privilege to announce the India AI Innovation Challenge, which will give an opportunity to researchers, engineers, developers and entrepreneurs to build on this prototype. The prototype will be available in an open-source manner for everyone to hack: you can make it smaller, you can make it sleeker, you can solve individual use cases for different sectors. It's based on an open-source software and hardware design, and the kinds of use cases one can think of will be limitless.
So there will not be one but multiple solutions that can be built on it. We are opening it today; the date here says that submissions will open on 25th February, when we will launch the challenge on our website and applications can be submitted. There is some time to build the actual device, and those who win will get a very handsome reward, funded both by Bhashini and Current AI, and together we will try to ensure that we are able to build a product that the whole world can use.
So we will continue to support this effort with our quantization mechanisms, and technical support will be available with respect to model enrichment, etc. This will be a joint effort: people are expected to put in the effort and come back to us on the challenges, and we will work on that together.
I'll just say, for Amitabh, because maybe he was trying to say it: Bhashini is offering, I think, a $110,000 prize to the winners. Maybe if people make a demand, the number will increase; on your way out, please make a request, everyone. There's also a big part of this for participants, to make sure that they have support while they're developing their hardware and software, and to showcase the work online to inspire many other people. Really, the point of it is to expand imagination and start this conversation about making your own AI, about AI being personal and multilingual and solving communities' and individuals' own problems. Today it's a piece of hardware; tomorrow it could be something in software; the day after, it could be in data. So really, this is the beginning of the journey. Thank you so much for coming, everyone. Thank you for being such good partners. Thank you, Amitabh and the Bhashini team, thank you to the Current AI team, thank you, Martin, for bringing us together, and have a great rest of the week, and hopefully some rest for you. Bye.
Event“The prototype was built in a six‑week (actually five‑week) sprint, made possible by pre‑existing partnership discussions between Current AI and Bhashani.”
The knowledge base states the project was undertaken in a six-week period, possibly closer to five weeks, and notes that the partnership with Bhashani had already been in discussion when the Current AI team joined [S32].
“Bhashani’s work on linguistic diversity and its large portfolio of models (250) inspired the collaboration.”
Bhashani’s focus on linguistic diversity is highlighted in the knowledge base, which describes admiration for its work on language diversity [S1] and mentions the partnership discussions centered on this theme [S32]; however, the exact number of models (250) is not specified in the sources.
The discussion shows strong convergence among speakers on four pillars: (1) offline, on‑device AI for last‑mile reach; (2) multilingual, locally relevant AI; (3) open‑source collaborative development released as a public good; (4) robust community‑centric data governance and sovereignty. Additional shared visions include future hardware miniaturisation and mesh networking.
High consensus – the majority of participants align on the same strategic directions, indicating a solid foundation for coordinated policy and technical actions to advance inclusive, multilingual, and privacy‑preserving AI.
The discussion revealed several substantive disagreements: (1) the best mechanism for governing cultural data within open‑source AI (trusted third parties vs community standards vs opt‑out rights); (2) how reciprocity and benefit‑sharing should be structured; (3) the scope of AI sovereignty, ranging from full stack national control to targeted offline language solutions; (4) contrasting concerns about closed embodied AI versus promotion of offline, open hardware; and (5) unclear details on prize funding for the Innovation Challenge. While participants shared a common vision of inclusive, multilingual, and privacy‑preserving AI, they diverged on governance models, implementation pathways, and concrete funding commitments.
Moderate to high – the core vision is shared, but the lack of consensus on data governance, sovereignty, and funding details could impede coordinated policy or collaborative actions unless reconciled. These disagreements highlight the need for clearer frameworks that balance open‑source innovation with cultural data rights and national AI autonomy.
The discussion was anchored by a series of pivotal remarks that moved it from a product showcase to a nuanced debate about the future of AI in society. Sushant’s opening set a collaborative agenda, which was fleshed out by Aya’s articulation of Current AI’s public‑good mission and her warnings about embodied AI’s privacy and bias risks. Amitabh’s concrete example of digitising a tribal language and Anne’s policy analogy introduced the cultural‑preservation and regulatory dimensions. Aya’s hopeful vision of open hardware and the launch of the India AI Innovation Challenge turned concerns into actionable pathways. Finally, Martin’s visionary question and Abhishek’s definition of AI sovereignty broadened the dialogue to include long‑term societal and geopolitical implications. Together, these comments redirected the conversation from a technical demo to a strategic, inclusive, and ethically grounded roadmap for personal, local, multilingual AI.
Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.