Inclusive AI: Why Linguistic Diversity Matters

20 Feb 2026 15:00h - 16:00h


Session at a glance
Summary, keypoints, and speakers overview

Summary

The session, opened by Sushant Kumar, framed the discussion around “personal, local and multilingual AI” and announced a joint effort between Bhashini and Current AI to showcase an open-source, multilingual, handheld device that preserves privacy and works without connectivity [1-4]. After a short video, Ayah Bdeir introduced the demo team, noting that the prototype was built in about five weeks through a close partnership between Current AI engineers and Bhashini’s model team [34-38]. Andrew Tergis explained that the device can run a full pipeline (automatic speech recognition, neural machine translation, a large language model, and text-to-speech) entirely on-device, demonstrated with a Hindi query answered in the user’s language [53-62]. Shalindra Pal Singh added that the models were heavily quantized to fit on the hardware without sacrificing accuracy, and that four to five models currently run offline on a Jetson platform [70-72][88-90][98-101].


Amitabh Nag described Bhashini’s origin in 2023, motivated by the difficulty of using non-native languages in school and the need to preserve linguistic nuance, leading to the creation of a corpus for many Indian languages [108-110][118-119]. He noted that the team now serves about 15 million inferences daily on a 200-GPU system and is expanding language coverage to 36 languages, including recently digitised tribal languages such as Bheeli [121-128][170-176]. Looking ahead, Nag highlighted four pillars for inclusive AI: a small offline form factor, broader language breadth, deeper model enrichment (e.g., adding place-name glossaries), and continuous contextualisation of data [163-176][180-186].


Ayah Bdeir expressed concern that embodied AI devices from large tech firms are opaque, often trained on Western languages, and could lock users into proprietary stacks, whereas open-source hardware can democratise innovation similar to Linux [194-210]. She outlined hopeful trajectories, including cheaper, smaller devices, mesh networking for distributed inference, and specialised applications such as agricultural assistants or privacy-preserving toys [215-224][229-232]. The panel also debated cultural data sovereignty, with participants stressing the need for community involvement, consent, and reciprocity when data is used for AI, especially for health or tribal knowledge [311-318][326-332].


Anne Bouverot suggested that policy mechanisms, like France’s quotas for French content in media, could fund and protect cultural creation while supporting AI development [263-270]. The discussion concluded with the announcement of the India AI Innovation Challenge, an open-source competition to build on the prototype, offering prize funding from both Bhashini and Current AI and encouraging global collaboration, including with France [409-422][391-398]. Sushant’s closing remarks underscored that open, offline, multilingual AI hardware, coupled with collaborative governance of data and cultural assets, can empower diverse communities and drive inclusive AI innovation [233-236].


Keypoints


Major discussion points


Launch of a personal, local, multilingual AI hardware prototype – The session introduced an open-source, handheld device that runs AI offline, preserves privacy and supports many languages, followed by a live demo showing speech-to-text, translation, LLM inference and text-to-speech all on-device [4-9][52-58][88-91].


Collaboration model between Bhashini and Current AI – The project was built in a five-to-six-week partnership that emphasizes co-creation, open-source release as a public good, and a repeatable “identify-gap → co-develop → open-source” workflow [34-42][135-142].


Multilingual and cultural inclusion as a core goal – Speakers highlighted the need to serve mother tongues and tribal languages (e.g., Bheeli), to avoid linguistic bias, and to preserve cultural nuance; the device currently covers 22 spoken languages and aims to expand to 36 [108-112][163-176]. Concerns were raised about “embodied AI” that is centrally controlled and trained on Western languages, while hope was expressed that open hardware can democratise access and enable diverse use-cases [194-206][214-222].


Future visions and application scenarios – The prototype can power vision-impaired assistants, agricultural tools, tourism guides, and can be networked in a mesh or scaled to a micro-data-center; the hardware is platform-agnostic (Jetson-based now but portable to other chips) and is intended to spark endless community-driven innovations [58-62][215-232][89-90].


Data sovereignty, reciprocity and governance – The panel debated who owns the data used to train models, the need for community-level standards, artist compensation, and the broader concept of AI sovereignty at individual, community and national levels, arguing for privacy-preserving, trusted third-party mechanisms and a “complete sovereign AI stack” [299-307][314-324][368-387].


Overall purpose / goal


The discussion aimed to showcase a tangible open-source AI device that makes advanced, multilingual AI accessible offline, to illustrate how cross-sector collaboration (Bhashini, Current AI, Kalpa Impact) can accelerate such public-good technology, and to launch the India AI Innovation Challenge that invites developers to build on the prototype for real-world, culturally-relevant solutions.


Overall tone


The conversation began with enthusiastic optimism about “making AI work for everyone” and celebrating the prototype’s capabilities. It shifted to reflective, personal anecdotes about language loss, then to cautious concern over centralized, embodied AI and data ownership. Throughout, the tone remained collaborative and hopeful, ending on a forward-looking, rally-the-community call-to-action as the speakers announced the innovation challenge and future partnerships.


Speakers

Sushant Kumar


– Areas of expertise: AI moderation, multilingual AI, public-interest AI


– Role: Session moderator / host


– Title: –


– Sources: [S1]


Announcer


– Areas of expertise: –


– Role: Event announcer / moderator


– Title: –


– Sources: [S3], [S4], [S5]


Anne Bouverot


– Areas of expertise: AI policy, digital diplomacy, telecommunications


– Role: Special Envoy for Artificial Intelligence, France; Chair of the board of École Normale Supérieure


– Title: Former Director General of the GSMA


– Sources: [S6]


Ayah Bdeir


– Areas of expertise: Open-source hardware, multilingual AI, entrepreneurship


– Role: CEO of Current AI; Engineer & entrepreneur with 20 years’ experience building open-source tech infrastructure


– Title: –


– Sources: [S8]


Amitabh Nag


– Areas of expertise: Linguistic AI, multilingual language models, large-scale inference systems


– Role: CEO of Bhashini


– Title: –


– Sources: [S9]


Martin Tisne


– Areas of expertise: AI governance, democratic AI values, collaborative AI development


– Role: Chair of Current AI; Lead of the AI Collaborative organization


– Title: –


– Sources: [S11], [S12]


Shalindra Pal Singh


– Areas of expertise: Integration of multilingual models, AI hardware-software co-design


– Role: General Manager at Bhashini; collaborator on device integration


– Title: –


– Sources: [S15]


Abhishek Singh


– Areas of expertise: Public-interest AI policy, digital sovereignty, government-industry collaboration


– Role: Under-Secretary, Ministry of Electronics and Information Technology (India)


– Title: –


– Sources: [S16]


Device


– Areas of expertise: On-device AI inference, multimodal processing (ASR, MMT, LLM, TTS)


– Role: AI hardware prototype that responded to queries


– Title: –


– Sources: – (information from transcript)


Andrew Tergis


– Areas of expertise: Embedded AI engineering, hardware prototyping, model quantization


– Role: Lead engineer on the Current AI side of the project


– Title: –


– Sources: [S22]




Full session report
Comprehensive analysis and detailed insights

The session opened with Sushant Kumar asking how a paradigm can be built so that artificial intelligence works for everyone and stating that this was the purpose of the gathering [1-2]. He introduced the theme “The case for personal, local and multilingual AI” and announced a joint effort between Bhashini and Current AI, coordinated by Kalpa Impact, to showcase a “seminal open-source AI hardware device” that is multilingual, handheld, privacy-preserving and capable of operating without connectivity [3-5]. After a brief outline of the agenda, he promised a video that would “capture our imagination of what this product would look like” and a live demonstration by the makers [6-12]. The video underscored that India’s AI journey has moved beyond pilots to “populations’ reach, clear use cases, last-mile delivery” and a vision of AI that is not governed by any single country or corporation [13-20].


Following the video, Sushant invited Ayah Bdeir, CEO of Current AI, to lead the product demonstration [22-24]. Ayah briefly paused the session to organise a group photo and then introduced the demo team: Andrew Tergis, the lead engineer from Current AI, and Shalindra Pal Singh, a general manager at Bhashini who had worked closely on integrating Bhashini’s models [30-33]. She highlighted that the prototype had been built in a remarkably short sprint of five to six weeks, a timeline made possible by the pre-existing partnership discussions and her admiration for Bhashini’s work on linguistic diversity and its 250 models [34-38]. Ayah framed the collaboration as a model of “identify-gap → co-develop → open-source” that results in a public-good stack for open AI, emphasizing that Current AI seeks to learn partners’ priorities and release jointly built technology openly [70-78].


Andrew then described the prototype as an “open AI inference device” that differs from other conference products by being deliberately general-purpose: any user can connect, upload models and run inference locally on a handheld unit [53-56]. He demonstrated a flagship application co-created with Bosch for vision-impaired users, where a button press triggers a spoken question in the user’s native language; the device captures audio, transcribes it via automatic speech recognition (ASR), translates it to English, feeds the text and an image to a large language model (LLM), translates the answer back, and finally synthesises speech in the original language [58-66]. The entire pipeline runs on-device: ASR, neural machine translation (MMT), LLM inference and text-to-speech (TTS), illustrating the feasibility of full-stack offline AI [55-62].
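The five-stage loop described above can be sketched in a few lines. This is a minimal illustrative sketch only: every function name and canned string below is a hypothetical stand-in, not Bhashini’s or Current AI’s actual API, and a real build would back each stage with a quantised on-device model.

```python
def asr(audio: bytes) -> str:
    """Automatic speech recognition: spoken audio -> text in the user's language."""
    return "mez par kya hai"  # canned transcription (romanized Hindi), for illustration

def translate(text: str, src: str, tgt: str) -> str:
    """Machine translation between the user's language and English (canned lookup)."""
    canned = {
        ("hi", "en"): "what is on the table",
        ("en", "hi"): "mez par kuch chocolate hain",
    }
    return canned[(src, tgt)]

def llm_answer(question: str, image: bytes) -> str:
    """Multimodal LLM: answers the English question about the captured image."""
    return "there are some chocolates"  # canned response, for illustration

def tts(text: str) -> bytes:
    """Text-to-speech: text in the user's language -> audio."""
    return text.encode("utf-8")  # placeholder for synthesized audio

def answer_query(audio: bytes, image: bytes, lang: str = "hi") -> bytes:
    """Run the full offline pipeline: ASR -> MT -> LLM -> MT -> TTS."""
    question_en = translate(asr(audio), lang, "en")  # transcribe, then to English
    reply_en = llm_answer(question_en, image)        # answer with image context
    reply_local = translate(reply_en, "en", lang)    # back to the user's language
    return tts(reply_local)                          # speak the response
```

The point of the composition is that each stage is a swappable component: the same `answer_query` structure works whether the models run in a data centre or, as in the prototype, entirely on-device.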


During the live test, a Hindi query was processed: the ASR model converted the spoken input to text, the MMT translated it, the embedded LLM generated a response, and the TTS module vocalised the answer, all without any cloud connection [70-72][79-82]. In a second demonstration, an English query “What is on this table?” returned the answer “The table has candy wrappers of Twix, Milky Way, and KitKat,” showing the device’s ability to recognise objects and produce brand-level details [84-86]. Shalindra explained that the models had been heavily quantised to fit the limited hardware, yet the optimisation “reached a point where there is no hit on the accuracy fronts” [71-72]. The prototype currently runs on an NVIDIA Jetson platform but is designed to be processor-agnostic, allowing future deployments on alternative chips while supporting the deployment of any model the community wishes to use [88-91].
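Shalindra’s remark about quantisation can be made concrete with a toy example. The sketch below (plain Python with made-up weights; real deployments use dedicated toolchains) shows symmetric int8 post-training quantisation: float weights are mapped to integers in [-127, 127] with a single scale factor, cutting storage roughly 4x versus float32 while the round-trip error stays within half a quantisation step.

```python
def quantize_int8(weights):
    """Symmetric per-tensor quantization: floats -> int8 values plus one scale."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [round(w / scale) for w in weights]  # each value fits in a signed 8-bit int
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the int8 values."""
    return [v * scale for v in q]

weights = [0.02, -0.51, 1.27, -1.27, 0.8]   # made-up float32 weights
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
max_err = max(abs(a - b) for a, b in zip(weights, restored))
```

Rounding error is bounded by half a step (`scale / 2`), which is why aggressive quantisation can, as Shalindra noted, leave task accuracy essentially untouched when scales are chosen well.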


Sushant praised the demonstration, noting the logistical challenge of clearing the device through customs and the significance of its offline operation, which meant that “all those queries, all the AI processing was happening on the device” [92-100]. He highlighted that four or five models are already operational on that particular device, a notable achievement for edge hardware [100-102].


Amitabh Nag then provided the backstory of Bhashini, explaining that it was founded in 2023 after he experienced the difficulty of learning in non-native languages at school, which motivated a drive to preserve linguistic nuance and create a corpus for Indian languages [108-110][118-120]. He described the early technical hurdles of building models without existing digital data, the reliance on “brute-force” data collection and collaboration with translators to create a digital corpus, and the subsequent scaling to a system that now handles roughly 15 million inferences per day on a 200-GPU cluster, monitored through real-time dashboards [121-131].


Regarding language coverage, Amitabh reported that the current system supports 22 spoken languages and aims to expand to 36, already having digitised the tribal Bheeli language, which previously lacked a script [170-176]. He outlined four pillars for inclusive AI: (1) a small, offline-first form factor that can reach the last mile; (2) expanding the breadth of language coverage so that no language is left behind; (3) deepening model enrichment, for example by adding place-name glossaries from the Survey of India; and (4) continuous contextualisation of data to improve relevance [163-168][180-186].


Ayah shifted the discussion to broader concerns, warning that the emerging wave of “embodied AI” – glasses, robots, voice assistants – often records continuously, sends data to the cloud, and is trained predominantly on Western languages, thereby creating a hardware lock-in similar to the iPhone’s ecosystem [194-206][210-213]. She argued that open-source hardware can break this lock-in, likening its potential impact to that of Linux, which provides a neutral foundation for community-driven innovation [210-213].


She then outlined hopeful trajectories for the platform: reducing cost, improving battery life, shrinking size, enabling mesh networking of multiple units, scaling to stationary micro-data-centres powered by solar panels, and developing specialised applications such as agricultural assistants, privacy-preserving toys for children, or tourism guides [215-232]. These possibilities are “infinite” once the hardware platform is open and modular [215-226][227-232].


Fireside chat – panel discussion


Martin Tisne opened the panel by introducing Abhishek Singh as “the master and orchestrator of this entire summit” and announcing the launch of Panteo, a framework for culturally-aware data sharing [300-310]. He then set the stage for a conversation on data sovereignty and reciprocity.


Abhishek Singh argued that communities must retain rights over data derived from them and should receive tangible benefits, especially in sectors like agriculture where aggregated data can improve advice, while recognising that health data may require stricter privacy controls [299-307][306-310]. He illustrated the point with a Netflix documentary about tribal women annotating pest data, highlighting how local knowledge can dramatically improve AI outcomes [260-270]. Anne Bouverot echoed the need for trusted third-party institutions that can manage privacy-preserving data sharing, ensuring that cultural creators can opt-out or be compensated, and that data use balances public-interest research with protection against misuse [318-324][330-342]. Martin Tisne highlighted the tension between open-source AI development and the need for controlled governance of cultural datasets, prompting a nuanced debate on reconciling openness with cultural rights [329-332][318-324][330-342].


On AI sovereignty, Abhishek defined it as complete national control over the five layers of the AI stack (energy, data centres, chips, models and applications), asserting that no country should be dependent on external providers for any of these layers [368-373][374-382]. He noted that India already possesses energy sufficiency, data centres, models and applications, and is progressing toward domestic chip design and eventual fabrication, aiming for full-stack independence within the next 5-10 years [383-387].


The conversation then turned to Indo-French cooperation. Anne highlighted existing joint research on resilient, multilingual AI and suggested that France’s policy mechanisms, such as cultural quotas that fund local creators, could be adapted to AI to ensure cultural representation and funding [263-270][271-276]. Amitabh added that the partnership between India and France, reinforced by recent high-level engagements, offers complementary strengths for building alternative, sovereign AI solutions and shaping global norms [391-398][401-403].


India AI Innovation Challenge


Abhishek announced the India AI Innovation Challenge, an open-source competition that invites researchers, developers and entrepreneurs to hack the Bhashini-Current AI prototype. Submissions open on 25 February, with prize funding from both organisations; Bhashini will provide quantisation expertise and technical support, while Current AI will continue to release the hardware and software as public-good resources [409-424][419-424]. Ayah mentioned a possible prize pool of about $110,000, though the exact amount was not confirmed [420-422].


Sushant concluded by reaffirming that the combination of offline, multilingual, open-source hardware and collaborative governance of data and cultural assets can empower diverse communities, drive inclusive AI innovation and, ultimately, make AI work for everyone [233-236]. The session thus paired a tangible technical achievement and a shared vision for culturally aware AI with a concrete call-to-action through the innovation challenge, directly aligning with the overarching theme of personal, local, multilingual AI for everyone.


Session transcript
Complete transcript of the session
Sushant Kumar

And therefore, how do we develop and support a paradigm that can make AI work for everyone? And that’s why we are here today. The session today is very aptly called: The case for personal, local and multilingual AI. Through a collaboration between Bhashini and Current AI, orchestrated by Kalpa Impact, we are proud to present to you today a seminal open source AI hardware device, one that is multilingual, handheld, privacy preserving and works in zero connectivity settings. So what we are going to do today is we are going to talk about the concept of AI. What we are going to show you after this will be a video that presents the imagination of what such a device could lead to.

in terms of making AI work for everyone. And once we have done that, there’s a special treat for all of you. The maker of the device and the collaborators at Bhashani are there in the room and they will demonstrate the product to you. So why don’t I begin with playing this video, which captures, which takes some creative liberties and captures our imagination of what this product would look like. And train on what I am watching. Audio, please. Thank you. Thank you. India’s real journey is no longer about pilots or promises. It’s about populations’ reach, clear use cases, last mile delivery. This is real world impact. This is real world impact. and connected vision for AI, not one that’s governed by any one country or one company.

I think all countries have a huge amount to bring to the table and a big belief in the power of collaboration. I was ready, the cup is open, now we need you. Come innovate AI for your own language, for your own community. We want to work with as diverse a group as possible. We can’t wait to see what we do. Yes, we’re back on. And for the next segment, I would like to invite Ayah Bdeir, the CEO of Current AI, to take us through the product demonstration. Ayah is an engineer and an entrepreneur with 20 years of experience building open source technology infrastructure that works at global scale. Ayah, over to you.

Ayah Bdeir

I have a quick interruption. I have to ask everybody to come here to take a picture so that the picture can be ready by the end of the panel. You have 90 seconds free to speak amongst yourselves. Thank you. All right. All right. Thank you so much for coming, everyone. I’d like to introduce Andrew Tergis, who was the lead engineer on this project from the Current AI team, who’s going to take us through a demo. Oh, there you are. And also Shalindra Pal Singh, who is a general manager at Bhashini, who was Andrew’s collaborator and worked very closely to integrate Bhashini models into the device. And I just want to say a couple of things. This project was undertaken in a six-week period, I think maybe closer to five weeks, actually.

So I just joined Current AI in January of this year. When I came in, the partnership with Bhashini had already been in discussion, and I was very inspired by Bhashini’s work on linguistic diversity and the 250 models. And we thought this was an opportunity for us to go all the way, say, to the user and create something where really people can create AI that works for themselves, for their communities, and for their languages. So this prototype is the beginning of a journey and also a platform to imagine infinite things that are possible. And so you’ll see how it works. But as it’s working, I also would like you to imagine what you could do with it and where you could take it.

And from my perspective, I’ll just say for Current AI, this is an example of how we’d like to work with partners where we learn more about their interests and their focus areas and their priorities, and we zero in on a collaboration that we can develop together. We build it together, and then we release it as a public good. So in this case, it’s a piece of hardware and a development platform. In another case, it could be something else. But we’re really proud that this collaboration with Bhashini is our first collaborative build, and you get to see it kind of firsthand as you’re sitting here. So, Andrew, Shalindra, please. Please join me on stage, and I’ll let you take us away for the demo.

Andrew Tergis

All right. Perfect. Hello. I’m so pleased to be able to show you this prototype that we’ve created. Yes. Oh, thank you. In front of the table. Wonderful. So this is our prototype open AI inference device. So, you know, unlike some other products you might have seen at this conference, which might be designed for one very specific user or one very specific use case. This device is designed to be used by any number of users for any number of use cases. The hope is that anyone could feel empowered to connect up to this device, write their own application, pull any number of models onto the device and run inference locally in their hand. We have one flagship application that we’ve developed in concert with Bosch.

That demonstrates the models that they’ve been developing over so much time. And this sample application we call Hear the World, which is an application where a vision-impaired user can press a button, ask a question in their native language about their surroundings, and have the device read back the response, again in their native language, leveraging Bhashini’s 22-plus languages. In particular, we’re leveraging an ASR, an automatic speech recognition module, to convert the audio into text in their native language. We’ll be leveraging an MMT, neural machine translation module, to convert that text into English. We’re running it through a large language model with the image data to answer the question, and then we’ll be converting it back into their native language using, again, the MMT model, and finally a TTS module to convert it back into audio.

So this device is able to run all of those modules in concert. So without further ado, let’s try and give it a test query. Shalinder, do you think you can help me out here? I guess you’ll take the photo, and then I’ll spin it around quickly so the audience can see what’s happening. We’ll ask in Hindi. Let me just triple check. Yep, you’re all good. All right.

Shalindra Pal Singh

What it has done is: it has taken the image, and then the automatic speech recognition model kicks in, and then neural machine translation is happening, and then the response is coming from the LLM that we have embedded, and the translation is happening, and then the text-to-speech, which is being spoken out. We have quantized the model in such a way that it fits in. Usually when we do the quantization there is always a trade-off, a hit on the accuracy, but we have reached a point where there is no hit on the accuracy fronts.

Andrew Tergis

This is thanks to a truly huge effort from your team, and we wouldn’t have been able to fit such a high-fidelity LLM on this if you hadn’t done that great optimization work. So let’s see. Let’s ask another question. We have a couple of candy bars on this desk here, which we can show you. Let’s see. Let’s try it. I’m going to put this in English. What is on this table?

Device

The table has candy wrappers of Twix, Milky Way, and KitKat.

Andrew Tergis

All right. All right. It actually got the brands. And we have one more question of grave importance. But I’ll ask him in Hindi. That’s right. I got it. This is the best candy bar in the world. There we go. Would anyone like a candy bar? Anyone? Anyone? There you go. So just very briefly while we’re handing this out, this is currently based on the Intel Jetson, the NVIDIA Jetson processing platform, but we’ve used it to support other platforms as well because the processing that we’re doing does not depend on that. That just happens to be the platform we’ve chosen at the moment. And, yeah, we’re working on the ability to deploy any model that you could dream of onto this device.

Thank you.

Sushant Kumar

Thank you very much. How did everyone feel about that demonstration and the things that can be done? Thank you. Thank you. And kudos to the Bhashini team, which worked tirelessly, and, of course, Andrew and the current AI team, which worked tirelessly to make sure the hardware, software, all of that was integrated. We had to get a device through customs as well. So that took some time, but eventually it’s here. and it’s working, which is amazing. And the best part is that the device is offline. All those queries, all the AI processing was happening on the device. And there are four or five models operational. Four models operational on that particular device, no mean feat. I salute the engineers who have worked on this, and there’s more to come.

And we know we have to get in a lot in a short period of time. So I will invite Ayah Bdeir, the CEO of Current AI, and Shri Amitabh Nag, the CEO of Bhashini, to join me for a fireside chat. And we’ll try and understand what it is about personal, local, multilingual AI that they are passionate about. So this is also about what their motivations are. So why don’t we start with you, Amitabhji. So we all know a lot about Bhashini, we have heard about it, and you know, it’s a superstar at this point in time in terms of what you have achieved. Tell us about the origins, tell us about how this all started, and why this is personal to you.

Amitabh Nag

Hey, thank you. See, we all are born with our mother tongue, right? We learn our mother tongues for a good 4-5 years before we land up in a school, and when we land up in a school, it’s a three-language formula. So I am a Bengali, and, you know, when it is Bengali, everything is eaten, so “chol khawe” is the right word. So when you go to the school and you have to do Hindi and English, you know how it could be: for the first 6 months, people will be laughing at you when you are translating and speaking, because that’s the first way of speaking. You are not a native language speaker, so you will be translating and speaking.

That’s the linguistic nuance that you went after. So, you know, over a period of time, of course, we grew up. We were told that you have to learn English to succeed in life. So that’s another given which was there. And obviously, this opportunity came up. You know, there was already a concept which was there. And obviously, we started with, you know, one room office, first employee.

Sushant Kumar

When was this? Which year?

Amitabh Nag

This was in 2023.

Sushant Kumar

Okay. That’s recent. That’s recent.

Amitabh Nag

And then obviously, we started growing as a team, looking at various use cases. Initially, the first question which used to come up was: what’s the accuracy? But then, you know, our models were built up in difficult conditions because we didn’t have digital data to build the AI models, so we collected the data through brute force. We went across to multiple places with translators who actually created the corpus, a digital corpus. We still had deficient data, but we went ahead to build the models and deploy them. And during deployment, we had challenges which obviously came up from all aspects.

And today, when we have actually deployed the use cases, learned from them, improved them, we are now in a situation where we are running about 15 million inferences a day with a 200-GPU system, all with dashboards which actually give you every inference’s timeliness, how much time it takes, et cetera. So we are able to monitor in real time what is happening in our system, who our customers are, how they are using it.

Sushant Kumar

Fantastic. It’s wonderful to hear about your personal motivations. And I’ll move to you. How many languages do you speak?

Ayah Bdeir

My native tongue is Arabic, and then I speak French and English, and I’m learning Spanish.

Sushant Kumar

So it’s very apt to move to the personal and multilingual. I have two questions for you. One, tell us a little bit about Current AI and why this interest in open hardware and the partnership with Bhashini; how does this tie back to Current AI’s strategy? And second, why is this personal to you?

Ayah Bdeir

So, Current AI was actually born out of the AI Action Summit last year in Paris. It’s a public-private partnership with a mission to create AI for the public interest. And so it’s a partnership between philanthropy, government and the private sector to really say, we’re going to tackle public interest AI at scale. And the reason we’re going to do that is because the dominant companies that are governing our lives in AI operate at a scale, a financial scale, operate at an ambition level, that if we don’t match it, we don’t really have a chance to be a real alternative. And so Current AI was born out of that desire. The goal is to rally a global community, collaboratively and collectively, to build a public stack for open AI that’s completely vertically integrated.

And so the way we work is we work with partners because the core premise is collaboration. Work with partners where we’ll identify an area of common interest and a priority and a gap in technology, and then we’ll zero in on that gap, work on it together, and then develop a piece of tech and release it as a public good. And so encourage this collaboration. This creation of technology that is put back in the public good, as well as have grant making under sort of like our fund pillar in order to encourage people already doing this work. And this topic is important to me, has been important to me for many years. I’m from Lebanon, from Beirut, and like I said, my native tongue is Arabic.

For the past many years, you know, our use of WhatsApp and mobile and social and everything, a lot of us in the Arab world lost use of Arabic. You know, my family and I, my sisters, my mom and my sisters and I speak in English to each other online all day. We speak on WhatsApp in English. The voice recognition is never good enough in Arabic. You spend more time correcting it than you do doing anything else. And so now it’s improved a little bit. But really, you know, technology has had an effect on the way we communicate with each other. And so for many years, it’s been a real concern for me that, you know, technology, if it’s not made by us, it’s not for us.

And so when I joined Current AI early this year, multilingual diversity was already a topic. And I was very happy about that. And sort of really… I really wanted to expand it into this idea of not just… language diversity, but cultural diversity and cultural preservation as a whole. And so this sort of idea came about and you can tell more about it.

Sushant Kumar

Fantastic. What a story of genesis. And of course, Silicon Valley making devices on AI for local use cases is never going to be as effective as giving power to the hands of the people. So on inclusivity, Amitabhji, one of the visions of Bhashini is to expand access. So when you think of this partnership with Current AI, what is the future you envision in terms of expanding access and creating inclusion with Bhashini as the linchpin?

Amitabh Nag

So a few things. So, you know, when you look at the size of the device, we have almost reached a form factor which is quite significant. It’s small, right? And it can be carried to the last mile. And since it works offline, you are in a position to actually use it anywhere. So that’s the first part of inclusivity. We obviously have plans to look at smaller form factors as we go forward. The second thing is to look at the language coverage. We currently cover 22 languages. In our system, we already have 16 languages, with 14 more languages on text, for a total of 36 languages. And we would like to increase that breadth.

And recently we have digitized one of the tribal languages, Bheeli, which doesn’t have a script. So that also gets added to it. So that is about the breadth of languages, which will be continuously added to. So first, we are talking about form factor; second, we are talking about offline; third, we are talking about creating a breadth of languages so that no language is left behind, and hence no person is left behind, including the tribal languages. The fourth factor is about how we enrich the models, which is a continuous activity which Bhashini takes on. There are multiple areas where the models still have to be enriched.

For example, we were talking to the Survey of India, and they have about 16 lakh (1.6 million) place names which are still to be digitized and put into the system. So those are glossaries which we are building. There are contextualization efforts which are happening. So over a period of time, language enrichment as far as depth is concerned is another thing which we are looking at. So we’re looking at breadth, depth, offline capability and form factor as the four things which will move forward in this.

Sushant Kumar

Fantastic. I can certainly see open hardware playing a big role in that as well. I have a question for you on how you look at the future. What gives you the most hope, and the most concern, about the future of language? You started talking about how you feel that Arabic and its nuances are getting lost. So what gives you the most hope or the most concern about the future of language in an AI-driven world? Could you talk about that?

Ayah Bdeir

So I’ll start with the concern. I’m concerned about this new frontier of embodied AI. Over the past year or so, every big tech company has released their version of an embodied AI device that wants to enter your home, that wants to be close to your body, wants to enter your personal space. Whether it’s Meta’s glasses, or robots, or Amazon Alexa. And we’re not in full control of these devices, and we don’t know how they’re developed, and we don’t know how they’re trained. You know, last week or the week before, Meta announced that the glasses are going to start doing facial recognition on every person you encounter in the street.

So now, unknowingly, you’re walking down the street, and if somebody is wearing Meta glasses, you are being recorded and facially recognized. So we have these devices. We don’t know how they work. They’re continuously recording our data, sending it out to the cloud. We also don’t know how they’re trained, and oftentimes they’re trained on Western languages. And so hardware is where the lock-in first starts. It’s how the iPhone locked up a lot of technology innovation: what happens is these companies will then give us APIs into their devices, startups will start forming and building on top of these devices, then the startups start building a dependency on the device, and you start to build a whole stack on a core piece of hardware that you do not control. So it’s really kind of like a core building block that we have to crack before we let them own the entire stack or the supply chain. I spent 15 years before Current AI in open source hardware. I’ve seen how powerful it is when you develop on an open platform and people do what they want with it. It’s the same power that you get from something like Linux. And so that’s sort of a big area of concern. The area of hope for me is that there are many trajectories for us to improve from here. On one side, you can improve the device itself.

You lower its cost. You improve its battery life. You shrink its size. You make it more beautiful. So that’s one axis. Then there’s another axis that you can develop along: you can have multiple of these devices together, connect them in a mesh network, and now you have distributed inference that you can run something larger on. You can have a larger version of this device that’s stationary; it can be like a micro data center. You can put a solar panel on it, and now, suddenly, it doesn’t need a battery. So you can infinitely innovate on the possibilities of this core building block. And then the third kind of track is what you do with it.

You make a device for a farmer to identify how to deal with their crops. You make a device for a parent who wants to give their kid a toy but doesn’t want the toy to be communicating their private data back to the cloud. You create some sort of, I don’t know, tourism device that you can put around your neck and helps you move around, various sorts of things. And the opportunities are infinite.

Sushant Kumar

Fantastic. And I wish we had more time to just continue going; we’re just scratching the surface. But we’re at time. And I thank you, Amitabhji, for the great work that you and your team are doing. Thank you. And I wish you all the best and all the luck for making that vision into a reality. Thank you very much. Thank you. And we move into our next segment, which is another fireside chat, and for that I would now hand the floor to a long-time friend and colleague, Martin Tisne. Martin Tisne leads the AI Collaborative, an organization working on building AI grounded in democratic values and principles, and he’s also the chair of Current AI. Martin, over to you.

Martin Tisne

Thanks very much. And my first task is going to be to welcome Abhishek Singh, who everyone knows, and who is the master and orchestrator of this entire summit. Congratulations, Abhishek, and I’m amazed you’re still standing. Welcome. He was also the orchestrator of the Paris summit. And welcome to the Special Envoy to the President. Thank you very much. So, as Sushant was saying, I’m extraordinarily excited by Aya’s leadership when it comes to Current AI, and the work in really turning this work around linguistic diversity to the question of cultural preservation. It seems to me that ensuring that AI isn’t squashing all of these incredible cultures that make up the beauty of the world into a monoculture, or into a small number of monocultures, is one of the most important questions that we have today. So my first question to both of you, maybe starting with you, Abhishek, and then to Anne, is the same question: what is your vision?

What is the world that you would like us to live in when it comes to this intersection of AI and culture? If we get it right, what does it look like, whether it’s five years or ten years from now?

Abhishek Singh

languages. He knows only his local tongue, his mother tongue. He does not even know how to key in, or how to navigate a captcha, or he gets lost with the hashtags and the Amazon. So for such people, if they are able to talk to the device, put their query in without internet or bandwidth or connectivity, and get a reply back, that will be empowering. And that, I think, is the ultimate objective of this summit also: democratizing the use of AI and ultimately making AI work for all. Thanks.

Martin Tisne

Thank you very much, Abhishek. Anne, what is your vision?

Anne Bouverot

So, of course, I share a lot of what Abhishek said. I also think about using AI through our phones, and one way to say this is that when I get online on my phone (I mean, I love San Francisco, I love Shanghai, but I’d like to have a wider choice), I don’t necessarily want to be transported to Silicon Valley, or transported to Shanghai, when I get into AI. And that’s a little bit of a joke, but if all the cultural representation, if all the legal background, if all the customs that are taken as just the de facto way you interact with people, if that’s the choice, well, that’s just such a reduction of cultural diversity.

And I think it’s just not okay. It’s not just about being able to have access to a French AI or an Indian AI. It’s even more than that. If I’m interested in music and if I come from a particular area in France, well, I’d like to be able to have that community and its culture represented there. So I think that’s part of my vision.

Martin Tisne

Thank you. And if I can stay with you just a second, Anne: from a French perspective, from France’s point of view, how do you see culture and AI playing together? What does it look like? When I was a kid growing up in France, from a cultural perspective, it was at a time (I actually think it was a good idea in retrospect; you’ll tell us what you think) when there was a law that mandated a certain percentage of music on radio to be sung in French. There was a law that mandated a certain amount of movie productions to be in French. And that ended up, it seems to me, with a certain amount of, you know, cultural patrimoine, as we say, existing.

So from a policy perspective in France, when it comes to artificial intelligence and culture, do you think that at some point there needs to be a sort of a set norm, like we did in sort of in movies and radio? What do you think?

Anne Bouverot

That’s a good question. I don’t know whether we need a set norm, but yes, there are mechanisms to encourage creation in France and in Europe, and that’s quite important. With every movie that you go and see, which can be from any country, there’s a certain tax, and a certain amount of money goes to a fund that then helps French creators to go and prepare whatever they want as their next film. And I think that’s a good thing. And that mechanism doesn’t make it hegemonic; of course, we love culture from all over the world, but it helps ensure that there’s an element of French cultural creation.

And that’s what we definitely want to continue to have. And we want people to have the ability to see that, in France but also all over the world, just like we love to see Indian movies or listen to Indian music, some symphony or some movement. So that diversity needs to be maintained, needs to be ensured, including through some mechanisms to fund it. Yes.

Martin Tisne

Thank you very much, Anne. Abhishek, a similar question to you.

Abhishek Singh

I think if AI has to cover all aspects, then it has to be rooted in data sets that are diverse, and data sets, when you talk about any cultural context, will include not only languages but also the culture, the heritage, the music, the movies, the songs and lots of folklore. Because in fact, if you look across India, if you go to the rural areas, there are lots of traditions which are not even documented well. So those things are not even available in a digital format; they are known to people. In fact, recently I was watching a documentary on Netflix called Humans in the Loop.

It’s set in a state of India, Jharkhand, with a large tribal population, and there are these tribal women who are doing data annotation for an American firm. And it shows that they are seeing leaves and pests and they have to mark whether it’s a pest or not. So there is this young girl, and what she does is she sees an image of a pest and she marks it as not a pest. Her manager comes down heavily on her and says, this is obviously a pest, how are you saying it’s not a pest? She says, this tree grows in my local forest, around where I live, and I know that this worm eats only leaves which are dying.

In a way, it helps the plants; it’s not a pest. So again, having this traditional knowledge built into the corpus of data sets on which we train AI models will be very, very vital if we have to ensure that AI doesn’t hallucinate, and if AI is to become near to what a human is. So it becomes very important to capture this cultural context from all across the world, from all communities, all cultures, all traditions; only then will we be able to build something which is truly human-like. A purely technological pursuit of AGI will not solve the problems that we are living with.

Martin Tisne

That’s a great example, thank you. But then, maybe staying with you, Abhishek, for a second (and I’ll come back to you, Anne, on the question of reciprocity): you talked about the data sets. Communities and cultures, in all their diversity, are sharing their data; we want them to be sharing their data with different AI models. What does it look like from the community perspective, do you think? Should they be involved in it? Should they have rights over the data? How do you think about it?

Abhishek Singh

It is a very interesting question, because when it is about sharing of data across companies, across industry, we have to have frameworks which allow data to be used for public purposes; that means data used in a way which does not violate the privacy or the personal identity of the person who owns the data, the person the data belongs to, the data principal per se. So when data is being shared, the community will need to be involved. If you don’t do that, in the interest of business and in the interest of commercial requirements, the possibility of misusing the data goes up. So it’s very important to have standards, not only technical standards, but community standards which are rooted in the culture and the belief systems of the place the data is coming from, in order to ensure that the models and the applications…

Martin Tisne

Thank you. If I can go a little bit further on that question: there’s the question about the rights of the individual and the rights of the communities over the data. Do you think, in the way that you’re working, there is also a reciprocity, in the sense that if data about them is used for a particular purpose, then the community should benefit from it, whether it’s a translation or another device? How do you think about that?

Abhishek Singh

So you need to think about how different use cases may have different applications. For example, say it’s data about agriculture, and I have aggregate data about a particular area, and that is used to generally advise farmers with regard to what they should sow for maximum benefit, and at what time they should sow it. Then that data should be shareable, and that’s to the benefit of everyone. But if it’s, for example, health data, then the individual might not want to share that data with the larger ecosystem. So I think it will be context-specific, and we cannot have general rules about sharing of data and the reciprocity principles across different sectors.

Martin Tisne

Thank you very much. Anne I have a similar question for you on this question of reciprocity. What’s your take?

Anne Bouverot

I think that’s a very profound question. Part of the reason why you want to share cultural data is so that cultures are preserved and you don’t end up with one or two or three cultures in the world, but something that is more diverse. So it is in the interest of a cultural group, of a civilization, that in the world of AI this culture is represented. And from that perspective, you have a very natural reciprocity loop. But at the same time, creators are saying, I don’t want my data to be used if I don’t have a mode of being compensated or recognized or a way to oppose. And so you have this tension between artists, for example, who say, well, I want my rights to be maintained and I want some type of compensation.

If this is being used to feed AI models, and then for people to earn money out of it. But then, on a collective basis, you do want that culture to be represented. So I’m not sure I have a solution, but I very clearly see the tension. One way we can navigate that is to have a right of opposition for specific artists, so that they can say, no, my data, my creations are not going to be used. And at the same time, you can certainly have historical information, and things that are not so subject to remuneration for living artists, be part of the general cultural data that you use to train AI. But beyond these two obvious things, I’m not really sure.

So we need to continue to work on this.

Martin Tisne

Thank you. And again, just to go a bit deeper on the question: it really is a fascinating question, because from the perspective of the communities whose data it is (it’s data about them), as you say, you want people to know about your culture, you want the culture to be preserved, and at the same time you want a certain degree of agency over how the data is used. In an earlier panel we were talking about indigenous data sovereignty, and about the Maori community in New Zealand, and the degree to which, as I understand it, in Maori culture, any data, any information that pertains to Maori culture is effectively part of Maori culture. So there’s a real question of agency.

My question is: in the run-up to the Paris summit, when we were working together, we talked quite a bit about the relation between open source AI on one hand, and the governance of the data on the other, with the data then controlled in different ways. So how do you think about this balance? Because it strikes me that getting the balance right between, on one hand, the open source components (and I’ll come to the same question to you in a second), and, on the other hand, a more controlled approach around data governance, that’s the special sauce. What do you think?

Anne Bouverot

Yeah, I completely agree. And maybe, as Abhishek was saying, the example of health data is a good one there, because for cultural data, you want the general benefit and you want to preserve artists’ rights; I think those are the two dynamics. For health data, as an individual, as a patient, if you’re being asked the question, do you want to protect your personal data, the answer is yes. If you’re being asked the question, are you willing to share your data with other people who have, or are at risk of, a similar illness so that it can help them, the answer is also yes. And then how do you balance the two? So you need to find some ways to share data in a platform, or in a way, that you have trust in. It needs to be privacy-preserving. It needs to be held by an actor you trust; even if you don’t go and look at all the terms and conditions, you need to understand that it’s an institution or a third party that you can trust. And then you want to be able to rely on that third party to make the right decisions: yes, sharing the data to enable research and find new cures, but maybe not sharing it with insurance companies so that you can be charged a different rate depending on what your personal situation is. And then when you get into sovereignty, maybe you’re happy for this to be shared with innovative startups in your country or your region that will develop new cures, but maybe not with some other actors.

So you get to a number of different levels and questions. And for that, having trusted third parties that can, on your behalf, make the right decisions is, I think, very important.

Martin Tisne

Thank you. Let me put the same question to you. We’ve talked a lot about open source AI over the course of the week. How do you see the balance between, on the one hand, open source, and on the other hand, the question of the cultural data that we’ve been talking about?

Abhishek Singh

Again, ultimately, I’ll go back to the end objectives. What is the purpose for which we are sharing the data? Is it serving public interest or is it serving private interest? Is there a benefit for the user to whom the data belongs? So, for example, take health data. We are over COVID, but we keep on hearing about outbreaks of flu and other ailments. If aggregate data about the incidence of such diseases, and linkages with other factors, environmental factors, weather factors, rain factors, is shared, then people can think of devising AI-enabled solutions, integrating various data sets and trying to see why, in a particular geography, in a particular locality, some element of this is happening.

That is the public interest. So we will have to decide on a case-by-case basis, whenever data is being shared, whether in open source or in a proprietary solution, what the end objective is, what the problem is that I am going to solve, and whether it is serving the larger interest of the community. Is it serving the larger public interest, or is it being done to benefit a few corporations? Take, for example, the example she gave about insurance companies: if the data about health consumption or something leads to an increase in my insurance premiums, that is not fair, because they are linking that data with the individuals to whom the data belongs. So we will need to think of privacy-preservation techniques, we will need to think of anonymization techniques, so that in no way is the data principal, to whom the data belongs, harmed in an adverse manner.

So we will have to do this in a very nuanced manner; there is no one-size-fits-all solution. If we do that, we will avoid the risks dominating the narrative, and we will move towards the positives that would benefit people the most.

Martin Tisne

Thank you very much. So then there’s a question I can’t resist asking you, which is: what’s your definition of sovereignty? Because you mentioned the term, and we’ve talked a lot about this this week, and in the context of this conversation it’s really interesting, because there’s a question of sovereignty from a nation’s perspective; there’s a question of sovereignty (I mentioned that Maori example, indigenous data sovereignty) from a community’s perspective; and then we’ve both been talking about health, so there’s a question, at an individual level, of the sovereignty I have over data about me. So with your experience, coming towards the end of the summit, and the experience that India has, when you think about sovereignty and AI, what do you think of?

Abhishek Singh

I feel that sovereignty, of course, is traditionally a political science concept, wherein nations which are sovereign need to have complete control over what they do and how they do it, with entire control of their decisions. So when you apply it to technology, and when you apply it to AI specifically, the same concepts will apply with regard to what I want to do, with whom I want to do it, and how I want to do it. Nobody else should make decisions on my behalf. So, ideally, a complete sovereign AI stack will mean that we should have complete control over all the five layers of AI: the energy layer, the data center infrastructure, chips, models, and applications and use cases.

We should have complete control over it. But the technology is evolving right now, and in fact I don’t think any country has complete control over the entire AI stack; every country has only partial control. In the context of India, we are there on energy sufficiency. We have the data centers. We have our models, our applications, but we don’t have the complete stack. We have the capability to distribute the compute. We do hope that in three to five years we will design our own chip, and in five to ten years we’ll be able to have a fab of our own as well. In the short term, if I can decide which chip I want to use, how I want to use it, and how I procure it, rather than being subject to conditionalities that others force on us, that will be sovereignty.

So the same concept of sovereignty that we apply at the beginning of political science, wherein complete control of decisions lies with the sovereign government, should be the way we look at sovereignty in AI as well.

Martin Tisne

Thank you very much. So, just as we’re ending, and we’re now at time, feel free to weave in the question of sovereignty. In the wake of President Macron’s state visit and the bilateral relationship between France and India, what do you both see, starting with Anne and then finishing with you, Abhishek, as opportunities for France and India to jointly work on global norms and global approaches for a more contextual, culturally inclusive approach to artificial intelligence?

Anne Bouverot

Well, I’ll try to be short, but this is the year of joint innovation between India and France. There are many areas where we’re collaborating and will continue to collaborate. Clearly, Current AI and this work on multilingual AI is one. Working on AI that is resilient and sustainable by design, as we were just discussing earlier with Abhishek, is clearly a priority, as is joint research. And then I can’t resist weaving in the work on sovereignty. I think with sovereignty, no one actually, not even the U.S., has it all; they don’t have the chips. So nobody can do everything alone. I believe sovereignty means having a choice and building alternative solutions. And I really think we can and we will jointly build alternative solutions between France and India.

Abhishek Singh

I echo her, and in fact the partnership between India and France has been there for quite some time. Last year we co-chaired the AI Action Summit with France, and the partnership has continued this year. This year, of course, as you know, we have launched a year of innovation, and many more activities were announced by President Macron and our Prime Minister last week, and we are looking forward to joining you at VivaTech in the next few months. There are many more activities: partnership at the university level, at the research level, at the business level, at the government level. So I strongly believe that, working jointly, especially with a trusted partner like France, India and France have complementary strengths, and we can try to present an approach to building a solution that can become an example for the whole world.

Martin Tisne

Thank you very much, thank you very much. It was an honour to launch in Paris, and a pleasure to launch this partnership in India. Thank you. Thank you.

Announcer

Hello. Thanks, Martin. Abhishek Singh sir, I request you to stay on stage. And Aya, we’d love to have you on to launch the Global Innovation Challenge, in the spirit of what Anne said. And Amitabh Nag sir as well. Please.

Abhishek Singh

Am I going first? Okay, great. So, great session, great thoughts, great demo. All of us have seen the demo of the reference device, the device which has been built in partnership between Bhashini and Current AI. And in fact, I must mention that it was just a few weeks back when we had this discussion, because I have been discussing with Martin, after the discussions and announcements on public interest AI, what Current AI will do with the 400 million dollars, euros, that they have raised. And I was saying, let’s do something which can really make an impact, and if we can do something at the Impact Summit, it will be worthwhile. But kudos to the teams: they have built this collaborative design, designed by engineers from both Bhashini and Current AI, in such a way that it’s a platform, a prototype on which we can innovate.

It’s completely open source. It’s hackable, it’s privacy-preserving, it’s multilingual. And with on-device AI, this prototype is capable of functioning in remote locations, not only in India but anywhere else in the world where connectivity is a challenge for any reason; if there’s an earthquake or a natural calamity and we can’t have connectivity, it can still work. So that can be really transformational for people accessing services. And in partnership with Current AI and Bhashini, it is in fact my honor and privilege to announce the India AI Innovation Challenge, which will give an opportunity to researchers, engineers, developers and entrepreneurs to build on this prototype. The prototype will be available in an open source manner for everyone to hack: you can make it smaller, you can make it sleeker, you can solve individual use cases for different sectors. It’s based on an open source software and hardware design, and the kinds of use cases one can think of will be limitless.

So there will not be one but multiple solutions that can be built on it. And we are opening it today; the date here says that submissions will open on 25th Feb. On 25th Feb we will launch the challenge on our website, through which applications can be submitted, and there is some time to build the actual device. Those who win will get a very handsome reward, funded both by Bhashini and Current AI, and together we will try to ensure that we are able to build a product that the whole world can use.

Amitabh Nag

So we will continue to, you know, support this effort through our quantization mechanism, and technical support will also be available with respect to model enrichment, etc. So this will be a joint effort: people are supposed to put in the effort and come back to us on the challenges, and we will work on that together.

Ayah Bdeir

I’ll just say, for Amitabh, because maybe he didn’t get to say it: Bhashini is offering, I think, a $110,000 prize to the winners. Maybe, I guess, if people make a demand, the number will increase; on your way out, please make a request, everyone, for the number to go up. There’s also a big part of it for participants, to make sure that they have support while they’re developing their hardware and software, and to showcase the work online to inspire many other people. Really, the point of it is to expand imagination and start this conversation about making your own AI, and start the conversation about AI being personal and multilingual and solving communities’ and individuals’ own problems. Today it’s a piece of hardware; tomorrow it could be something in software; the day after, it can be in data. So really, this is the beginning of the journey. Thank you so much for coming, everyone. Thank you for being such good partners. Thank you, Amitabh and the Bhashini team, thank you to the Current AI team, thank you, Martin, for bringing us together, and have a great rest of the week, and rest, hopefully, for you. Bye.

Related Resources: Knowledge base sources related to the discussion topics (19)
Factual Notes: Claims verified against the Diplo knowledge base (2)
Confirmed (high)

“The prototype was built in a six‑week (actually five‑week) sprint, made possible by pre‑existing partnership discussions between Current AI and Bhashini.”

The knowledge base states the project was undertaken in a six-week period, possibly closer to five weeks, and notes that the partnership with Bhashini had already been in discussion when the Current AI team joined [S32].

Additional Context (medium)

“Bhashini’s work on linguistic diversity and its large portfolio of models (250) inspired the collaboration.”

Bhashini’s focus on linguistic diversity is highlighted in the knowledge base, which describes admiration for its work on language diversity [S1] and mentions the partnership discussions centered on this theme [S32]; however, the exact number of models (250) is not specified in the sources.

External Sources (122)
S1
Inclusive AI_ Why Linguistic Diversity Matters — -Sushant Kumar- Session moderator/host
S2
Building Public Interest AI Catalytic Funding for Equitable Compute Access — – Dr. Shikha Gitao- Andrew Sweet- Sushant Kumar
S3
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Giordano Albertazzi — -Announcer: Role/Title: Event announcer/moderator; Area of expertise: Not mentioned
S4
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Takahito Tokita Fujitsu — -Announcer: Role as event announcer/host, expertise/title not mentioned
S5
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Cristiano Amon — -Announcer: Role/Title: Event announcer/moderator; Areas of expertise: Not mentioned
S6
Building Trusted AI at Scale – Keynote Anne Bouverot — -Anne Bouverot: Special Envoy for Artificial Intelligence, France; Diplomat and technologist; Former Director General of…
S7
How to make AI governance fit for purpose? — – Anne Bouverot- Chuen Hong Lew – Jennifer Bachus- Anne Bouverot
S8
Inclusive AI_ Why Linguistic Diversity Matters — – Amitabh Nag- Ayah Bdeir – Ayah Bdeir- Martin Tisne
S9
Inclusive AI_ Why Linguistic Diversity Matters — -Amitabh Nag- CEO of Bhashini
S10
Digital Democracy Leveraging the Bhashini Stack in the Parliamen — – Kritika K.R.- Amitabh Nag – Prasanta Ghosh- Amitabh Nag
S11
Inclusive AI_ Why Linguistic Diversity Matters — – Ayah Bdeir- Martin Tisne
S12
Building Public Interest AI Catalytic Funding for Equitable Compute Access — The panelists challenged the narrow focus on compute ownership, with Martin Tisné warning against potential “white eleph…
S13
ElevenLabs Voice AI Session & NCRB/NPMFireside Chat — -Shailendra Pal Singh: Role/title not explicitly mentioned, but appears to be a co-presenter/expert on Bhashini translat…
S14
Digital Democracy Leveraging the Bhashini Stack in the Parliamen — -Shailendra Pal Singh- Senior General Manager, Bhashani
S15
Inclusive AI_ Why Linguistic Diversity Matters — -Shailendra Pal Singh- General Manager at Bhashini, worked on integrating Bhashini models into the device
S16
Open Forum #30 High Level Review of AI Governance Including the Discussion — – **Abhishek Singh** – Under-Secretary from the Indian Ministry of Electronics and Information Technology Abhishek Sing…
S17
Announcement of New Delhi Frontier AI Commitments — -Abhishek: Role/Title: Not specified (invited as distinguished leader of organization), Area of expertise: Not specified
S18
GPAI: A Multistakeholder Initiative on Trustworthy AI | IGF 2023 Open Forum #111 — Abhishek Singh:I can take that, no worries. Thank you, Abhishek. The floor is yours. You can give your question. Yeah, t…
S19
Mobile Working Group Peer Reviewed Document — –  Device : …’a piece of equipment with the mandatory capabilities of communication and the optional capabilities of se…
S20
Foreword — 16. BT. 2019. BT’s Cyber Index reveals the scale of today’s cyber threat . https://newsroom.bt.com/ bts-cyber-index-reve…
S21
WSIS+20 Open Consultation session with Co-Facilitators — – **Jennifer Chung** – (Role/affiliation not clearly specified)
S22
Inclusive AI_ Why Linguistic Diversity Matters — – Shailendra Pal Singh- Andrew Tergis
S23
Announcement of New Delhi Frontier AI Commitments — -Andrew: Role/Title: Not specified (invited as distinguished leader of organization), Area of expertise: Not specified
S24
AI Impact Summit 2026: Global Ministerial Discussions on Inclusive AI Development — Against this backdrop, we countries of the global South must prioritise strategies and regulations for ethical and responsible use…
S25
IGF 2024 Global Youth Summit — Margaret Nyambura Ndung’u: Thank you, Madam Moderator. Good morning, good afternoon, and good evening to all of the di…
S26
Open Forum #33 Building an International AI Cooperation Ecosystem — Participant: ≫ Distinguished guests, dear friends, it is a great honor to speak to you today on a topic that is reshapin…
S27
AI for agriculture Scaling Intelegence for food and climate resiliance — The speaker stresses that moving beyond pilot projects to full‑scale platforms demands trust, investment, and a replicab…
S28
Building Climate-Resilient Systems with AI — The focus must shift from research and pilots to deployment and impact through coordinated international efforts
S29
Transforming Health Systems with AI From Lab to Last Mile — And it has become so sensitive that today a lot of our customers, they do ask us whether you have a continuous, continuo…
S30
How Multilingual AI Bridges the Gap to Inclusive Access — And I was like, oh, my gosh, this is so cool. and really the fact that they were going to sort of the source and getting…
S31
AI race shows diverging paths for China and the US — The US administration’s new AI action plan frames global development as anAI racewith a single winner. Officials argue A…
S32
https://dig.watch/event/india-ai-impact-summit-2026/inclusive-ai_-why-linguistic-diversity-matters — And so the way we work is we work with partners because the core premise is collaboration. Work with partners where we’l…
S33
Collaborative AI Network – Strengthening Skills Research and Innovation — So that is definitely a public rail. And I know that in different parts of the world, there are many such rails being cr…
S34
AI-driven Cyber Defense: Empowering Developing Nations | IGF 2023 — Waqas Hassan:I’d like to add one thing to say, we would just start, and I said, she’s spoken about global cooperation as…
S35
The strategic imperative of open source AI — A similar dynamic unfolded in the 1990s as expensive, proprietary systems like commercial Unix and Microsoft Windows dom…
S36
China unveils new open-source operating system: reducing reliance on US technology — China’s first open-source desktop operating system, OpenKylin 1.0, was unveiled on 5 July, marking a significant milesto…
S37
DPI High-Level Session — He impressed upon the audience that even the largest tech corporations are increasingly dependent on open-source softwar…
S38
Generative AI presents the biggest data-risk challenge in history — Cybersecurity specialistswarnthat generative AI systems, such as large language models, are creating a data risk frontie…
S39
Decolonise Digital Rights: For a Globally Inclusive Future | IGF 2023 WS #64 — Ananya Singh:Yes, apparently it’s no longer oil, but it’s sunlight. Well, historically, the era of colonialism ushered i…
S40
Data first in the AI era — AI system, and particularly I’m thinking like large models like GPT-4, Lama, are trained on enormous data sets that are …
S41
https://dig.watch/event/india-ai-impact-summit-2026/ai-for-social-good-using-technology-to-create-real-world-impact — And I think that’s what we’re doing. And to give you another example of how it reduces the complexity, there’s a very in…
S42
https://dig.watch/event/india-ai-impact-summit-2026/shaping-ais-story-trust-responsibility-real-world-outcomes — Well, AI is… …energy -intense, especially now in the training phase. I think some of the data that are out there, it…
S43
WS #119 AI for Multilingual Inclusion — To achieve multilingual inclusion in AI, there is a need for innovation and local solutions. Communities should create t…
S44
Multistakeholder platform regulation and the Global South | IGF 2023 Town Hall #170 — Sunil Abraham:Thank you so much for that. And a special thanks to all my friends and colleagues at CGI.br I’m very grate…
S45
WS #208 Democratising Access to AI with Open Source LLMs — Abraham Fifi Selby: you’d like to answer. Yeah, I agree with you 100%. There is no competition in terms of this. Tha…
S46
To share or not to share: the dilemma of open source vs. proprietary Large Language Models — Melinda Claybaugh:Yeah, so I think we’ve all given some examples of benefits of open source technology. And I think I’ll…
S47
How Small AI Solutions Are Creating Big Social Change — When asked about platform competition, the speakers showed different perspectives. Aisha emphasized that it’s not a zero…
S48
WSIS Action Line C8: Multilingualism in the Digital Age: Inclusive Strategies for a People-Centered Information Society — Low to moderate disagreement level with high strategic significance. While speakers agreed on fundamental goals of lingu…
S49
How to ensure cultural and linguistic diversity in the digital and AI worlds? — Xianhong Hu:Thank you very much Mr. Ambassador. Good morning everyone. First of all please allow me, I’d like to be able…
S50
Agents of inclusion: Community networks & media meet-up | IGF 2023 — It amalgamates a variety of open-source applications which can be deployed offline. Notably, ‘Local’ can be implemented …
S51
The Future of Digital Agriculture: Process for Progress — Dejan Jakovljevic:Thank you so much. I will, while I quickly share my screen. Wonderful. So first of all, thanks again f…
S52
1 Introduction — In the case of R&D focused on Life sciences technologies/biotechnologies , the projects mostly deal more with the us…
S53
Positive disruption: Health and education in a digital age — Pathways for Prosperity Commissionpublishedtwo digital policy briefs onhealthandeducationthat provide guidelines for cou…
S54
Partnering on American AI Exports Powering the Future India AI Impact Summit 2026 — This comment demonstrates sophisticated understanding that ‘AI sovereignty’ isn’t a monolithic concept but represents di…
S55
Panel Discussion Data Sovereignty India AI Impact Summit — No, as you said, there are different ideas, different theories, different narratives going on in sovereign. Everybody ha…
S56
Technology Regulation and AI Governance Panel Discussion — And that’s one of the key words is sovereignty. From the government view, from the big companies view, we need to manage…
S57
Responsible AI for Shared Prosperity — The balance between open-source development and community sovereignty presents ongoing challenges. While open-source app…
S58
Inclusive AI_ Why Linguistic Diversity Matters — The conversation expanded to broader themes of cultural preservation, data sovereignty, and the balance between open-sou…
S59
Global dialogue on AI governance highlights the need for an inclusive, coordinated international approach — Global AI governance was the focus of a high-levelforumat the IGF 2024 in Riyadhthat brought together leaders from gover…
S60
Open-source tech shapes the future of global AI governance — As the world marks a decade since China introduced the idea of building a ‘community of shared future in cyberspace,’ th…
S61
1.1 CHALLENGES IN ENVIRONMENTAL INNOVATION — 1 ‘Imperfect appropriability of knowledge creation due to positive externalities: due to the non-rivalry nature of many …
S62
Hardware for Good: Scaling Clean Tech — Ann Mettler: Because I work on these issues also every single day. The challenge in clean tech and innovation is that…
S63
Table of Contents — Tutorial: The introduction of new technology to replace traditional systems can result in new systems being deployed wit…
S64
The impact of regulatory frameworks on the global digital communications industry — Ms Ellie Templeton is a Cyber Security Research Assistant at the Geneva Centre for Security Policy. She has an Internati…
S65
Inclusive AI For A Better World, Through Cross-Cultural And Multi-Generational Dialogue — Factors such as restricted access to computing resources and data further impede policy efficacy. Nevertheless, the cont…
S66
Day 0 Event #173 Building Ethical AI: Policy Tool for Human Centric and Responsible AI Governance — Chris Martin: Thanks, Ahmed. Well, everyone, I’ll walk through I think a little bit of this presentation here on what…
S67
Beneath the Shadows: Private Surveillance in Public Spaces | IGF 2023 — The debate centres around the issue of control and consent regarding users’ biometric and personal data. One perspective…
S68
Ethics and AI | Part 6 — Even if the Act itself does not make direct reference to “ethics”, it is closely tied to the broader context of ethical …
S69
Democratizing AI Building Trustworthy Systems for Everyone — “of course see there would be a number of challenges but i think as i mentioned that one doesn’t need to really control …
S70
WS #323 New Data Governance Models for African Nlp Ecosystems — This comment became a cornerstone for the rest of the discussion. Multiple speakers referenced this ownership vs. consen…
S71
WS #203 Protecting Children From Online Sexual Exploitation Including Livestreaming Spaces Technology Policy and Prevention — The disagreement level is moderate but significant for policy implications. While speakers largely agree on the severity…
S72
Policy Papers and Briefs – 1, 2014 — Based on these elements, two solutions can be envisaged: ‘software’ and ‘hardware’ inviolability.
S73
European Tech Sovereignty: Feasibility, Challenges, and Strategic Pathways Forward — Moderate disagreement with significant implications. The disagreements are not fundamental conflicts but represent diffe…
S74
INTERNET GOVERNANCE FOR DEVELOPMENT — – the importance of retaining policy space for developing countries with regard to the use of Free and Open Sourc…
S75
Global AI Policy Framework: International Cooperation and Historical Perspectives — -Sovereignty vs. Openness in AI Development: The concept of “open sovereignty” emerged as a key theme – the idea that co…
S76
Partnering on American AI Exports Powering the Future India AI Impact Summit 2026 — This comment demonstrates sophisticated understanding that ‘AI sovereignty’ isn’t a monolithic concept but represents di…
S77
Panel Discussion Data Sovereignty India AI Impact Summit — High level of consensus with complementary perspectives rather than conflicting viewpoints. The implications suggest a m…
S78
The perils of forcing encryption to say “AI, AI captain” | IGF 2023 Town Hall #28 — The analysis of the arguments reveals several important points regarding the use of technology in different contexts. On…
S79
AI and EDTs in Warfare: Ethics, Challenges, Trends | IGF 2023 WS #409 — We propose a session that will delve into the opportunities, challenges, and risks arising from the use of artificial in…
S80
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — Alex Moltzau: I want to address this with an anecdote. Because I am Norwegian, I feel partly responsible here. I mean, I…
S81
C O N T E N T S — shall lead to more innovative and practical Blockchain solutions. – v. Creation of Supportive Legal and Regulatory Fram…
S82
 Network Evolution: Challenges and Solutions  — Miguel González-Sancho:Okay, I am Miguel Gonzalez-Sancho. I am head of unit at the European Commission in DigiConnect of…
S83
1 Introduction — EUlevel research and innovation support policy increasingly focuses on a ‘ mission-oriented innovation policy ‘, i.e. a …
S84
Inclusive AI_ Why Linguistic Diversity Matters — “And the best part is that the device is offline”[46]. “Four models operational on that particular device, no mean feat”…
S85
Open Forum #38 Harnessing AI innovation while respecting privacy rights — 2. Privacy-Enhancing Technologies and Techniques Audience: Hello. Thank you so much for the presentation. I’m from N…
S86
Multistakeholder platform regulation and the Global South | IGF 2023 Town Hall #170 — Sunil Abraham:Thank you so much for that. And a special thanks to all my friends and colleagues at CGI.br I’m very grate…
S87
To share or not to share: the dilemma of open source vs. proprietary Large Language Models — Melinda Claybaugh:Yeah, so I think we’ve all given some examples of benefits of open source technology. And I think I’ll…
S88
Open Forum #70 the Future of DPI Unpacking the Open Source AI Model — Audience: My name is Satish and I have a long background in open source. I am presently part of ICANN and DotAsia organi…
S89
Open Forum #33 Building an International AI Cooperation Ecosystem — Participant: ≫ Distinguished guests, dear friends, it is a great honor to speak to you today on a topic that is reshapin…
S90
How Small AI Solutions Are Creating Big Social Change — When asked about platform competition, the speakers showed different perspectives. Aisha emphasized that it’s not a zero…
S91
How Multilingual AI Bridges the Gap to Inclusive Access — Bedir expanded Current AI’s focus beyond language to broader cultural preservation, recognizing that culture encompasses…
S92
How to ensure cultural and linguistic diversity in the digital and AI worlds? — Xianhong Hu:Thank you very much Mr. Ambassador. Good morning everyone. First of all please allow me, I’d like to be able…
S93
WSIS Action Line C8: Multilingualism in the Digital Age: Inclusive Strategies for a People-Centered Information Society — Low to moderate disagreement level with high strategic significance. While speakers agreed on fundamental goals of lingu…
S94
Agents of inclusion: Community networks & media meet-up | IGF 2023 — Moreover, Wagenrad expanded the applicability of her solar controllers beyond their conventional use. New prototype cont…
S95
The Future of Digital Agriculture: Process for Progress — Dejan Jakovljevic:Thank you so much. I will, while I quickly share my screen. Wonderful. So first of all, thanks again f…
S96
Positive disruption: Health and education in a digital age — Pathways for Prosperity Commissionpublishedtwo digital policy briefs onhealthandeducationthat provide guidelines for cou…
S97
State of Play: Chips / DAVOS 2025 — Kosmowski provides examples such as healthcare equipment (MRI and CAT scan machines) and fast food restaurant ordering s…
S98
https://dig.watch/event/india-ai-impact-summit-2026/inclusive-ai_-why-linguistic-diversity-matters — But as it’s working, I also would like you to imagine what you could do with it and where you could take it. And from my…
S99
Panel Discussion Data Sovereignty India AI Impact Summit — No, as you said, there are different ideas, different theories, different narratives going on in sovereign. Everybody ha…
S100
Panel #3: « Gouverner les données : entre souveraineté, éthique et sécurité à l’ère de l’interconnexion » — Drudeisha Madhub: Thank you very much for inviting me to the OIF. It has really been a lovely workshop since yesterday, it’s a beautiful …
S101
Technology Regulation and AI Governance Panel Discussion — And that’s one of the key words is sovereignty. From the government view, from the big companies view, we need to manage…
S102
Partnering on American AI Exports Powering the Future India AI Impact Summit 2026 — This comment demonstrates sophisticated understanding that ‘AI sovereignty’ isn’t a monolithic concept but represents di…
S103
Main Session on Artificial Intelligence | IGF 2023 — Audience:Okay. Hello, everybody. This is Hossein Mirzapour from data for governance lab for the record. Thank you for br…
S104
Opening address of the co-chairs of the AI Governance Dialogue — Tomas Lamanauskas: Thank you, thank you very much Charlotte indeed, and thank you everyone coming here this morning to j…
S105
Harnessing Collective AI for India’s Social and Economic Development — The discussion began with the moderator asking the audience whether they believed technology was reserved for the elite,…
S106
Open Internet Inclusive AI Unlocking Innovation for All — Anandan acknowledged the economic reality that makes open-source challenging: “if you invest a trillion dollars, you can…
S107
Digital Safety and Cyber Security Curriculum | IGF 2023 Launch / Award Event #71 — Moderator:perhaps. If she can launch it again. Steemed attendees, allow me to welcome you on behalf of the creators unio…
S108
Seeing, moving, living: AI’s promise for accessible technology — TheRYO bionic handdemonstrated what95% of natural movement looks like. Visitors watched it handle delicate objects, perf…
S109
(Plenary segment) Summit of the Future – General Assembly, 5th plenary meeting, 79th session — Nikos Christodoulides: Excellencies, distinguished colleagues, last year’s Summit marked the beginning of a new phase …
S110
Driving Indias AI Future Growth Innovation and Impact — And lastly, goes back to the same thing. And maybe I’ll use the same example. You know, we had the UPI of money. We need…
S111
AI 2.0 Reimagining Indian education system — So these are the fundamental shifts which we have witnessed post -COVID. And then if you look at the artificial intellig…
S112
India deploys AI to modernise its military operations — In amovereflecting its growing strategic ambitions,Indiais rapidly implementingAIacross its defence forces. The country’…
S113
Leaders’ Plenary | Global Vision for AI Impact and Governance- Afternoon Session — Although I did check, and I can gently point out that England remains just ahead of India in the ICC test rankings, so n…
S114
AI for Safer Workplaces & Smarter Industries Transforming Risk into Real-Time Intelligence — Naveen GV: out a long, lengthy form of information for that to be processed much later by another human…
S115
Founders Adda Raw Conversations with India’s Top AI Pioneers — A group photo was planned to conclude the session and maintain networking connections
S116
https://dig.watch/event/india-ai-impact-summit-2026/digital-democracy-leveraging-the-bhashini-stack-in-the-parliamen — mostly from my understanding and experience with the English that has happened, in the past. Yeah. interesting points, P…
S117
Webinar session — Darkwah contends that regardless of whether participants viewed the process as successful or not, everyone can identify …
S118
https://dig.watch/event/india-ai-impact-summit-2026/keynote-vishal-sikka — Thank you so much. Thank you so much. Wow, wonderful introduction and what an amazing event. I want to share three point…
S119
Day 0 Event #59 How to Develop Trustworthy Products and Policies — The project timeline was estimated at six months, though government integration requirements might extend this timeframe…
S120
The strategic shift toward open-source AI — The release of DeepSeek’s open-source reasoning model in January 2025, followed by the Trump administration’s July endor…
S121
OpenAI joins dialogue with the EU on fair and transparent AI development — The US AI company, OpenAI,has metwith the European Commission to discuss competition in the rapidly expanding AI sector….
S122
Bridging the AI innovation gap — LJ Rich, as the moderator, acknowledges and reinforces the invitation for new partners to join the collaborative AI for …
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Sushant Kumar
3 arguments · 109 words per minute · 999 words · 546 seconds
Argument 1
Inclusive AI must be personal, local, and multilingual to serve everyone
EXPLANATION
Sushant frames the need for AI systems that are tailored to individual users, work in local contexts, and support many languages so that no community is left behind. He links this vision to the broader goal of making AI work for everyone.
EVIDENCE
He opens the session by asking how to develop a paradigm that makes AI work for everyone and introduces the session titled “The case for personal, local and multilingual AI” [1-4]. He later emphasizes real-world impact, population reach, and a vision of AI not governed by any single country or company [12-15].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need for personal, local, multilingual AI is highlighted in discussions on linguistic diversity and offline, on-device models [S1], as well as broader calls for inclusive AI norms in the Global South [S24] and the role of multilingual AI in bridging access gaps [S30].
MAJOR DISCUSSION POINT
Personalized, locally relevant, multilingual AI
Argument 2
AI initiatives must move beyond pilots to real‑world impact that reaches whole populations.
EXPLANATION
Sushant stresses that India’s AI journey is no longer about experimental pilots or promises, but about delivering clear use cases at scale to the last mile, ensuring that AI benefits entire communities.
EVIDENCE
He notes that “India’s real journey is no longer about pilots or promises. It’s about populations’ reach, clear use cases, last mile delivery” and that this represents “real world impact” [12-13]. He also links this vision to a connected AI ecosystem that is not governed by a single country or company [15].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Several reports stress the shift from pilot projects to population-scale deployment for real impact, e.g., AI for agriculture and climate-resilient systems [S27][S28], and inclusive AI agendas calling for deployment at scale [S24].
MAJOR DISCUSSION POINT
Shift from pilot projects to scalable, population‑wide AI deployment
Argument 3
Offline, on‑device AI processing is essential for last‑mile deployment and resilience.
EXPLANATION
Sushant highlights that the prototype operates entirely offline, with all inference happening locally, which makes the technology usable in remote areas or during connectivity outages.
EVIDENCE
He points out that “the best part is that the device is offline” and that “all those queries, all the AI processing was happening on the device” [98-99].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Offline, on-device inference is presented as a core requirement for last-mile AI in inclusive AI discussions [S1] and in the Bhashini stack description [S10], reinforced by calls for resilient, connectivity-independent AI [S24].
MAJOR DISCUSSION POINT
Importance of offline capability for inclusive AI
Ayah Bdeir
6 arguments · 163 words per minute · 1650 words · 606 seconds
Argument 1
Current AI’s mission is to build public‑interest, multilingual AI that can compete with dominant players
EXPLANATION
Ayah explains that Current AI was created as a public‑private partnership to develop AI that serves the public interest, especially in multilingual contexts, and to offer an alternative to the large, profit‑driven tech firms. The mission is to rally a global community to create open, vertically integrated AI.
EVIDENCE
She describes Current AI’s origin at the AI Action Summit, its public-private partnership model, and its aim to tackle public-interest AI at scale while matching the financial and ambition scale of dominant companies [135-141]. She also notes the collaborative approach and grant-making to support partners [140-143].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The mission aligns with global inclusive AI agendas emphasizing public-interest, multilingual solutions [S24], and the importance of multilingual AI for equitable access [S30]; collaborative India-France initiatives also underline competitive, public-good AI development [S33].
MAJOR DISCUSSION POINT
Public‑interest, multilingual AI mission
Argument 2
Current AI co‑develops with partners and releases outcomes as public goods
EXPLANATION
Ayah outlines a collaboration model where Current AI works closely with partners to identify shared priorities, builds technology together, and then releases the results as open, public‑good resources. This approach is meant to democratize AI development.
EVIDENCE
She states that Current AI works with partners to learn about interests, zero in on a collaboration, build together, and release as a public good [41-44]. She further emphasizes the partnership model, grant-making, and the goal of creating public-interest technology [140-143].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The partnership model mirrors described collaborative approaches that identify common gaps, co-create technology, and release it publicly [S32], and reflects the strategic role of open-source in fostering innovation [S35].
MAJOR DISCUSSION POINT
Co‑development and public‑good release
Argument 3
Open‑source hardware empowers community innovation like Linux
EXPLANATION
Ayah draws a parallel between open‑source hardware and the Linux operating system, arguing that an open platform enables anyone to innovate and build on top of it, fostering a vibrant ecosystem. She reflects on her 15‑year experience in open‑source hardware.
EVIDENCE
She recounts 15 years in open-source hardware, noting its power to let people do what they want, comparing it to Linux’s impact [210-213].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The analogy to Linux and the broader strategic imperative of open-source AI are discussed in analyses of open-source ecosystems [S35][S36][S37].
MAJOR DISCUSSION POINT
Open‑source hardware as an innovation catalyst
Argument 4
Proprietary embodied AI devices risk uncontrolled data collection and Western‑centric training
EXPLANATION
Ayah expresses concern that consumer‑facing embodied AI (e.g., glasses, robots, voice assistants) are often closed, collect data continuously, and are trained primarily on Western languages, creating privacy and cultural bias risks. She argues that hardware control is the first line of defense.
EVIDENCE
She outlines the proliferation of embodied AI devices, their unknown training data, continuous recording, and bias toward Western languages, citing recent announcements like Meta’s facial-recognition glasses [195-206].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Risks of data-intensive generative AI and bias toward Western languages are highlighted in data-risk assessments [S38] and decolonisation of digital rights discussions [S39][S40]; privacy-preserving frameworks are also noted [S29].
MAJOR DISCUSSION POINT
Risks of closed embodied AI
DISAGREED WITH
Sushant Kumar, Andrew Tergis
Argument 5
Hope for cheaper, smaller, longer‑battery devices and distributed inference
EXPLANATION
Ayah envisions a future where the core AI hardware becomes more affordable, compact, and energy‑efficient, and can be networked together to provide distributed inference capabilities. She sees multiple pathways for improving the device and its applications.
EVIDENCE
She lists trajectories such as lowering cost, improving battery life, shrinking size, making the device beautiful, and creating mesh networks or larger stationary versions with solar power [215-221][222-226].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Future hardware trajectories emphasizing lower cost, energy efficiency, and mesh networking are mentioned in offline multilingual AI visions [S1] and energy-intensity considerations for AI hardware [S42][S41].
MAJOR DISCUSSION POINT
Future hardware improvements and distributed AI
Argument 6
Aim to inspire community to create personal, multilingual AI solutions for diverse problems
EXPLANATION
Ayah concludes by urging participants to expand their imagination, build personal multilingual AI applications, and contribute to an open‑source ecosystem that can address a wide range of community needs. She frames the challenge as the beginning of a broader journey.
EVIDENCE
She calls for expanding imagination, making personal multilingual AI, and notes that the hardware could evolve into software or data solutions, emphasizing the start of a journey [425].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Calls for community-driven multilingual AI solutions are echoed in multilingual inclusion workshops [S30][S43] and inclusive AI policy discussions [S24].
MAJOR DISCUSSION POINT
Inspiring community‑driven AI creation
Amitabh Nag
6 arguments, 164 words per minute, 814 words, 297 seconds
Argument 1
Bhashini aims to bring offline, language‑agnostic AI to the last mile
EXPLANATION
Amitabh describes Bhashini’s goal of delivering AI that works without connectivity, is portable, and can serve remote users, thereby extending AI reach to the “last mile.” He highlights the device’s small form factor and offline capability as key to inclusion.
EVIDENCE
He notes the device’s small size, portability, and offline operation, which enable use anywhere, especially at the last mile [163-168].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Bhashini’s offline, language-agnostic design is documented in the Bhashini stack overview [S10] and in broader inclusive AI narratives stressing offline capability for last-mile reach [S1][S24].
MAJOR DISCUSSION POINT
Offline, last‑mile AI delivery
DISAGREED WITH
Abhishek Singh
Argument 2
Bhashini supports 22+ languages, recently added the tribal Bheeli language without script
EXPLANATION
Amitabh reports that Bhashini currently covers 22 languages, with a total of 36 language models, and has recently digitized the tribal Bheeli language, which previously lacked a written script, demonstrating a commitment to linguistic breadth.
EVIDENCE
He cites coverage of 22 languages (36 in total, including text languages) and the addition of the script-less tribal Bheeli language [170-176].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The expansion of language coverage, including low-resource and script-less languages, aligns with multilingual AI inclusion efforts described in community-centric language projects [S30] and multilingual inclusion sessions [S43].
MAJOR DISCUSSION POINT
Expanding language coverage
Argument 3
Plans to shrink form factor, expand language breadth, enrich models, and enable mesh networking
EXPLANATION
Amitabh outlines a roadmap that includes making the device even smaller, adding more languages, continuously enriching model quality, and connecting multiple devices in a mesh to create distributed inference capabilities. These steps aim to broaden inclusion and technical capability.
EVIDENCE
He discusses the near-final form factor, plans for smaller devices, language breadth expansion, model enrichment, and mesh networking for distributed inference [164-169][170-176][180-186][222-226].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Future roadmap items such as smaller devices, broader language support, and mesh networking are discussed in offline multilingual AI visions [S1] and in collaborative India-France AI research plans [S33].
MAJOR DISCUSSION POINT
Future scaling and networking
Argument 4
Current operation handles 15 million daily inferences, showing scalability
EXPLANATION
Amitabh shares operational metrics indicating that Bhashini’s platform processes around 15 million inference requests per day on a 200‑GPU system, demonstrating that the solution can operate at large scale.
EVIDENCE
He mentions running about 15 million inferences a day with a 200-GPU system and real-time monitoring dashboards [128-129].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Scalable AI deployments moving beyond pilots are highlighted in agriculture and climate-resilient AI case studies [S27][S28].
MAJOR DISCUSSION POINT
Demonstrated scalability
Argument 5
Bhashini will provide quantization expertise and technical support for participants
EXPLANATION
Amitabh commits Bhashini’s team to continue supporting the innovation challenge by offering their quantization know‑how and technical assistance for model enrichment, ensuring participants can build on the prototype effectively.
EVIDENCE
He states that Bhashini will continue supporting quantization mechanisms and provide technical help for model enrichment as part of the joint effort [424].
MAJOR DISCUSSION POINT
Technical support for the challenge
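The quantization expertise mentioned here is not detailed in the session; as a generic illustration, the textbook form of the technique is symmetric int8 post-training quantization, which maps float weights to 8-bit integers under a shared scale. The sketch below is a minimal, hypothetical example of that generic technique, not Bhashini's actual method.

```python
# Minimal sketch of symmetric int8 post-training quantization -- the generic
# technique behind shrinking models to fit on-device. Illustrative only;
# Bhashini's actual quantization pipeline is not described in this session.

def quantize_int8(weights):
    """Map float weights to int8 values using a single symmetric scale."""
    max_abs = max(abs(w) for w in weights)
    scale = max_abs / 127 if max_abs else 1.0
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the int8 values."""
    return [v * scale for v in q]
```

The round-trip error per weight is bounded by half the scale, which is why careful scale selection can keep accuracy close to the original model while cutting memory roughly 4x versus float32.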
Argument 6
Continuous model enrichment and contextualisation are needed to deepen language coverage and improve accuracy.
EXPLANATION
Amitabh describes ongoing work to enrich existing language models with glossaries, contextual data, and domain‑specific knowledge, ensuring that the AI becomes more accurate and relevant for diverse use cases.
EVIDENCE
He mentions “we are looking at breadth, depth, offline form factor as the four things which will move forward” and details efforts such as building glossaries for 16 lakh (1.6 million) place names and contextualisation activities [180-186].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Enriching models with community-sourced linguistic knowledge is emphasized in multilingual AI inclusion workshops [S30] and local solution initiatives [S43].
MAJOR DISCUSSION POINT
Depth and quality improvement of multilingual models
Andrew Tergis
4 arguments, 161 words per minute, 546 words, 202 seconds
Argument 1
Device runs inference offline, supports multiple models, and enables a vision‑impaired use case
EXPLANATION
Andrew demonstrates that the prototype can perform on‑device inference without internet, host several AI models, and power a specific application for vision‑impaired users that combines speech, translation, image understanding, and audio output. This showcases the device’s versatility and accessibility.
EVIDENCE
He explains that the device is designed for any user or use case, runs inference locally, and describes a vision-impaired application that uses ASR, translation, LLM, and TTS to answer questions in the user’s native language [53-56][58-62]. Later he notes that all AI processing happens offline and that four or five models are operational on the hardware [98-101].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Offline, multi-model inference for accessibility mirrors the offline multilingual AI prototypes described in inclusive AI discussions [S1] and the Bhashini offline stack [S10][S24].
MAJOR DISCUSSION POINT
Offline, multi‑model inference with accessibility use case
DISAGREED WITH
Ayah Bdeir, Sushant Kumar
Argument 2
Hardware built on Jetson but platform‑agnostic, allowing any model deployment
EXPLANATION
Andrew clarifies that while the current prototype uses the NVIDIA Jetson platform, the software architecture is not tied to it, enabling deployment of models on other hardware platforms in the future. This design choice promotes flexibility and broader adoption.
EVIDENCE
He states that the prototype is currently built on the Jetson platform (an NVIDIA product, though referred to as Intel in the session) and that the processing does not depend on it, allowing support for other platforms [88-90].
MAJOR DISCUSSION POINT
Platform‑agnostic hardware design
Argument 3
Five‑week rapid build demonstrates feasibility of joint open‑hardware effort
EXPLANATION
Andrew highlights that the prototype was conceived, designed, and built within roughly five weeks, illustrating that a tight, collaborative effort between Current AI and Bhashini can quickly produce functional open‑hardware. This rapid timeline validates the collaborative model.
EVIDENCE
He notes that the project was undertaken in roughly five weeks (stated as six weeks in the session) and that this is the first collaborative build between the two organisations, now being demonstrated live [34-35][46-47].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Rapid collaborative hardware development aligns with described partnership-driven co-creation models [S32] and the strategic role of open-source in accelerating innovation [S35].
MAJOR DISCUSSION POINT
Fast, collaborative hardware development
Argument 4
The prototype proves that AI can function in zero‑connectivity settings, enabling use in remote or disaster‑affected locations.
EXPLANATION
Andrew demonstrates that the device runs inference locally without any network connection, allowing users to run applications such as vision‑impaired assistance even when internet is unavailable.
EVIDENCE
He explains that the device “runs inference locally in their hand” and that “all those queries, all the AI processing was happening on the device” [55][98-99]. He also notes that the hardware is platform-agnostic, supporting deployment on various processors, which further enhances its suitability for offline use [88-90].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Deployments that function without connectivity are advocated in scaling AI for agriculture and climate resilience [S27][S28] and in inclusive AI frameworks for remote contexts [S24].
MAJOR DISCUSSION POINT
Offline AI for remote and emergency contexts
Shalindra Pal Singh
1 argument, 136 words per minute, 108 words, 47 seconds
Argument 1
Integrated pipeline (ASR → translation → LLM → TTS) retains accuracy thanks to quantization
EXPLANATION
Shalindra explains the end‑to‑end processing chain of the device, from automatic speech recognition through neural machine translation, large language model inference, and text‑to‑speech synthesis, and notes that careful quantization allowed the models to fit on‑device without sacrificing accuracy.
EVIDENCE
Shalindra describes the sequence of ASR, translation, LLM response, and TTS, and mentions that their quantization approach avoided the usual accuracy trade-off [70-72].
MAJOR DISCUSSION POINT
Quantized pipeline with maintained accuracy
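The chain Shalindra describes can be sketched as four composed stages running entirely on-device. The sketch below is a hypothetical illustration: the stage callables (`asr`, `translate`, `llm`, `tts`) are placeholders to show the data flow, not Bhashini's actual APIs.

```python
# Hedged sketch of the on-device chain described in the session:
# ASR -> translation -> LLM -> translation back -> TTS.
# All stage functions are hypothetical placeholders, not Bhashini's APIs.

from dataclasses import dataclass
from typing import Callable, Tuple

@dataclass
class PipelineResult:
    transcript: str      # ASR output in the speaker's language
    english_query: str   # query translated for the LLM
    english_answer: str  # LLM response
    local_answer: str    # answer translated back for TTS

def run_pipeline(audio: bytes,
                 asr: Callable[[bytes], str],
                 translate: Callable[[str, str, str], str],
                 llm: Callable[[str], str],
                 tts: Callable[[str], bytes],
                 lang: str = "hi") -> Tuple[PipelineResult, bytes]:
    """Run every stage locally; no step requires connectivity."""
    transcript = asr(audio)
    english_query = translate(transcript, lang, "en")
    english_answer = llm(english_query)
    local_answer = translate(english_answer, "en", lang)
    return PipelineResult(transcript, english_query,
                          english_answer, local_answer), tts(local_answer)
```

Because each stage is just a callable, any of the four models can be swapped or re-quantized independently, which matches the platform-agnostic design discussed above.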
Anne Bouverot
3 arguments, 157 words per minute, 1107 words, 422 seconds
Argument 1
Trusted third parties are needed to manage privacy‑preserving data sharing
EXPLANATION
Anne argues that for sensitive data—especially health data—privacy can be protected if a trusted, neutral third party holds and governs the data, enabling controlled sharing for research while preventing misuse. She stresses the need for institutional trust and clear governance mechanisms.
EVIDENCE
She discusses the necessity of a trusted third party to ensure privacy-preserving data sharing, giving examples of health data use, consent, and the balance between research benefit and potential commercial exploitation [330-342].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The necessity of trusted intermediaries for privacy-preserving data sharing is discussed in health-AI privacy frameworks [S29] and data-consent guidelines [S40].
MAJOR DISCUSSION POINT
Role of trusted intermediaries for privacy
DISAGREED WITH
Abhishek Singh
Argument 2
Artists need opt‑out and compensation mechanisms for cultural data used in AI
EXPLANATION
Anne points out that creators should retain control over their works when those works are used to train AI models, proposing opt‑out rights and compensation to reconcile cultural preservation with commercial exploitation. She highlights the tension between collective cultural representation and individual creator rights.
EVIDENCE
She notes that artists may demand compensation or the right to oppose the use of their creations in AI training, and suggests mechanisms such as opt-out and differentiated treatment for historical versus contemporary works [318-324].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Calls for cultural data governance, opt-out rights, and compensation for creators appear in decolonising digital rights debates [S39] and data-rights discussions [S40].
MAJOR DISCUSSION POINT
Protecting artists’ rights in AI training data
Argument 3
Joint research on multilingual, resilient AI design strengthens both countries
EXPLANATION
Anne describes ongoing and future joint research initiatives between India and France on multilingual, resilient AI, emphasizing that collaboration leverages complementary strengths and contributes to shared innovation and global leadership in AI.
EVIDENCE
She references the year of joint innovation, collaborative work on multilingual AI, resilient design, and joint research as priority areas for India-France cooperation [391-396].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
India-France collaborative AI research on multilingual, resilient systems is highlighted in partnership reports [S33] and inclusive AI cooperation agendas [S24][S30].
MAJOR DISCUSSION POINT
India‑France joint AI research
Abhishek Singh
7 arguments, 182 words per minute, 2010 words, 659 seconds
Argument 1
Diverse datasets must capture cultural practices and indigenous knowledge
EXPLANATION
Abhishek stresses that AI models need training data that reflect local customs, traditional knowledge, and indigenous practices to avoid mis‑classification and ensure culturally appropriate outcomes. He cites real‑world examples where lack of such data leads to errors.
EVIDENCE
He recounts a Netflix documentary showing tribal women annotating pest data, illustrating how local knowledge can differ from generic labels and why such cultural context must be incorporated into datasets [281-295].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The importance of incorporating indigenous knowledge and culturally diverse data is emphasized in decolonising digital rights dialogues [S39] and community-driven multilingual AI projects [S30].
MAJOR DISCUSSION POINT
Incorporating cultural and indigenous knowledge into data
Argument 2
Communities should retain rights over their data and benefit from its utilization
EXPLANATION
Abhishek argues that data originating from communities must be governed by community standards, ensuring that the data is used ethically and that the community gains tangible benefits, whether through services or compensation.
EVIDENCE
He emphasizes the need for community involvement, standards rooted in local culture, and benefit sharing when data is used, especially to avoid commercial exploitation without community gain [299-304].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Community data rights and benefit-sharing are discussed in privacy frameworks for health data [S29] and broader data-consent principles [S40], as well as in decolonisation contexts [S39].
MAJOR DISCUSSION POINT
Community data rights and benefit sharing
DISAGREED WITH
Anne Bouverot
Argument 3
Community‑driven standards are essential to protect privacy and ensure ethical use
EXPLANATION
Abhishek highlights that beyond technical safeguards, culturally informed community standards are required to safeguard privacy and guide ethical AI deployment, especially when data sensitivity varies across sectors.
EVIDENCE
He notes the importance of technical and community standards, citing the need for context-specific rules for health versus agricultural data, and the risk of misuse without proper standards [306-310].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need for culturally grounded standards to safeguard privacy aligns with data-rights and decolonisation discussions [S39][S40].
MAJOR DISCUSSION POINT
Need for culturally grounded standards
Argument 4
Sovereignty means full control over the entire AI stack—from chips to applications
EXPLANATION
Abhishek defines AI sovereignty as a nation’s ability to control every layer of the AI ecosystem, from hardware (chips) through data centers, models, and applications, thereby avoiding dependence on external providers.
EVIDENCE
He outlines the five layers (energy, data centers, chips, models, applications) and asserts that full control over these layers constitutes sovereignty, noting that most countries lack such complete control [368-373].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
AI sovereignty and control over the full stack are examined in analyses of AI race dynamics and national self-reliance [S31] and in policy calls for sovereign AI development [S24].
MAJOR DISCUSSION POINT
Definition of AI sovereignty
Argument 5
Nations need independent AI infrastructure to avoid dependence on external providers
EXPLANATION
Abhishek observes that while India has many components (energy, data centers, models), it still lacks full end‑to‑end control, and calls for building domestic chip‑fabrication and procurement capabilities to achieve true AI independence.
EVIDENCE
He points out that no country fully controls the entire stack, describes India’s current capabilities and gaps (e.g., chip design, future fab), and argues that choosing one’s own hardware and procurement pathways is essential for sovereignty [374-382].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Calls for national AI infrastructure and reduced dependence on foreign providers are featured in AI race and sovereignty literature [S31] and inclusive AI policy recommendations [S24].
MAJOR DISCUSSION POINT
Building national AI self‑reliance
Argument 6
India and France can combine complementary strengths to shape global AI norms
EXPLANATION
Abhishek highlights the long‑standing partnership between India and France, noting recent joint initiatives and expressing confidence that their combined expertise can help define inclusive, culturally aware AI standards for the world.
EVIDENCE
He references past collaborations, the France Action Summit, upcoming joint events, and the belief that the partnership can produce a model for global AI governance [401-403].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Strategic India-France AI collaboration is highlighted in joint innovation network reports [S33] and broader cooperative AI governance agendas [S24].
MAJOR DISCUSSION POINT
Strategic India‑France AI partnership
Argument 7
Launch of an open‑source challenge with prize funding to hack the prototype device
EXPLANATION
Abhishek announces the India AI Innovation Challenge, an open‑source competition offering prize money to developers who build applications or improvements on the Bhashini‑Current AI prototype, aiming to catalyze community‑driven innovation.
EVIDENCE
He details the challenge’s open-source nature, prize funding, submission timeline (starting 25 Feb), and its goal to inspire diverse solutions for remote or disaster-affected areas [418-424].
MAJOR DISCUSSION POINT
AI Innovation Challenge launch
DISAGREED WITH
Ayah Bdeir
Martin Tisne
1 argument, 221 words per minute, 1206 words, 326 seconds
Argument 1
Balancing open‑source innovation with controlled cultural data governance is crucial
EXPLANATION
Martin raises the question of how to reconcile the openness of open‑source AI development with the need for controlled governance of cultural data, emphasizing that both approaches must be balanced to protect community interests while fostering innovation.
EVIDENCE
He asks directly about the balance between open-source components and a more controlled approach to cultural data governance [329-330].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The tension between open-source development and cultural data governance is discussed in open-source strategy analyses [S35] and decolonising digital rights debates on data control [S39][S40].
MAJOR DISCUSSION POINT
Tension between open‑source and cultural data control
DISAGREED WITH
Anne Bouverot, Abhishek Singh
Announcer
1 argument, 152 words per minute, 40 words, 15 seconds
Argument 1
The Global Innovation Challenge should involve key leaders to ensure broad participation and impact.
EXPLANATION
The Announcer calls on Abhishek Singh, Ayah Bdeir, and Amitabh Nag to stay on stage and help launch the challenge, emphasizing that their involvement is crucial for a successful, inclusive competition.
EVIDENCE
The announcement says, “Abhishek Singh sir, I request you to stay on stage… And Aya, we’d love to have you on to launch the Global Innovation Challenge… And Amitabh Nag sir as well” [404-408].
MAJOR DISCUSSION POINT
Call for inclusive leadership in the innovation challenge
Device
1 argument, 113 words per minute, 11 words, 5 seconds
Argument 1
The device can identify objects and provide multilingual audio feedback, illustrating practical AI applications.
EXPLANATION
When queried, the device correctly lists the candy wrappers on the table, showing its ability to perform visual recognition and generate a spoken response in the user’s language.
EVIDENCE
The device responds, “The table has candy wrappers of Twix, Milky Way, and KitKat” [79].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Multilingual, offline object recognition demos are cited in inclusive AI prototypes that run locally [S1] and in multilingual AI inclusion workshops emphasizing practical applications [S30].
MAJOR DISCUSSION POINT
Demonstration of multimodal, multilingual AI capability
Agreements
Agreement Points
Offline, on‑device AI is essential for last‑mile deployment and resilience
Speakers: Sushant Kumar, Andrew Tergis, Amitabh Nag, Ayah Bdeir
Sushant highlights that the device operates entirely offline, enabling use anywhere [98-99]. Andrew notes that inference runs locally on the handheld device [55][98-99]. Amitabh stresses the offline capability as a key inclusion factor [163-168]. Ayah envisions future devices that are low-cost, energy-efficient and can operate without constant connectivity [215-221].
All speakers emphasized that running AI inference locally without internet is crucial for reaching remote users and ensuring resilience [98-99][55][163-168][215-221].
AI must be multilingual and locally relevant to serve diverse communities
Speakers: Sushant Kumar, Ayah Bdeir, Amitabh Nag, Andrew Tergis, Abhishek Singh
Sushant frames the need for personal, local and multilingual AI [1-4]. Ayah describes Current AI’s mission to build public-interest multilingual AI [135-141]. Amitabh reports support for 22+ languages and the addition of the tribal Bheeli language [170-176]. Andrew demonstrates a multilingual application for vision-impaired users [58-62]. Abhishek stresses the importance of cultural and linguistic data for AI accuracy [281-295].
The participants agreed that supporting many languages and tailoring AI to local contexts is vital to avoid leaving any community behind [1-4][135-141][170-176][58-62][281-295].
POLICY CONTEXT (KNOWLEDGE BASE)
The emphasis on linguistic diversity and local relevance mirrors the Inclusive AI discussion on why linguistic diversity matters and the need for culturally appropriate data, as highlighted in Inclusive AI_ Why Linguistic Diversity Matters [S58] and the data-governance models for African NLP ecosystems [S70].
Open‑source, collaborative development and release as a public good
Speakers: Ayah Bdeir, Andrew Tergis, Sushant Kumar, Amitabh Nag
Ayah outlines a co-development model that releases outcomes as public goods [41-44]. Andrew calls the prototype the first collaborative build between Current AI and Bhashini [45-46]. Sushant introduces the project as a seminal open-source AI hardware device [4]. Amitabh announces an open-source innovation challenge with technical support [424].
All highlighted that the hardware and software should be open-source, co-created with partners and made freely available as a public good [41-44][45-46][4][424].
POLICY CONTEXT (KNOWLEDGE BASE)
Open-source as a public-good aligns with the balance between open-source development and community sovereignty noted in Responsible AI for Shared Prosperity [S57], the role of open-source tech in shaping global AI governance [S60], and calls for FOSS policy space for developing countries [S74].
Community/data sovereignty and benefit‑sharing are required for ethical AI
Speakers: Abhishek Singh, Anne Bouverot, Martin Tisne, Ayah Bdeir
Abhishek argues that communities must retain rights over their data and receive benefits [299-304]. Anne stresses the need for trusted third parties to manage privacy-preserving data sharing [330-342]. Martin raises the need to balance open-source innovation with controlled cultural data governance [329-330]. Ayah warns about uncontrolled data collection and Western-centric training in embodied AI [195-206].
There is a shared view that data originating from communities should be governed with rights, reciprocity and trusted oversight to protect privacy and cultural integrity [299-304][330-342][329-330][195-206].
POLICY CONTEXT (KNOWLEDGE BASE)
Community data sovereignty and benefit-sharing are reinforced by the African NLP data-governance framework [S70], the Responsible AI for Shared Prosperity analysis of community rights over cultural data [S57], and ethical AI policy tools referencing EU Trustworthy AI guidelines [S66][S68].
Future hardware should become cheaper, smaller, energy‑efficient and networkable
Speakers: Ayah Bdeir, Amitabh Nag
Ayah envisions lower cost, better battery life, smaller size and mesh networking for distributed inference [215-226]. Amitabh outlines plans for a smaller form factor and mesh networking of multiple devices [164-169][222-226].
Both speakers share a vision of evolving the device into a more affordable, compact, low-power platform that can be linked in a mesh for distributed AI [215-226][164-169][222-226].
POLICY CONTEXT (KNOWLEDGE BASE)
The push for affordable, energy-efficient hardware echoes the clean-tech scaling challenges and goals for cheaper, smaller devices described in Hardware for Good: Scaling Clean Tech [S62] and broader innovation spillover considerations [S61].
Similar Viewpoints
Both see offline, on‑device AI as essential for reaching users without connectivity [98-99][55].
Speakers: Sushant Kumar, Andrew Tergis
Sushant stresses offline operation as a key inclusion factor [98-99]. Andrew confirms that inference runs locally on the handheld device [55][98-99].
Both emphasize linguistic diversity as a core requirement for inclusive AI [135-141][170-176].
Speakers: Ayah Bdeir, Amitabh Nag
Ayah describes Current AI’s multilingual mission [135-141]. Amitabh reports support for 22+ languages and the addition of tribal Bheeli [170-176].
Both advocate for community‑centric data governance that protects rights and ensures benefit sharing [299-304][330-342].
Speakers: Anne Bouverot, Abhishek Singh
Anne calls for trusted third parties to ensure privacy-preserving data sharing [330-342]. Abhishek argues that communities must retain rights and benefit from data use [299-304].
Both recognize a tension between openness and the need to protect cultural/creative rights [329-330][318-324].
Speakers: Martin Tisne, Anne Bouverot
Martin asks how to balance open-source AI with controlled cultural data governance [329-330]. Anne describes the tension between open innovation and artists’ rights to opt out or be compensated [318-324].
Unexpected Consensus
Hardware control as a means to safeguard privacy and promote inclusive AI
Speakers: Ayah Bdeir, Sushant Kumar
Ayah warns that closed embodied AI devices collect data continuously and are trained on Western languages, posing privacy and cultural bias risks [195-206]. Sushant celebrates an open-source, offline hardware platform that puts control in users’ hands and avoids reliance on external providers [4][98-99].
Despite Ayah’s cautionary stance on existing proprietary devices and Sushant’s promotional tone for a new open device, both converge on the idea that controlling the hardware layer is crucial for privacy, data sovereignty and inclusive deployment [195-206][4][98-99].
POLICY CONTEXT (KNOWLEDGE BASE)
Advocating hardware control for privacy reflects arguments for user control to prevent data misuse in surveillance debates [S67] and the policy notion of hardware inviolability [S72], which also ties into tech-sovereignty discussions [S73].
Overall Assessment

The discussion shows strong convergence among speakers on four pillars: (1) offline, on‑device AI for last‑mile reach; (2) multilingual, locally relevant AI; (3) open‑source collaborative development released as a public good; (4) robust community‑centric data governance and sovereignty. Additional shared visions include future hardware miniaturisation and mesh networking.

High consensus – the majority of participants align on the same strategic directions, indicating a solid foundation for coordinated policy and technical actions to advance inclusive, multilingual, and privacy‑preserving AI.

Differences
Different Viewpoints
How to balance open‑source AI development with cultural data governance and ownership
Speakers: Martin Tisne, Anne Bouverot, Abhishek Singh
Balancing open‑source innovation with controlled cultural data governance is crucial
Trusted third parties are needed to manage privacy‑preserving data sharing
Communities should retain rights over their data and benefit from its utilization
Martin asks how to reconcile the openness of open-source AI with the need for controlled governance of cultural data [329-330]. Anne responds that privacy-preserving data sharing should be overseen by trusted third parties and that artists need opt-out and compensation mechanisms [318-324][330-342]. Abhishek argues that community-driven standards and benefit-sharing are essential, emphasizing context-specific rules and community involvement [299-304][306-310]. All agree that cultural data needs protection but propose different mechanisms (trusted institutions vs community standards vs opt-out rights).
POLICY CONTEXT (KNOWLEDGE BASE)
The tension between open-source AI and cultural data governance is a core issue in Responsible AI for Shared Prosperity [S57], the Inclusive AI focus on linguistic and cultural rights [S58], and the open-sovereignty framing in the Global AI Policy Framework [S75].
Approach to data sharing reciprocity and benefit‑sharing with communities
Speakers: Abhishek Singh, Anne Bouverot
Communities should retain rights over their data and benefit from its utilization
Trusted third parties are needed to manage privacy‑preserving data sharing
Abhishek stresses that data about a community must be governed by community standards and that the community should receive tangible benefits, especially in sectors like agriculture or health [299-304][306-310]. Anne emphasizes that a trusted, neutral third party should hold and govern data to ensure privacy while enabling research, suggesting institutional trust rather than direct community control [330-342]. Both aim for ethical data use but differ on whether control resides primarily with the community or with an external trusted entity.
POLICY CONTEXT (KNOWLEDGE BASE)
Debates on reciprocity and benefit-sharing draw on the African NLP data-governance model that emphasizes ownership vs consent and community benefit [S70] and ethical AI governance tools that stress benefit-sharing mechanisms [S66].
Definition and implementation of AI sovereignty
Speakers: Abhishek Singh, Amitabh Nag
Sovereignty means full control over the entire AI stack—from chips to applications
Bhashini aims to bring offline, language‑agnostic AI to the last mile
Abhishek defines AI sovereignty as complete national control over all five layers of the AI stack (energy, data centres, chips, models, applications) and calls for domestic chip fabrication and procurement [368-373][374-382]. Amitabh focuses on delivering an offline, portable device with expanding language coverage and model enrichment, without addressing full-stack control [163-168][170-176]. Their visions share the goal of self-reliance but differ on the scope: full technological independence versus targeted offline language solutions.
POLICY CONTEXT (KNOWLEDGE BASE)
The concept of AI sovereignty is explored in multiple policy analyses, including European Tech Sovereignty’s sovereignty-vs-openness debate [S73], the ‘open sovereignty’ approach in the Global AI Policy Framework [S75], and varied national perspectives on AI sovereignty [S76][S77].
Concern about embodied AI devices versus promotion of offline, on‑device AI
Speakers: Ayah Bdeir, Sushant Kumar, Andrew Tergis
Proprietary embodied AI devices risk uncontrolled data collection and Western‑centric training
Offline, on‑device AI processing is essential for last‑mile deployment and resilience
Device runs inference offline, supports multiple models, and enables a vision‑impaired use case
Ayah warns that closed embodied AI (glasses, robots, voice assistants) continuously record data, are trained on Western languages, and pose privacy risks [195-206]. In contrast, Sushant highlights that the demonstrated prototype operates entirely offline, enabling use in remote or disaster settings [98-99][163-168]. Andrew reinforces the offline, multi-model capability of the device, emphasizing its suitability for zero-connectivity environments [55-56][98-99]. The disagreement lies in focus: Ayah cautions against closed, cloud-dependent AI, while the others champion offline, open hardware as a solution.
Details of prize funding for the Innovation Challenge
Speakers: Ayah Bdeir, Abhishek Singh
“I just say for Amitabh because maybe he tried to say, Bhashini is offering, I think, a $110,000 prize to the winners”
Launch of an open‑source challenge with prize funding to hack the prototype device
Ayah mentions a possible $110,000 prize for winners of the challenge [425]. Abhishek announces the India AI Innovation Challenge, noting funding from Bhashini and Current AI but does not specify the amount, only describing “handsome reward” [419-424]. The mismatch in prize amount details reflects a disagreement or lack of alignment on the exact funding commitment.
Unexpected Differences
Prize amount ambiguity for the Innovation Challenge
Speakers: Ayah Bdeir, Abhishek Singh
I just say for Amitabh because maybe he tried to say Bhashini is offering, I think, a $110,000 prize to the winners. Launch of an open‑source challenge with prize funding to hack the prototype device.
Ayah references a specific $110,000 prize figure, while Abhishek’s announcement mentions a “handsome reward” funded by Bhashini and Current AI without quantifying it, indicating an unexpected lack of alignment on the exact prize amount [425][419-424].
Different emphases on hardware control versus software openness
Speakers: Ayah Bdeir, Andrew Tergis
Open‑source hardware empowers community innovation like Linux. The device runs inference offline, supports multiple models, and enables a vision‑impaired use case.
Ayah stresses the strategic importance of open-source hardware as a foundational platform for innovation, while Andrew focuses on the specific functional capabilities of the prototype without explicitly addressing the broader hardware openness, revealing a subtle divergence in priorities that was not anticipated [210-213][55-56].
POLICY CONTEXT (KNOWLEDGE BASE)
The hardware-vs-software emphasis reflects the software and hardware inviolability solutions discussion [S72] and the broader sovereignty versus openness tension in European tech policy [S73], as well as the ‘open sovereignty’ third-way model [S75].
Overall Assessment

The discussion revealed several substantive disagreements: (1) the best mechanism for governing cultural data within open‑source AI (trusted third parties vs community standards vs opt‑out rights); (2) how reciprocity and benefit‑sharing should be structured; (3) the scope of AI sovereignty, ranging from full stack national control to targeted offline language solutions; (4) contrasting concerns about closed embodied AI versus promotion of offline, open hardware; and (5) unclear details on prize funding for the Innovation Challenge. While participants shared a common vision of inclusive, multilingual, and privacy‑preserving AI, they diverged on governance models, implementation pathways, and concrete funding commitments.

The level of disagreement is moderate to high – the core vision is shared, but the lack of consensus on data governance, sovereignty, and funding details could impede coordinated policy or collaborative action unless reconciled. These disagreements highlight the need for clearer frameworks that balance open‑source innovation with cultural data rights and national AI autonomy.

Partial Agreements
All three agree that AI should be accessible and respect user privacy, but Ayah stresses avoiding closed, cloud‑dependent devices, while Sushant and Andrew promote an offline, open‑hardware solution as the means to achieve that goal [195-206][98-99][55-56].
Speakers: Ayah Bdeir, Sushant Kumar, Andrew Tergis
Proprietary embodied AI devices risk uncontrolled data collection and Western‑centric training. Offline, on‑device AI processing is essential for last‑mile deployment and resilience. The device runs inference offline, supports multiple models, and enables a vision‑impaired use case.
Both emphasize ethical data handling and the need for safeguards, yet Anne proposes institutional trusted intermediaries, whereas Abhishek advocates community‑driven standards and benefit‑sharing mechanisms [330-342][299-304].
Speakers: Anne Bouverot, Abhishek Singh
Trusted third parties are needed to manage privacy‑preserving data sharing. Communities should retain rights over their data and benefit from its utilization.
Takeaways
Key takeaways
Personal, local, multilingual AI is essential for inclusive access; AI must work offline and be adaptable to any language or community.
Current AI’s mission is to create public‑interest, multilingual AI that can compete with dominant proprietary platforms.
Bhashini’s prototype demonstrates offline, on‑device inference with a full pipeline (ASR → translation → LLM → TTS) and supports vision‑impaired use cases.
Quantization techniques allowed a high‑fidelity LLM to run on a handheld device without noticeable loss of accuracy.
The hardware is built on Jetson but is platform‑agnostic, enabling any model deployment and future form‑factor reductions.
Collaboration between Current AI and Bhashini follows an open‑source philosophy: co‑develop, release as a public good, and empower community hacking.
Language coverage is expanding (22+ languages now, 36 total including tribal Bheeli) and will continue to grow in breadth and depth.
Privacy and data sovereignty are critical; uncontrolled embodied AI poses risks of surveillance and Western‑centric bias.
Open‑source hardware is likened to Linux – it provides a neutral foundation that prevents lock‑in to any single vendor.
Scalability is already proven (15 million daily inferences) and future plans include smaller devices, mesh networking, and solar‑powered micro‑data‑centers.
Reciprocity and community rights over data are necessary; cultural creators need opt‑out and compensation mechanisms.
AI sovereignty means full control over the entire stack (chips, models, data, applications) for nations and communities.
The India–France partnership is seen as a model for joint research, resilient multilingual AI, and shaping global AI norms.
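The on‑device pipeline described in the takeaways (ASR → translation → LLM → TTS) can be pictured as a simple chain of local calls with no network dependency. The sketch below is purely illustrative: every function name and return value is a hypothetical stand‑in, not Bhashini’s or Current AI’s actual API.

```python
# Illustrative sketch of an offline speech-to-speech pipeline:
# speech in the user's language -> transcription -> translation ->
# local LLM answer -> translation back -> synthesized speech.
# All functions below are hypothetical stand-ins for quantized
# on-device models; none of this is the real Bhashini API.

def asr(audio: bytes) -> str:
    """Stand-in for on-device automatic speech recognition."""
    return "mausam kaisa hai"  # pretend transcription of a Hindi query

def translate(text: str, src: str, dst: str) -> str:
    """Stand-in for on-device neural machine translation."""
    lookup = {"mausam kaisa hai": "how is the weather"}
    return lookup.get(text, text)  # pass through if no entry

def llm(prompt: str) -> str:
    """Stand-in for a quantized local large language model."""
    return "It is sunny today."

def tts(text: str, lang: str) -> bytes:
    """Stand-in for on-device text-to-speech synthesis."""
    return text.encode("utf-8")

def answer_query(audio: bytes, user_lang: str = "hi") -> bytes:
    """Full offline round trip: user speech in, spoken answer out."""
    query = asr(audio)
    english = translate(query, src=user_lang, dst="en")
    reply = llm(english)
    localized = translate(reply, src="en", dst=user_lang)
    return tts(localized, lang=user_lang)
```

Because every stage runs locally, the chain works in zero‑connectivity settings; the practical constraint is fitting all four models on one device, which is where the quantization mentioned above comes in.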
Resolutions and action items
Launch of the India AI Innovation Challenge – an open‑source competition to hack the Bhashini–Current AI prototype, with submissions opening 25 Feb and prize funding from both organisations.
Bhashini commits to provide quantization expertise and technical support for challenge participants.
Current AI will continue to release the prototype hardware and software as public‑good assets and encourage community‑driven extensions.
Both organisations will pursue further reduction of device form‑factor, battery improvements, and mesh‑network capabilities.
Commitment to expand language coverage (including tribal languages) and enrich existing models with contextual glossaries.
Agreement to explore joint India–France research initiatives on multilingual, resilient AI and to contribute to global AI governance norms.
Unresolved issues
Exact standards and mechanisms for community‑driven data governance and how to enforce opt‑out/compensation for cultural creators.
How to balance open‑source hardware innovation with controlled, privacy‑preserving data sharing at scale.
Long‑term funding and manufacturing pathways to bring the device to mass‑market pricing.
Specific policies or regulatory frameworks needed to ensure AI sovereignty without fragmenting global interoperability.
Details of how third‑party trusted entities will be selected and governed for privacy‑preserving data stewardship.
Concrete metrics for measuring the impact of multilingual AI on marginalized communities beyond the demo.
Suggested compromises
Adopt an open‑source hardware base while applying controlled access to culturally sensitive datasets, allowing community opt‑out and compensation.
Use trusted third‑party institutions to manage privacy‑preserving data sharing, balancing openness with individual/collective rights.
Combine open‑source innovation (e.g., the hackable prototype) with public‑good licensing that mandates any commercial derivative to contribute back to the ecosystem.
Implement mesh networking and solar‑powered nodes to reduce reliance on proprietary cloud services while still enabling scalable inference.
Thought Provoking Comments
India’s real journey is no longer about pilots or promises. It’s about populations’ reach, clear use cases, last‑mile delivery – a connected vision for AI not governed by any one country or one company.
Frames AI development as a collective, inclusive effort rather than a proprietary race, setting a collaborative tone for the entire session.
Established the overarching theme of the discussion, prompting speakers to emphasize partnership, open‑source models, and multilingual inclusivity throughout the conversation.
Speaker: Sushant Kumar
Current AI was born out of the AI Action Summit… a public‑private partnership with a mission to create AI for the public interest, working with partners to identify gaps, develop technology together and release it as a public good.
Clarifies the strategic purpose behind Current AI’s involvement, highlighting a model of co‑creation and open‑source ethos that contrasts with typical corporate AI development.
Guided the dialogue toward how collaborations can be structured, influencing later remarks about open hardware, community ownership, and the launch of the India AI Innovation Challenge.
Speaker: Ayah Bdeir
I’m concerned about this new frontier of embodied AI… devices that continuously record, send data to the cloud, are trained on Western languages, and lock up innovation behind proprietary hardware – similar to how the iPhone created a hardware lock‑in.
Raises a critical ethical and technical risk of AI proliferation, linking privacy, cultural bias, and hardware monopoly in a single, compelling argument.
Shifted the conversation from showcasing technology to questioning its societal implications, leading to deeper discussion on data sovereignty, privacy‑preserving designs, and the need for open, controllable hardware.
Speaker: Ayah Bdeir
We have already digitized a tribal language, Bheeli, which has no script, and we aim to cover 22‑plus languages now, expanding to 36, ensuring no language or community is left behind.
Demonstrates concrete progress in linguistic inclusion, moving the abstract idea of multilingual AI into tangible achievements and highlighting the importance of preserving minority languages.
Prompted follow‑up questions about breadth vs. depth of language coverage, reinforced the theme of cultural preservation, and inspired optimism about scaling the platform to underserved communities.
Speaker: Amitabh Nag
I’m concerned about embodied AI… but I also see hope: open hardware can be like Linux – you can shrink size, improve battery, mesh devices, create micro‑data‑centers, and build applications for farmers, kids, tourists… the possibilities are infinite.
Balances the earlier warning with a visionary outlook, illustrating how open, modular hardware can democratize AI and foster endless innovation across sectors.
Catalyzed the discussion on practical use‑cases, sparked excitement about future applications, and set the stage for announcing the India AI Innovation Challenge.
Speaker: Ayah Bdeir
Should there be a set norm for AI, similar to French radio quotas that require a percentage of French music and film, to ensure cultural representation and funding for local creators?
Introduces the policy dimension of AI governance, drawing a parallel with existing cultural protection mechanisms and questioning how similar safeguards could be applied to AI.
Opened a new line of debate on regulatory frameworks, leading participants to discuss reciprocity, compensation for creators, and the balance between open source and cultural rights.
Speaker: Anne Bouverot
Sovereignty in AI means having complete control over all five layers – energy, data‑center, chips, models, applications – so no external entity decides for us. India is progressing but still lacks full stack control.
Provides a clear, layered definition of AI sovereignty, linking national security, economic independence, and technological autonomy.
Steered the conversation toward strategic national considerations, influencing later remarks on Indo‑French collaboration and the need for diversified, sovereign AI ecosystems.
Speaker: Abhishek Singh
What is the world you would like us to live in when AI and culture get it right? If we get it right, what does it look like in five or ten years?
A provocative, forward‑looking question that reframes the discussion from technical demos to long‑term societal vision.
Prompted panelists to articulate aspirational goals, linking technical work to broader cultural preservation, inclusivity, and policy, thereby deepening the conversation’s scope.
Speaker: Martin Tisne
Overall Assessment

The discussion was anchored by a series of pivotal remarks that moved it from a product showcase to a nuanced debate about the future of AI in society. Sushant’s opening set a collaborative agenda, which was fleshed out by Ayah’s articulation of Current AI’s public‑good mission and her warnings about embodied AI’s privacy and bias risks. Amitabh’s concrete example of digitizing a tribal language and Anne’s policy analogy introduced the cultural‑preservation and regulatory dimensions. Ayah’s hopeful vision of open hardware and the launch of the India AI Innovation Challenge turned concerns into actionable pathways. Finally, Martin’s visionary question and Abhishek’s definition of AI sovereignty broadened the dialogue to include long‑term societal and geopolitical implications. Together, these comments redirected the conversation from a technical demo to a strategic, inclusive, and ethically grounded roadmap for personal, local, multilingual AI.

Follow-up Questions
How can we involve communities in data sharing and ensure they have rights over their data?
Ensures ethical AI development and community empowerment by giving data contributors agency over their information.
Speaker: Martin Tisne (to Abhishek Singh)
Should there be reciprocity where communities benefit from the use of their data?
Addresses fairness and creates incentives for communities to contribute data if they receive tangible benefits.
Speaker: Martin Tisne (to Abhishek Singh)
How can we balance open‑source AI components with controlled data governance for cultural data?
Seeks a framework that preserves openness while protecting cultural heritage and respecting ownership.
Speaker: Martin Tisne (to Anne Bouverot)
How should open‑source AI be balanced with cultural‑data considerations?
Looks for practical ways to keep AI tools open while safeguarding culturally sensitive datasets.
Speaker: Martin Tisne (to Abhishek Singh)
What is the definition of sovereignty in the context of AI?
Clarifies national, community, and individual control over the full AI stack, informing policy and strategy.
Speaker: Martin Tisne (to Abhishek Singh)
What opportunities exist for France and India to jointly develop global norms for culturally inclusive AI?
Identifies avenues for bilateral cooperation to set standards that protect cultural diversity in AI systems.
Speaker: Martin Tisne (to Anne Bouverot and Abhishek Singh)
How can language coverage be expanded to include more languages, especially tribal languages without script?
Ensures no language is left behind, supporting linguistic diversity and inclusion.
Speaker: Amitabh Nag
How can models be enriched with contextual data such as place‑names from the Survey of India?
Improves model relevance and accuracy by incorporating localized geographic knowledge.
Speaker: Amitabh Nag
What privacy‑preserving, trusted‑third‑party platforms are needed for sharing sensitive data (e.g., health data) while enabling public benefit?
Balances individual privacy with societal gains, requiring robust governance and trust mechanisms.
Speaker: Anne Bouverot (and Abhishek Singh)
What technical and community standards are required for data sharing that respect cultural belief systems?
Creates ethically sound frameworks for data use that align with local customs and values.
Speaker: Abhishek Singh
How can mesh networks of devices be built to enable distributed inference and larger workloads?
Extends the capability of low‑power hardware, allowing scalable AI processing in offline or remote settings.
Speaker: Ayah Bdeir
How can the device’s cost, battery life, size, and aesthetics be improved to increase accessibility?
Reduces barriers to adoption, especially in low‑resource environments.
Speaker: Ayah Bdeir
What novel application domains (e.g., agriculture, toys, tourism) can be built on the open‑source hardware platform?
Demonstrates the versatility of the platform and its potential societal impact across sectors.
Speaker: Ayah Bdeir
How does multilingual AI affect cultural preservation and prevent language loss?
Addresses concerns that dominant languages may erode minority languages without targeted AI support.
Speaker: Ayah Bdeir
What are the trade‑offs in model quantization and how can accuracy be maintained?
Technical optimisation is crucial for fitting high‑fidelity models on edge devices without degrading performance.
Speaker: Shalindra Pal Singh
How can the prototype be scaled to production, including exploring hardware platforms beyond Jetson?
Ensures the solution can move from demo to widespread deployment across varied environments.
Speaker: Andrew Tergis
What grant mechanisms and funding models can sustain public‑good AI projects?
Provides financial support for ongoing development, community contributions, and open‑source maintenance.
Speaker: Ayah Bdeir
How can real‑world impact be measured and monitored (e.g., inference counts, latency dashboards)?
Enables performance tracking, informs improvements, and demonstrates value to stakeholders.
Speaker: Amitabh Nag
What legal frameworks are needed to protect cultural content in AI (e.g., analogous to French media quotas)?
Policy tools can ensure cultural representation and prevent homogenisation by dominant players.
Speaker: Anne Bouverot
How can artists retain rights, opt‑out, or receive compensation when their works are used to train AI models?
Addresses the tension between open data use and creators’ intellectual‑property interests.
Speaker: Anne Bouverot
How can an open‑source hardware platform be leveraged to foster hackathons and sector‑specific solutions?
Encourages community‑driven innovation, leading to diverse applications and rapid iteration.
Speaker: Abhishek Singh (India AI Innovation Challenge)

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.