How Multilingual AI Bridges the Gap to Inclusive Access

20 Feb 2026 14:00h - 15:00h


Session at a glance: summary, keypoints, and speakers overview

Summary

The session opened with Markus Reubi emphasizing that AI must serve the public good by supporting all languages and cultures, framing multilingual access as a democratic imperative and previewing the Geneva AI Summit 2027 as a venue for continued cooperation [4-5][6-9][15]. He announced that the Indo-Swiss Joint Research Programme will launch three new joint calls covering geosciences, social sciences, and One Health, and introduced a longer-term Indo-Swiss Research Framework that will include artificial intelligence as a high-priority topic [27-34][41-44]. New funding mechanisms such as Explore, Experiment, and Expand grants were also presented to foster novel collaborations, increase mobility, and host flagship events in both countries [45-49]. Nina Frey then introduced the panel, noting the focus on language diversity and inviting Amitabh Nag to discuss India’s Bhashini initiative, which targets 22 constitutionally recognized languages across speech, text, and OCR modalities [77-86][89]. Nag explained that Bhashini overcame a lack of digital data by field-collecting corpora from 200 volunteers and has already deployed a voice-first agricultural advisory system for farmers, while expanding to 36 languages and scripts without written form [99-108][93-95]. Aya Bedir of Current AI described the organization as a public-private partnership with $400 million pledged, dedicated to multilingual diversity and cultural preservation, and warned that large-tech data-scraping can treat communities as mere data rather than partners [124-136][158-164]. Alex Ilic presented the open-source Apertus model, highlighting a global talent shortage (only about a hundred experts can build foundation models) and argued that academia must receive compute, data, and benchmark resources to scale multilingual AI [183-190][194-202][210-218].
He noted that current training data is 60% English and outlined a plan to incrementally raise performance for the next hundred languages while leveraging collaborations such as with the ICAIN network [194-198][201-202]. Petri Myllymäki from the Nordic ELIS network stressed that language access is a human right, that cultural value frameworks differ, and called for inclusive global initiatives that invite all nations to the AI “dinner table” [224-232][236-239]. A representative from NTU Singapore described the C-Line model covering 13 Southeast Asian languages, emphasizing frugal data approaches, sovereignty concerns, and the need to reflect code-switching and dialectal variation in AI systems [250-263][265-272]. Annie Hartley illustrated the risks of deploying poorly adapted models in high-stakes medical settings, recounting a misdiagnosis in Ethiopia due to reliance on a Bible-trained model, and advocated for neutral academic validation through the MOVE project, which gathers real-world feedback [287-301][326-334]. She argued that such implementation science, though costly, is essential for ensuring models work accurately across diverse cultural contexts and for maintaining control, or “sovereignty”, over AI tools [338-347][354-357]. Across the contributions, participants agreed that multilingual, culturally aware AI requires coordinated funding, open-source models, talent development, and robust validation pipelines [8][237-239]. The discussion concluded with a reaffirmation that the upcoming Geneva AI Summit will serve as a platform to advance these collaborative efforts and embed multilingual equity into future AI governance [15][210][358].


Keypoints


Major discussion points


Multilingual AI as a democratic and public-good imperative – Swiss representatives framed language inclusion as essential for democratic participation and digital equity, citing the need to serve “all languages and all cultures”, describing linguistic exclusion as a “persistent barrier”, and calling multilingual access a “democratic imperative” [4-5]. They highlighted the open-source multilingual model Apertus (developed by ETH Zurich and EPFL) as a concrete example of a public-interest tool [14-15]. Later speakers repeatedly returned to the theme, stressing that language diversity underpins cultural preservation and equitable AI [68-69][135-136][225-233].


Indo-Swiss research collaboration and new funding programmes – Torsten Schwede announced three new joint calls (geosciences, social sciences, One Health) and the launch of an Indo-Swiss Research Framework Program, emphasizing “high-impact research” and “long-term co-created research” [27-34][41-45]. He also introduced new grant schemes (Explore, Experiment, Expand) and expanded mobility funding to sustain durable collaborations [46-48].


India’s Bhashini initiative: building multilingual data and applications – Amitabh Nag described Bhashini (Bhasha Interface for India) as a platform covering 22 constitutional languages, detailing the five technical pillars (ASR, text-to-text, text-to-speech, OCR, digital dictionary) [83-87] and the grassroots data-collection effort that created monolingual and bilingual corpora [99-105]. He gave concrete use-cases such as a voice-first agricultural advisory system for farmers [108] and the “Gyan Bharatam” manuscript project [108-109].


Public-private partnerships and open-source models to scale multilingual AI – Aya Bedir outlined the Current AI public-private partnership, its $400 million initial commitment (aiming for $2.5 billion), and its focus on multilingual diversity and cultural preservation [124-131][135-144]. Alex Ilic explained the Apertus model, the talent bottleneck (≈ 100 experts worldwide) and the need for academia-driven compute, data and benchmarks [183-194][195-200]. Petri Myllymäki reinforced the human-right framing of language access and the necessity of inclusive global initiatives [224-233][236-239].


Validation, real-world high-stakes testing, and sovereignty over AI tools – Annie Hartley warned that language-only performance is insufficient for high-stakes domains (e.g., medical advice in Ethiopia) and described the MOVE (Massive Open Online Validation and Evaluation) project to collect real-world feedback [287-298][329-337]. She linked this to broader concerns about “sovereignty” – the need for communities and nations to control AI systems rather than be passive data sources [158-166][266-272].


Overall purpose / goal of the discussion


The session served as a high-level convening of governments, research institutions, and public-private initiatives to (1) announce new Indo-Swiss research funding, (2) showcase concrete multilingual AI projects (Apertus, Bhashini, Current AI), (3) stress the democratic necessity of language inclusion, and (4) chart a collaborative roadmap, culminating in future summits (Geneva 2027) and joint validation efforts, to build a globally equitable AI ecosystem.


Tone of the discussion


The conversation began with formal, diplomatic language emphasizing partnership and policy [1-10]. As the agenda progressed, speakers adopted a more enthusiastic and celebratory tone when announcing funding and showcasing projects [27-34][77-89]. Mid-session, the tone shifted to reflective and cautionary, highlighting ethical concerns, data-ownership issues, and the need for community-centric approaches [158-166][287-298]. Throughout, the overall atmosphere remained collaborative and forward-looking, ending on an appreciative and hopeful note [358-364].


Speakers

Alex Ilic – Executive Director of the AI Center; Co-founder of ICAIN; expertise in multilingual AI model development and academic-industry collaboration [S1][S3]


Annie Hartley – Professor (EPFL/Yale); Director of the LIGHTS Lab (Laboratory for Intelligent Global Health and Humanitarian Response Technology); expertise in high-stakes medical AI applications and validation [S2]


Aya Bedir – CEO of Current AI; expertise in public-interest AI, multilingual diversity, and hardware-focused AI initiatives [S4][S5]


Markus Reubi – Swiss delegate/speaker representing Switzerland’s AI policy and multilingual AI agenda; expertise in AI governance and international collaboration [transcript]


Participant – Dean of the College of Humanities, Arts and Social Sciences at NTU Singapore; historian; expertise in cultural aspects of AI, multilingual models for Southeast Asia, and sovereignty in AI [S8]


Amitabh Nag – CEO of Bhasha Interface for India (Bhashini); expertise in multilingual speech, text, and OCR technologies for Indian languages [S11][S12]


Nina Frey – Executive Director of ICAIN (also referred to as ICANN in the transcript); expertise in network coordination for multilingual AI research and policy [S13][S14]


Petri Myllymäki – Founding member of ICAIN; representative of the ELIS Network and Finnish Supercomputing Centre; former member of the UN Age Lab; expertise in language preservation, human-rights aspects of language access [S15]


Torsten Schwede – President of the Swiss National Science Foundation; expertise in research funding, Indo-Swiss scientific collaboration, and multidisciplinary AI research [S16]


Additional speakers:


(None identified beyond the listed speakers)


Full session report: comprehensive analysis and detailed insights

The session opened with Markus Reubi framing multilingual artificial intelligence as a democratic necessity. He argued that “AI can only serve the public good if it serves all languages and all cultures” and described linguistic exclusion as “one of the most persistent barriers to digital participation” – a technical challenge that is also a “democratic imperative” [4-5]. Reubi placed this message within a broader international trajectory that began with the Paris 2025 public-interest AI process, continued at the India AI Summit 2026, and will culminate in the Geneva AI Summit 2027 [6-9]. He highlighted Switzerland’s contribution of the open-source multilingual model Apertus, developed by ETH Zurich and EPFL, as a concrete public-interest tool that underpins inclusive digital public services [14-15]. Reubi also underscored ICAIN’s role in providing equitable access to compute, data and multilingual models [12-13].


Torsten Schwede then announced a suite of new Indo-Swiss research initiatives. Under the Indo-Swiss Joint Research Programme (JRP), three calls were launched – one on geosciences, one on social sciences, and a recent One Health call that addresses the interconnected health of humans, animals and the environment [27-34]. He presented the Indo-Swiss Research Framework Programme as a long-term mechanism to co-create research, with artificial intelligence identified as a high-priority thematic area [41-44]. To stimulate novel collaborations, Schwede introduced “Explore, Experiment and Expand” grants that allow consortia to test blue-sky ideas, extend proven partnerships and increase mobility funding for sustained collaboration [45-48]. He also pledged a series of flagship events in both Switzerland and India to keep the network engaged [49-50].


The panel was introduced by Nina Frey, Executive Director of ICAIN. She noted that the network links academic partners across Europe, Africa and Singapore and that the session’s focus on language diversity reflects a “red line” running through the series of summits, from Bletchley to the present [54-60][68-69][70-72]. Frey also noted the presence of a board member from the Finnish Supercomputing Center, highlighting the importance of high-performance computing for multilingual AI [54-60]. She then handed the floor to Amitabh Nag to discuss India’s Bhashini initiative [73-75].


Amitabh Nag described Bhashini (Bhasha Interface for India) as a platform that initially covered the 22 languages enumerated in India’s Eighth Schedule and aimed to “transcend the language barrier using artificial intelligence” [77-86]. The programme targets five technical pillars – automatic speech recognition, text-to-text translation, text-to-speech synthesis, optical character recognition and a digital dictionary – all built for the 22 languages [83-87]. Since its launch, Bhashini has expanded to 36 languages, including scripts that previously lacked written form, and is actively digitising tribal languages [91-96]. A key obstacle was the “non-availability of digital data”, which was overcome by a field-based “brute-force” effort involving about 200 volunteers who collected speech, images and text to create monolingual and bilingual corpora [99-105]. Nag highlighted two early deployments: a voice-first agricultural advisory system that lets farmers ask questions in their native language, and the “Gyan Bharatam” manuscript digitisation project [108-109].


Aya Bedir presented Current AI, a public-private partnership that has secured an initial $400 million commitment (with a target of $2.5 billion) from the French government and multiple other national and philanthropic partners [124-131][132-134]. The initiative places multilingual diversity and cultural preservation at its core, extending its focus beyond language to behaviours, norms and artefacts [135-144]. Bedir warned that large-tech firms often “scrape data” and treat communities as mere data points, arguing that genuine progress requires “getting as close as possible to the communities themselves” and supporting them to preserve their own cultures [158-164].


Alex Ilic elaborated on the open-source foundation model Apertus, noting that only about a hundred experts worldwide possess the expertise to build such large-scale models and that both talent and high-performance computing are critical bottlenecks [183-186]. He explained that the current training data is 60% English and 40% non-English, limiting performance for many languages [191-196]. Ilic outlined a strategic plan to incrementally raise performance for the next hundred languages, assess the associated costs, and leverage ICAIN’s shared compute infrastructure [197-202][210-218]. He stressed the need for community-defined benchmarks that reflect cultural contexts rather than generic corporate metrics [191-196][197-200].


Petri Myllymäki, representing the Nordic ELIS network, framed language access as a human right, noting that “access to language and culture is a human right” and that AI systems must respect diverse value frameworks [224-233]. He called for inclusive global initiatives that invite every nation to the AI “dinner table” as guests, not merely as part of the menu [236-239].


A participant from NTU Singapore described the regional C-Line model, which supports 13 Southeast Asian languages (including Tamil) and is built partly on Apertus [250-263]. The speaker highlighted the model’s frugal data approach, its respect for national sovereignty, and its ability to handle code-switching and dialectal variation in everyday speech [264-272].


Annie Hartley illustrated the dangers of deploying inadequately adapted models in high-stakes settings. In Ethiopia, an AI system trained primarily on the Bible incorrectly advised a patient not to take insulin, demonstrating that “language-only performance is insufficient for medical advice” [287-295][296-304]. Hartley heads the LIGHTS laboratory (Laboratory for Intelligent Global Health and Humanitarian Response Technology), which coordinates the MOVE (Massive Open Online Validation and Evaluation) project to collect real-world clinical feedback and continuously improve models [329-337]. She argued for rigorous, real-world validation and positioned MOVE as a neutral, open-science platform, because commercial entities lack the incentive to test models in such critical contexts [305-313][338-345].


Agreements emerged across the discussion: all speakers affirmed that multilingual AI is essential for democratic participation, human rights and equitable digital development [4-5][57-58][224-229][126-132][158-164][190-196][250-263][288-295][78-86][89-95]; talent scarcity and the need for shared compute resources were identified as major bottlenecks [183-186][220-222][57-58]; and there was unanimous support for collaborative, multistakeholder funding mechanisms, such as the Indo-Swiss joint calls, the Explore/Experiment/Expand grants, and the public-private structure of Current AI, to sustain long-term research and deployment [27-34][41-44][45-49][124-131][124-134].


Disagreements were noted. Bedir advocated for frugal, community-led scaling that avoids exploitative big-tech data scraping, whereas Reubi and Ilic emphasized the necessity of high-performance computing and specialised talent to train foundation models [126-164][220-222][183-186]. A second tension concerned the role of big-tech versus academia: Bedir warned against “brute-force” data collection by large firms, while Ilic highlighted that current benchmarks are dominated by big-tech publications and called for academia-driven alternatives [126-164][183-186][27-34][41-44]. A third divergence related to validation focus: Hartley called for extensive health-sector testing, whereas Ilic’s remarks centred on model development and benchmark creation without explicit health-specific validation [287-295][183-186].


In conclusion, participants reaffirmed that the upcoming Geneva AI Summit 2027 will serve as a pivotal platform to advance these collaborative efforts [7-9][15][210-212]. Concrete action items include: launching the three Indo-Swiss joint research calls and the broader Research Framework Programme [27-34][41-44]; deploying the Explore, Experiment and Expand grants [45-49]; expanding Bhashini to cover all 100+ Indian languages, including script-less tribal languages [77-86][91-96]; continuing development and open dissemination of Apertus with improved, culturally relevant benchmarks [183-202]; and scaling the MOVE validation pipeline for high-stakes domains such as healthcare [329-337]. Unresolved issues, including sustainable financing for large-scale data collection, ethical data-ownership practices, standards for culturally relevant benchmarks, and mechanisms to balance national sovereignty with interoperable global models, were acknowledged as priorities for future work. The session closed with a collective commitment to maintain momentum through regular flagship events, shared compute resources, and ongoing multilateral dialogue [358-364].


Session transcript: complete transcript of the session
Markus Reubi

as a bridge to democratic access. Switzerland is very pleased to contribute to this global conversation at a pivotal time, a pivotal moment for responsible AI. Our message, which was supposed to be delivered by our president, is very clear. AI can only serve the public good if it serves all languages and all cultures. Today, linguistic exclusion remains one of the most persistent barriers to digital participation, ensuring multilingual access is therefore not only a technical challenge, it’s a democratic imperative. This discussion forms part of the international arc that began with the Paris 2025 public interest AI process, continues here at India AI Summit 2026, and will advance further when Switzerland will happily host the Geneva AI Summit. The Geneva AI Summit in 2027.

Our shared objective is continuity, cooperation and a genuinely global approach to AI governance. Switzerland is proud that this session brings together partners who embody open and collaborative innovation. India’s Bhashini Initiative, Current AI that emerged from the French AI Summit, and then many partners from the broader network of academic and policy institutions of ICAIN, the International Computation and AI Network. Such partners as ELIS, NTU Singapore and of course the Swiss partners ETH and EPFL. ICAIN really reflects Switzerland’s commitment to equitable access to compute, data and multilingual models. A notable example is Apertus, which maybe many of you have heard of. It was developed by ETH Zurich and EPFL, a fully open and transparent multilingual model designed to support public interest applications across diverse linguistic communities.

As we prepare for Geneva 2027, Switzerland views multilingual AI as a foundation for inclusive digital public services and for strengthening participation across societies. Allow me to briefly, just very briefly outline today’s agenda. We will begin with the announcement of the launch of the three new joint calls under the lead of the Indo-Swiss Joint Research Programme, JRP, which is making a further strengthening of our bilateral ties in science, innovation and research between Switzerland and India. This will be followed by a panel discussion. We have distinguished international guests and I’m very happy to announce that this will be moderated by my colleague Nina Frey, the Executive Director of ICAIN. Thank you so much for attending. I will hand over the floor to the next speaker, Professor Torsten Schwede, President of the Swiss National Science Foundation.

Very warm welcome. Thank you.

Torsten Schwede

Your Excellencies, ladies and gentlemen, namaste. It’s my great pleasure to be here today with you. It’s a moment to highlight a particularly exciting moment in the Indo-Swiss research collaboration. As many of you know, Switzerland and India have a long-standing, trusted partnership in research built on reciprocity, on joint excellence, and on shared priorities. Today, this collaboration is stronger than ever, and I’m delighted to announce three new calls for joint research projects, as well as the launch of our new Indo-Swiss research framework program, between the Swiss National Science Foundation and our Indian partner organizations. This is a really remarkable convergence that underscores both the depth and the breadth of our bilateral engagements.

The three calls for joint research programs span a very diverse range of disciplines and are designed to foster cutting-edge, high-impact research. The first two calls that we launched earlier this year are in the geosciences and in the social sciences. Together with the Indian Ministry of Earth Sciences, we are inviting proposals on natural hazards in mountain regions, a field of great relevance for both our countries as we are each facing very unique geological challenges. In parallel, our call with the Indian Council of Social Science Research opens the door for joint projects on pressing social and societal questions, again strengthening our collaboration in a domain where cross-cultural perspectives are significantly enriching the research outcomes.

And two weeks ago, the Swiss National Science Foundation, together with the Indian Department of Biotechnology and the Indian Council of Medical Research, launched a third call focused on One Health, a topic of real global urgency. This One Health call is particularly important for us. It reflects many months of preparation and close coordination with our Indian partners and embodies the holistic approach needed to understand the interconnected health of humans, animals, and the environment. The challenges we face in this area know no borders, and international collaboration is indispensable. We therefore anticipate a very high uptake and interest and participation of researchers in both our communities. Taken together, these three simultaneous calls represent an exceptional moment in Indo-Swiss research cooperation.

They showcase our commitment to enabling ambitious science, from fundamental research questions in the natural and the life sciences to complex issues shaped by society, geography or technology. And with each call, we reaffirm our shared belief that long-term co-created research is the key to addressing the major challenges of our times. So building on these strong foundations, now is the right moment to announce a new strategic long-term collaboration, the Indo-Swiss Research Framework Program between the SNSF and our Indian partner organizations. We aim to create a program in which all researchers wishing to contribute to the Indo-Swiss cooperation can find appropriate support. Thematic calls on strategic areas will be launched together with our Indian partners and remain at the core of this program.

And to this audience, it might not come as a real surprise that one of the high-priority topics we are currently considering is artificial intelligence. In addition to these bilateral and multilateral calls, I’m also pleased to announce that we are launching several new measures and funding schemes to support collaborative research. With our brand-new Explore, Experiment, and Expand grants, we want to give consortia the opportunities to explore new collaborations, new networks, new partnerships. We want to allow them to experiment with blue-sky thinking topics and methodologies that haven’t been tried before, but we also want to allow them to expand on already established functional collaborations and build them in an innovative way into the future.

We’re also increasing mobility funding for existing consortia to make sure that every project funded by our program can lead to a durable collaboration, impactful events that connect with the wider world and the wider society, and that early career researchers can truly benefit from the mobility and the capacity building. We plan to hold frequent flagship events, both in Switzerland and in India, to keep connecting the various partners of this program, from funding actors, beneficiaries of the calls, policy makers and prospective applicants to early career researchers. So make sure you follow our website and social media, and there are more updates coming soon. I want to extend my sincere thanks to all our partner organizations here in India for their continued trust and collaboration, and to the two research communities in both our countries that show a lot of enthusiasm and engagement in these programs.

So I encourage all interested researchers here in the room and out there to take advantage of these new opportunities and continue building the bridges that make our partnership so successful. Thank you. Thank you very much for your attention.

Nina Frey

Thank you so much. Thank you so much also from my side. My name is Nina Frey, or Katharina Frey as my colleague, or former colleague, Markus Reubi has introduced me. I am the executive director of ICANN, which is this network linking already academic partners from Europe, Africa, and Singapore. And I’m very glad that I have many representatives from the network that will be on the panel, and actually also one of the board members sitting in the second row from the Finnish Supercomputing Center. Thank you, Damian, for coming. So we have such a big panel representing ICANN that there’s not even a space for me, so I will be standing here. And I would like to invite my panelists to take seats on the different names.

I will introduce you and hand over the mic to you in a minute. Please have a seat. Turned out there was a seat for me. Yeah, I know, we do a group photo at 12:25. OK, so you have to bear with us this afternoon. [Crosstalk during the group photo.] Thank you.

Thank you. Thank you. Wonderful. Thank you so much. Thank you so much for bearing with us, for taking pictures. We actually talk about language, but let me think about an analogy to pictures. We’ll dive right into the importance of, I would say, as to the language question, obviously also the cultural and the contextual embedment of different AI in the different settings. So again, allow me to extend my thanks to all my distinguished panelists for coming, for also allowing us to show how this ICANN collaboration works from very different angles. The idea of this next 40 minutes is really to try to give a red line, I think you say, between the different summits. Actually, it started obviously in Bletchley, and I hope we can then showcase how this topic of language and cultural diversity was somehow present in all the different summits and unites us all.

Since we’re here in your host country, allow me to hand over the mic to you to talk about the ICANN collaboration, and also to share with us why Bhashini was founded. You had presented your work this morning to me and Alex. It was very impressive how it translated immediately live from Hindi to German to English. But please share with us maybe the next five minutes what your work is, what has it been, and where you’re going. Thank you.

Amitabh Nag

Yeah, thank you very much, Nina, and thanks for inviting me here. Bhashini stands for Bhasha Interface for India, so it is basically looking at 22 languages which are enshrined in our Eighth Schedule of the Constitution, which basically says that we will have these languages as the languages to start off with for our work in the regions. We started off as a program for transcending the language barrier using artificial intelligence. In these 22 languages, we have been able to do a lot of work. We had our own challenges, but the methodology which we followed was to collaborate with 70 research institutes across the country, and the problem statement was actually divided between all the 70 research institutes.

We were solving five problems. First was automatic speech recognition; that means the digital systems should be able to understand what we are speaking in all 22 languages. Then we are looking at the second piece of it, which is text-to-text translation, again bidirectional in all 22 languages. Third was text-to-speech, which was basically again that the digital system should be able to speak to you, again in 22 languages. And then we are looking at optical character recognition in 22 languages, and also our digital dictionary. The vocabulary in all 22 languages is not digital, so there was an attempt to digitize all the vocabulary which is around. That includes names of places, people, companies, etc.

We have till now achieved 22 languages in all the modalities. We have also increased the number of languages. Incidentally, in India, there are 100 languages which are spoken or written by at least 100,000-plus people. So our journey is not complete when we do 22 languages. We are moving ahead with more languages. So we now have 36 languages on text, and we are going to add more languages as we move forward. We also have languages which don’t have a script, and those are basically in the tribal areas. So we are attempting to digitize those as well; one of them has been digitized and will be launched in the next few days. In all of this, we had one basic challenge, which was the non-availability of digital data.

So the non-availability of digital data, which is oil to the AI models, was basically addressed for the first time in the world as a brute-force digital data collection. What we had done was that we had about 200-odd people who would go down on the field and speak to the people on a certain subject, pick up a picture or any other things so that it becomes the topic of discussion. We would create the monolingual corpus by requesting them to write the same thing, or a bilingual corpus if they are having two languages. And that is how we built the bare minimal digital data. Obviously, when we have done these things, the model is like a child.

It only read 100 books, so it will be as intelligent as those 100 books. So we realized that over a period of time we need to collect more data; that means give the child a thousand books so it is more intelligent. And that journey continues, so we have taken AI as a journey. But we haven’t waited for things to become perfect before we are in a position to launch them as a product. We launched them and built narrow use cases. Narrow use cases in the sense that, okay, let’s build something for the farmers. I will try to give two examples for want of time. One is that we have built an interface for the farmers where farmers in their own language can ask a question about agricultural advisory, and he or she is answered in that particular language. So it’s a voice-first and voice journey; that means I will be talking in voice and the answers will be coming in voice. This is a deployed system, so it is actually a very large system. The other thing we are now working on, which is one of the things which have been displayed here, is a project called Gyan Bharatam, where the manuscripts have been made interactive.

Plus we have multiple other use cases; perhaps I will come to them during the discussion. We have about 20-odd of them displayed here.

Nina Frey

Thank you, Amitabh. Thank you so much. I somehow assumed everyone knows it, but obviously I should introduce you as well, so apologies for that. Mr. Amitabh Nag is the CEO of Bhashini, the national language initiative, with which we will be collaborating; Alex will mention more on that later. But before that, I would turn a year back to Paris, where Current AI was started, coming out of the Public Interest AI Working Group, if I remember correctly. Mrs. Aya Bedir, you say Bedir? Sorry for that. She is the quite recent CEO of Current AI, a very, very important initiative that, among others, also focuses on the topic that we’re talking about.

But please, Aya, I know you come from a wrong background also in hardware. You are launching, I think, this afternoon something very impressive. That also helps… the importance of language diversity. Could you share with us some of your key focus interests and also why you so focus on hardware? Thank you.

Aya Bedir

Thank you so much for having me. So, my name is Aya Bedir. Yes, I did join recently, about a month and a half ago exactly, so I am really feeling the very warm welcome in India. Current AI was an initiative that came out of the French AI Summit. The founder, Martin Tisné, was the special envoy of President Macron at the summit, and the initiative essentially has a vision for AI that is global, collaborative, and collective. The idea is that we acknowledge that some of the biggest tech companies are really governing our lives, and governing AI and the way we consume it day to day. There are a handful of these companies; they are big, they have scale, they have a lot of financial resources, and they are very ambitious.

And so the initiative acknowledges that to be able to stand a chance to be an alternative and a counterpart to these large companies, we must fight scale with scale. Obviously there is lots of interesting work happening in public interest AI around the world, but oftentimes the work is distributed and decentralized, sometimes it’s duplicative, and it’s not always additive.

And so, as a result, Current AI has this vision that we need to bring together and bring more collaboration into the space, but also raise the level of ambition and of financial scale. Current AI is a public-private partnership between philanthropy, the private sector, and government. It has initial commitments of about $400 million, but the ambition is to get to $2.5 billion and hopefully more. The initial commitments are from the French government. There are also multiple other government partners, including the Indian government, the Kenyan government, the Moroccan government, and many others, as well as the MacArthur Foundation, the Ford Foundation, McGovern, and a few others, and the private sector, so Google DeepMind, Salesforce, and others.

So it really is a public-private partnership, with the intention of bringing everybody around the table who has the same commitment to public interest AI, to AI that works for individuals and for the public good, and one of the main vehicles for doing that is investing in open source. Language has been a priority for Current AI ever since its inception; this priority was called Multilingual Diversity, which I know is something everybody here is committed to, and that we’ve been hearing a lot about over the past few days. I joined about a month ago, I’m myself very passionate about the topic, and I have expanded it to be about culture, diversity, and culture preservation.

So it’s really not just about language. It’s also about acknowledging that culture exists in many facets. Language is one of them, but there are also behaviors, there are norms, there are artifacts, physical and digital, and there are many things that are digitized and non-digitized. So we now talk about culture preservation as one of our big priorities, and it’s something that we’ll be doing a lot of work in. As part of the culture preservation work, when I came in there had already been conversations between Current AI and Bhashini about doing a collaboration together for the summit. And to be honest, I fell in love with the work that Amitabh and his team were doing and the care that they were taking with the data.

And I was like, oh, my gosh, this is so cool: really the fact that they were going sort of to the source and getting a lot of this knowledge, not just data, this knowledge about the language from individuals and from the communities themselves, no matter how small they were. And so we ended up collaborating on a device that will launch later today at 3:30 in Room 10. I hope you all can attend. I’m not going to say much about it because there’s a drumroll situation that will happen, so you all can come see for yourselves. But the intention of the device is to really get as close as possible to the individuals and the communities themselves.

There is one concern I have, which could be kind of a negative repercussion of having so much attention on multilingual diversity: the assumption that a lot of the big companies and big players have to do all the work. It’s interesting and positive that the big tech companies are saying they will make commitments to more multilingual diversity and more languages.

That’s good. But oftentimes, when they take these leadership positions, there’s a brute-force kind of methodology that they deploy because of the scale at which they operate. Oftentimes it’s about scraping data, or taking data without licensing it. It’s about treating individuals and communities as data, whereas they are people, not data. That’s my concern in this area, and I believe we have to get as close as possible to the communities themselves and invite them and support them in doing that kind of work themselves. It’s really about them preserving their own cultures and languages, and not about us doing it for them in a somewhat condescending way.

I’ll also say one last thing: I myself grew up in Beirut, in Lebanon, a very tiny country that everybody has heard of, sometimes for good and not-so-good reasons. The Arabic-speaking world is also very concerned about representation in AI. We have thousands of different cultures and dialects within Arab culture, and we have varying degrees of resource availability across Arab countries: some are very well resourced financially from a government perspective, while others have very scarce access to resources. So I’m also very interested in thinking about AI that is more resilient, that operates from scarcity, from frugality, and from a limited amount of resources, and in looking at that as a positive as opposed to a negative.

That’s something that Current AI will be prioritizing in a big way, and we hope to do more of it. So I hope to see you all at 3:30.

Nina Frey

Thank you so much, Aya. You mentioned the many announcements that were made by private companies to start collecting data; I think it’s fantastic to see that governments can do that as well, and that you also invest in this public-private partnership. Allow me to hand over to my colleague sitting to my left, because I think you can showcase how public institutions like universities can also train a multilingual model from scratch. Scratch, not stretch; it was probably a stretch sometimes. Let me introduce Dr. Alex Ilic. He founded and is the executive director of the ETH AI Center, and is also a co-founder of ICAIN. Please, could you share your experiences with Apertus, this multilingual model, and maybe also mention something on Swiss AI and on how we might present the Indian languages in Apertus next year?

Alex, please.

Alex Ilic

but basically we were able to build this model, and one of the key bottlenecks we identified is that it’s not just the infrastructure, where currently a lot of money is going, but also the talent. Outside of big tech, you have maybe 100 people on the planet who have the experience and capabilities to build such foundation models, and that’s not enough. That’s something academia can change, and that’s why it’s important: we don’t just need supercomputers and data centers for the companies, we need them to empower academia. It is very, very critical that we push this very strongly. We named the model Apertus, Latin for open, because we want it to be a foundation that everyone can take and build on top of.

So it’s not something that we force upon someone; it’s something that can be a thriving community where each university, each project, each country gets a step further. I think we will hear a little bit later from the perspective of the Apertus Foundation and also from Singapore and from India. There are not many countries that recognize how important it is, as public infrastructure, to take seriously the development of your own benchmarks and your own data sources. Because today, still, if you read LinkedIn or wherever, the majority of the discussion is driven by benchmarks that the big companies are publishing. And surprise: in every benchmark they publish, they are, of course, the best, because they pick whatever metric suits them.

And I think these metrics should be driven by what we want AI to be in our cultures and regions, to empower them. We have 1,000 languages included because we trained the model with data from the Internet. As you know, the Internet is not the most diverse data source there is: 60% of the data in our training set is English and 40% is non-English. So what we’re thinking about now, strategically, is how we can increase the number of languages that are close to the performance we see in English, step by step, for the next hundred languages and so on. This is important because many companies going into that area say, oh, we sponsor a data collection effort, but they just do it on a best-effort basis.

You do something, and you don’t know whether it actually moves the needle. So the next step for us is that, with all the experience from Bhashini and other parts, I think we can now find out very strategically how much it costs to raise the bar significantly, not just to make a check mark. That is also the hope for connecting forward through the mission of ICAIN, and in Geneva next year we can present how much progress we could make: where we stand today, what is really usable and economically usable, and how to elevate this. I think that’s super critical.

And, yeah, we’re also very happy that there is a version of SEA-LION that’s already built on Apertus, and we want to extend the collaborations now globally. For the researchers, we also have a very strong international program where we basically share our compute infrastructure. That’s also very unique, and we would like to see other countries do that too, because we know that, for where we stand with AI today, we’re maybe at 5% or 10% of the potential to train the next models that include more data, become more aware of the physical world, and so on.

We need more compute. We need to team up, and I think it’s also a question of how we can collaborate more and share more. With ICAIN, in the beginning, we identified the bottlenecks: compute, which is why we have strong compute representation; data and benchmarks; and talent. On these three capabilities we need to jointly increase, and whoever doesn’t have them should be able to easily get the data and the benchmarks, so that all sides can do it themselves, basically. Thank you.

Markus Reubi

Thank you so much, Alex, and also for emphasizing the talent need. If I can just add to what you mentioned about the talent and the capabilities in knowing how to train a model: looking at the supercomputing representatives, it is also a talent to know how to build up such an HPC, so I think that’s something we could add to the table. But allow me to come back to the models themselves and to very concrete applications. Now I’m turning first left, to the north, to the Finns. Petri, you’re here obviously as a founding member of ICAIN, representing the ELLIS Network, but also Team Finland, if I can say that. You were also a member of the UN Secretary-General’s High-Level Advisory Body on AI, where one of the recommendations was exactly this, that we collaborate. Could you say more from the perspective of the Nordics? You already had your own language models, but maybe you can also share some thoughts on why you recommended that to the world, if I may ask.

Petri Myllymäki

Yes, thank you. Happy to be here. Indeed, as you all know, the Nordic languages are not the biggest major languages in the world, so obviously we take the preservation of our languages and cultures very seriously. Talking about the UN’s High-Level Advisory Body on AI: there was just upstairs a kind of handover to the new International Independent Scientific Panel on AI. One thing I learned in this UN advisory body, which I didn’t know before, was that access to language and culture is a human right, one of the human rights that all the countries in the world have accepted.

To me this was a surprise, and a pleasant surprise. Language is already important because we operate with language, but, as Aya was saying, even more important is the culture behind the language. We have different value frameworks and norms in different countries. So if there is a one-size-fits-all, English-version AI that we all start to use, what is the value framework behind that? I think this is a very critical issue. Another thing I learned at the UN is that there are several global initiatives towards making this more accessible to all countries. Seven of the 193 UN member states are included in all of these initiatives.

119 countries are included in none. So initiatives like ICAIN or Current AI, and summits like this one, are very important to make this more inclusive. Let me now shamelessly steal a quote from Yoshua Bengio, who was just upstairs, saying that we need to make sure that all the countries in the world are invited to the dinner table as dining guests, not as part of the menu. I thought this was hilarious.

Markus Reubi

Thank you so much. Thank you so much for sharing. I hadn’t heard it, but I think it’s a good quote we can take up, because food, obviously, is also culturally very diverse. Thank you. So let me turn from the north to the south, to Singapore. You joined quite recently, I think, NTU Singapore, which is also the newest member of ICAIN. You had already developed the SEA-LION model, and I think you will share something on it; for the ASEAN region it is the famous language model. But you also already had a collaboration with Apertus. If time allows, because I also need to keep to the time, you could also mention something on the importance of sovereignty and language.

Please.

Participant

It’s wonderful. At NTU Singapore we’re the newest members of ICAIN, and it’s fantastic. I’ve only been at NTU Singapore for six months, but the conversation we’re having here is the same conversation we’re having there: about the importance of multilingual diversity, the importance of getting close to the ground, the importance of culture as well as tech. I’m the dean of a college of humanities, arts, and social sciences, and I’m a historian. My college is in the lead, collaborating with computer science at NTU, engaging with SEA-LION, and thinking about AI in the context we’re talking about. So I just want to point out that it is very, very important that this is about culture: thinking about cultural diversity, how AI models reflect culture, how we engage with culture and history, and so on, as well as simply technology.

And I think that’s something that’s very evident in this conversation, so it’s great to be part of this club. SEA-LION is a language model that reflects 13 languages across Southeast Asia. In fact, it includes Tamil, because Tamil is a Southeast Asian language and an important national language within Singapore, with aspirations to expand and potentially connect beyond to other parts of Asia. It’s a nationally funded initiative, but it connects outward from Singapore: it’s part of Singapore’s public infrastructure, but it connects regionally and has good connections with private-sector providers across Southeast Asia, in Indonesia, and elsewhere. And, as we were hearing a moment ago, there are a number of different versions of it, one of them built on Apertus, so there’s a real synergy here.

And I just want to flag the connection between Singapore and Switzerland: they’re both multilingual, multicultural, relatively small societies, so there’s a very obvious collaboration there. Echoing again something that was said earlier: when we’re thinking about AI and the relationship between culture and language, we’re also interested in frugality, in using resources effectively, and in thinking about how we can draw on deep truths about language and culture without vast amounts of data, with relatively small amounts of data. We have languages like Laotian, Khmer, etc.

within SEA-LION, and so colleagues are really thinking hard about how you leverage relatively small amounts of language data to produce an effective model. Just a couple of additional points, and I’m looking at the clock. Sovereignty is the big word within this AI summit, and I am actually, in some ways, a historian of sovereignty at the moment. Sovereignty means power. It’s a power that we want for ourselves, for our communities, for our nations and states. But in a sense it’s also about individuals, and there’s a complicated relationship between those two things. So I just wanted to reflect on the importance of sovereignty, in that we’re talking about the sovereignty of societies that are neither the US nor China in this discussion.

The two big superpowers, maybe. This discussion is about how we can think about a world that is multipolar and multicolored, and that reflects the fact that sovereignty is actually dispersed in the world in which we live, and that’s very important. That echoes Indian principles of non-alignment that go back to the 1940s and so on. I don’t know if I’m allowed to use that phrase in today’s India, but anyway, it’s a similar set of principles that we’re talking about. The dispersal of sovereignty and power that we’re talking about here is important, but as part of that, I think, we should also reflect on the limits of the nation state and the limits of national approaches to language.

In that, we all live in environments in which people speak in complicated ways. These are multilingual societies in a minute-by-minute way. People code-switch: they’re speaking Hindi one minute, English the next, Swiss German, and so on. Similarly, in Singapore, people speak Mandarin Chinese, then a Chinese dialect, and then English. So sovereignty is crucial, but if we’re interested in the sovereignty of the individual and the power of individuals, then we need a more nuanced account of language that allows for things like code-switching, dialects, and so on. And that’s something we’re very much interested in at NTU.

Markus Reubi

Thank you so much, and thank you to all the speakers. Allow me also to let Annie speak, because you’re obviously from South Africa, but now living in Switzerland and the US, and you lead these linkages between medicine and AI. I always think you explain very concretely in your work what happens if you just take an English-language model trained on a tiny set of local data, and how you experience that in the medical field in reality. So could you share something on that, and obviously also on your role in ICAIN? Thank you, Annie. Professor Annie Hartley, she’s at EPFL at the moment, and also at Yale. Thank you.

Annie Hartley

Yeah, thank you very much. I’ll take it down to the ground then, to the consequences: what really happens when you are at a patient’s bedside and you ask questions that are high stakes. Something I do to test these models in different places, because we are rolling out these tools in different hospitals around the world, is ask the same very high-stakes question: how to treat diabetic ketoacidosis, which is a diabetic crisis in a child. I did this recently in Ethiopia in a language that’s not very well known, Afaan Oromo, and the model responded: thou shalt not eat insulin on a Tuesday.

And I did share this advice, because I thought it was actually very good advice: you should not eat insulin any day, actually. But it comes down to something that is really, really important, and I’m stating the obvious. The model is obviously only trained on the Bible, right? That is what’s available; it is the one book that is available in every single language in the world, and so you get these biblical kinds of terms. But the Bible isn’t necessarily very accurate in medicine or other things, depending on where you’re coming from. The thing is that you can’t rely on these models to make these decisions, because they are inequitably inaccurate in the places that need them most. We know they’ll be inaccurate, but the point is that if we are collecting this kind of information, we have to make an effort to collect it in the highest-stakes environments and in those contexts. So if you have use cases for collecting language, it’s interesting to collect it from historical texts or to represent culture, of course, but I think something that has a much bigger urgency is the urgent questions. These are high-stakes decisions we are making, and people will believe that a model performs well just because it speaks the language; they might get a false sense of security if we don’t really train it to be accurate on the questions that people rely on these tools for the most.

This is why, when we collect languages and when we try to test these tools in reality, we have to make sure that we represent those kinds of contexts. And that’s what we are doing. I lead a lab called LIGHTS, the Laboratory for Intelligent Global Health and Humanitarian Response Technology. So obviously I’m interested in these high-stakes environments and in cultures that are so underrepresented that they will never be represented by any kind of large commercial enterprise. No commercial entity has ever said: there’s a great place to make money, and it’s that war zone. Okay, unfortunately they have, I suppose. But the point is that people don’t want to represent that kind of place, because it’s not in their interest.

And this means that it is so important for academia to play a role. We don’t just play a role because we have expertise; we play a role because we do something that commercial entities cannot provide: we are neutral, and we create a neutral space for this kind of collection of data to represent the needs of people, and also to make sure that we can test it in reality. This is why we can do open science: because we don’t have any money in the game to lose. And the most important thing, when we do represent these languages, is not just to represent them and be happy about it, which is the first step, but to go the extra mile and actually test whether the languages are being represented as you expect them to be.

Take some of my patients, for example: the model might speak their language, but does it speak their language in the way that they expect, and do they follow the advice or not? This is a really important thing to test in these high-stakes environments. My patients come to me in South Africa, where we speak 11 official languages, and in Kaga the way of explaining certain things is very different; it sometimes gets translated into English in a strange way. So one of my patients came to tell me: I’ve got elephants running in my head. I know exactly how to respond, because that’s my culture.

I’m South African. But what would an AI respond? Or: I have a pregnancy in my knee. I’m pregnant in my knee; that’s what the patient came to tell me. And actually it doesn’t come from a mistranslation; it comes from the way that people understand how their bodies work, and this is very, very cultural. What is the next most likely word after “pregnancy in my knee”? So it’s really important that we understand how language works when it concerns our bodies, and that we make sure we get feedback from reality. This is what we’re trying to do.

So, starting with ICAIN, we have made a flagship project called MOOVE. It stands for Massive Open Online Validation and Evaluation. It’s about getting real-world signals from real people in high-stakes decision-making processes, from our doctors and from the people on the ground in different countries around the world, and getting information on how they are using any tool, because we are neutral: if any tool comes out, any new model, we can test it. Then we see how it works, and when it breaks, we don’t just say, oh, this model is bad in this setting and that model is good; we really try to get that information and put it back into the model to continuously improve it.

So it is about learning from reality, learning from the real workflows of how people use models. And I think that’s important: to represent reality, not just the language, but the reality that the language functions in. The last thing I’d like to say is that this does cost a bit more money, and it’s not the traditional way of working in science; people don’t appreciate that implementation science is science. It’s such a fantastic opportunity, where we can actually measure the impact of the models we are making, feed that back into our models, and really create impact-driven models. To run these trials is ambitious, but we do need to start asking for different kinds of funding and being more ambitious. I think academia needs to be more ambitious, because we represent something that is actually very important these days, and very rare, which is this neutrality.

When OpenAI updates a model from 4.5 to 5, or to 5.1, did they ask your permission? No, right?

Did they ask the doctors who had validated those models for their context? No. We need control. We need to know how these tools work in reality, and we need to be able to control them. So sovereignty, for me, is control of the tools and control of the environment, and understanding how these models work in reality.

Markus Reubi

Thank you. Thank you so much, honestly. Thank you, everyone, for keeping the time and for making sure that we are actually creating and controlling the menu, to also steal Professor Bengio’s words, and for contributing here. We will be more than happy to update you, hopefully next year, on our further work. Thank you, everyone, for coming and for staying with us, and thank you to the speakers.

Factual Notes: Claims verified against the Diplo knowledge base (7)
Confirmed (high)

“AI can only serve the public good if it serves all languages and all cultures.”

The knowledge base explicitly states that “AI can only serve the public good if it serves all language” confirming the claim.

Additional Context (medium)

“Linguistic exclusion is one of the most persistent barriers to digital participation.”

Language barriers are identified as a persistent challenge limiting participation in digital governance contexts, providing supporting context for the claim.

Confirmed (high)

“The Geneva AI Summit will take place in 2027.”

The knowledge base references the Geneva AI Summit in 2027, confirming the date.

Confirmed (high)

“The India AI Summit is scheduled for 2026.”

The India AI Impact Summit 2026 is mentioned in the knowledge base, confirming the year.

Additional Context (medium)

“The Paris 2025 public‑interest AI process is part of the international trajectory.”

A Paris AI Action Summit is referenced, though the knowledge base does not specify the 2025 date; it provides contextual support for a Paris‑based AI summit.

Confirmed (high)

“Switzerland’s contribution of the open‑source multilingual model Apertus, developed by ETH Zurich and EPFL, underpins inclusive digital public services.”

Apertus is described as a Swiss “radically open” multilingual model, confirming its existence and Swiss origin; the source adds detail about its open development process.

Additional Context (medium)

“ICANN provides equitable access to compute, data and multilingual models.”

ICANN’s role in fostering an inclusive and accessible internet infrastructure is highlighted, offering contextual support for its broader equitable‑access mandate, though the source does not mention compute or multilingual models specifically.

External Sources (76)
S1
How Multilingual AI Bridges the Gap to Inclusive Access — – Alex Ilic- Annie Hartley- Nina Frey – Markus Reubi- Amitabh Nag- Alex Ilic- Petri Myllymäki- Participant – Markus Re…
S2
Democratizing AI: Open foundations and shared resources for global impact — Bernard Maissen, Mary-Anne (“Annie”) Hartley, Mennatallah El-Assady Hartley specifically noted the challenge of competi…
S3
How Multilingual AI Bridges the Gap to Inclusive Access — – Alex Ilic- Annie Hartley – Aya Bedir- Annie Hartley
S4
How Multilingual AI Bridges the Gap to Inclusive Access — Thank you Amitabh. Thank you so much and I somehow assumed everyone knows it but obviously I should introduce you as wel…
S5
https://dig.watch/event/india-ai-impact-summit-2026/how-multilingual-ai-bridges-the-gap-to-inclusive-access — And I was like, oh, my gosh, this is so cool. and really the fact that they were going to sort of the source and getting…
S6
How Multilingual AI Bridges the Gap to Inclusive Access — – Markus Reubi- Amitabh Nag- Alex Ilic- Petri Myllymäki- Participant – Markus Reubi- Torsten Schwede- Aya Bedir- Alex I…
S7
IGF Retrospective – Past, Present, and Future — – **Markus Kummer** – Role/Title: Former MAG chair, head of the Secretariat from 2006-2010 | Area of expertise: Governme…
S8
WS #280 the DNS Trust Horizon Safeguarding Digital Identity — – **Participant** – (Role/title not specified – appears to be Dr. Esther Yarmitsky based on context)
S9
Leaders TalkX: Moral pixels: painting an ethical landscape in the information society — – **Participant**: Role/Title: Not specified, Area of expertise: Not specified
S10
Leaders TalkX: ICT application to unlock the full potential of digital – Part II — – **Participant**: Role/Title not specified, Area of expertise not specified
S11
Inclusive AI_ Why Linguistic Diversity Matters — -Amitabh Nag- CEO of Bhashini
S12
Digital Democracy Leveraging the Bhashini Stack in the Parliamen — – Kritika K.R.- Amitabh Nag – Prasanta Ghosh- Amitabh Nag
S13
https://dig.watch/event/india-ai-impact-summit-2026/how-multilingual-ai-bridges-the-gap-to-inclusive-access — I will introduce you and hand over the mic to you. In a minute. Please have a seat. turned out there was a seat for me y…
S14
How Multilingual AI Bridges the Gap to Inclusive Access — As we prepare for Geneva 2027, Switzerland views multilingual AI as a foundation for inclusive digital public services a…
S15
How Multilingual AI Bridges the Gap to Inclusive Access — Petri Myllymäki from the Finnish Supercomputing Centre and ELIS Network emphasized that access to language and culture i…
S16
How Multilingual AI Bridges the Gap to Inclusive Access — -Torsten Schwede- President of the Swiss National Science Foundation, involved in Indo-Swiss research collaboration
S17
https://dig.watch/event/india-ai-impact-summit-2026/how-multilingual-ai-bridges-the-gap-to-inclusive-access — As we prepare for Geneva 2027, Switzerland views multilingual AI as a foundation for inclusive digital public services a…
S18
AI for Democracy_ Reimagining Governance in the Age of Intelligence — Global governance of AI is a precursor for a democratic development and evolution. And we need to continue to develop an…
S19
From High-Performance Computing to High-Performance Problem Solving / Davos 2025 — A key point of discussion was the need for ecosystem development around quantum computing, involving collaboration betwe…
S20
Multilingual Internet: a Key Catalyst for Access & Inclusion | IGF 2023 Town Hall #75 — Education, government support, and enhanced infrastructure are also necessary to promote inclusivity and diversity in in…
S21
Scaling Trusted AI_ How France and India Are Building Industrial & Innovation Bridges — one of our keynote speakers, they said autonomous weapons are going to AI-based autonomous …
S22
What is it about AI that we need to regulate? — The discussions across multiple IGF 2025 sessions revealed that over-reliance on AI-powered content moderation systems p…
S23
Responsible AI for Shared Prosperity — The balance between open-source development and community sovereignty presents ongoing challenges. While open-source app…
S24
WSIS Action Line C8: Multilingualism in the Digital Age: Inclusive Strategies for a People-Centered Information Society — Low to moderate disagreement level with high strategic significance. While speakers agreed on fundamental goals of lingu…
S25
WS #144 Bridging the Digital Divide Language Inclusion As a Pillar — The speakers demonstrated remarkably high consensus across multiple dimensions: the need for paradigm shift from English…
S26
Leaders TalkX: Local to global: preserving culture and language in a digital era — Government-led national strategies are essential for language preservation Goyal presents India’s Bhasani program as a …
S27
Global Perspectives on Openness and Trust in AI — “It was this project that brought together over a thousand researchers … to try and create an open source large langua…
S28
Main Topic 2 –  European approach on data governance  — – The intricacies of data ownership in medical and biomedical research, with Merquiol discussing the current ambiguities…
S29
AI, Data Governance, and Innovation for Development — A key challenge identified was the lack of locally relevant datasets, with panelists stressing the importance of develop…
S30
Ethical principles for the use of AI in cybersecurity | IGF 2023 WS #33 — Increased transparency in software and AI-based solution composition is supported. The initiative of a “software bill of…
S31
WS #110 AI Innovation Responsible Development Ethical Imperatives — – Addressing data ownership and concentration issues
S32
How Multilingual AI Bridges the Gap to Inclusive Access — The discussion shows remarkable consensus on goals (multilingual AI, cultural preservation, community empowerment) but r…
S33
How Small AI Solutions Are Creating Big Social Change — Employ both text and speech-based approaches to address low-resource languages, recognizing that many languages may be b…
S34
S35
Responsible AI for Shared Prosperity — Hybrid approach combining open-source model development with community-governed deployment to balance innovation with lo…
S36
Building Scalable AI Through Global South Partnerships — The institute’s breakthrough came through systematic re-evaluation, leading to three critical insights. First, governmen…
S37
Open Forum #33 Building an International AI Cooperation Ecosystem — Development | Economic | Capacity development Innovation Ecosystems and Practical Implementation The speaker argues th…
S38
Democratizing AI Building Trustworthy Systems for Everyone — Financial mechanisms | Artificial intelligence | Capacity development Natasha describes a collaborative initiative with…
S39
WS #119 AI for Multilingual Inclusion — Jesse Nathan Kalange: Okay, all right, thank you very much. And PAIAG, we also promote gender equality. So we make su…
S40
Searching for Standards: The Global Competition to Govern AI | IGF 2023 — Tomiwa Ilori:Thank you very much, Michael. And quickly to my presentation, I’ll be focusing more on the regional initiat…
S41
Main Session on Artificial Intelligence | IGF 2023 — Policy influence often comes from multilateral systems. They strive to improve their AI tools through an iterative appr…
S42
Inclusive AI For A Better World, Through Cross-Cultural And Multi-Generational Dialogue — Demands on policy exist without the building blocks to support its implementation Lack of infrastructure, skills, compu…
S43
WS #150 Language and inclusion – multilingual names — The experts agreed that while progress has been made, significant work remains to be done in areas like improving user e…
S44
Multilingualism — The promotion of multilingualism requires appropriate governance frameworks. The initial elements of such frameworks hav…
S45
Green AI and the battle between progress and sustainability — AI is increasingly recognised for its transformative potential and growing environmental footprint across industries. Th…
S46
Comprehensive Report: China’s AI Plus Economy Initiative – A Strategic Discussion on Artificial Intelligence Development and Implementation — One of the most striking revelations came from Yutong Zhang’s discussion of Moonshot AI’s resource efficiency in develop…
S47
Interdisciplinary approaches — AI-related issues are being discussed in various international spaces. In addition to the EU, OECD, and UNESCO, organisa…
S48
Artificial Intelligence & Emerging Tech — In conclusion, the meeting underscored the importance of AI in societal development and how it can address various chall…
S49
Open Forum #82 Catalyzing Equitable AI Impact the Role of International Cooperation — Multi-stakeholder cooperation and inclusive governance frameworks are essential
S50
Can we test for trust? The verification challenge in AI — ## Rapid-Fire Policy Recommendations Adams emphasized that current testing paradigms fail to account for how AI systems…
S51
When language models fabricate truth: AI hallucinations and the limits of trust — AI has come far from rule-based systems and chatbots with preset answers.Large language models (LLMs), powered by vast a…
S52
How Multilingual AI Bridges the Gap to Inclusive Access — In that, we all live in environments in which people speak complicated… They’re multilingual societies in a minute-by…
S53
ElevenLabs Voice AI Session & NCRB/NPMFireside Chat — Language barrier and need for multilingual inclusion
S54
Pre 11: Freedom Online Coalition’s Principles on Rights-Respecting Digital Public Infrastructure — Transparency and public participation are essential for democratic DPI
S55
Switzerland launches Apertus, an open multilingual AI model — Switzerland has launched its first large-scale open-source language model, Apertus, developed by EPFL, ETH Zurich, and the Swi…
S56
Democratizing AI: Open foundations and shared resources for global impact — – Mary-Anne Hartley- Leslie Teo Development | Legal and regulatory | Sociocultural The Swiss-made LLM represents the l…
S57
Open Forum #82 Catalyzing Equitable AI Impact the Role of International Cooperation — 4. Establish research programmes and joint funding initiatives
S58
https://dig.watch/event/india-ai-impact-summit-2026/how-multilingual-ai-bridges-the-gap-to-inclusive-access — And two weeks ago, the Swiss National Science Foundation, together with the Indian Department of Biotechnology and the I…
S59
Open Forum #36 Challenges & Opportunities for a Multilingual Internet — Government Initiatives for Promoting Multilingualism Need for government initiatives to promote multilingualism Pradee…
S60
Sovereign AI for India – Building Indigenous Capabilities for National and Global Impact — The Bharat GPT consortium exemplifies this approach, bringing together nine academic institutions through a Section 8 no…
S61
WS #119 AI for Multilingual Inclusion — Athanase Bahizire: Thank you so much. Very good question. Actually it’s very quick. You know what, these big AI models w…
S62
Open Forum #33 Building an International AI Cooperation Ecosystem — **Professor Dai Li Na** from the Shanghai Academy of Social Sciences presented a comprehensive case study of Shanghai’s …
S63
IGF 2025: Africa charts a sovereign path for AI governance — African leaders at the Internet Governance Forum (IGF) 2025 in Oslo called for urgent action to build sovereign and ethica…
S64
Discussion Report: Sovereign AI in Defence and National Security — Examples include the lack of transparency in ChatGPT’s training data and alignment process, with multibillion dollar law…
S65
[Parliamentary Session 6] Leading the digital transformation journey: Dialogue with youth leaders — Dansa Kourouma: Thank you, Honorable Moderator. For my part, I would like to start by thanking the Saudi authorities …
S66
Multistakeholder digital governance beyond 2025 — Language barriers were identified as a persistent challenge limiting participation. Ahmed Farag noted the complexity of …
S67
How can Artificial Intelligence (AI) improve digital accessibility for persons with disabilities? — In summary, the speaker underscored the need for a commitment to universal design in technological innovations, a cultur…
S68
Paris AI Action Summit shifts focus to innovation, employment, and public good in AI governance — The recent AI Action Summit in Paris marked a turning point in global AI governance, shifting the focus from long-term exi…
S69
India unveils MANAV Vision as new global pathway for ethical AI — Narendra Modi presented the new MANAV Vision during the India AI Impact Summit 2026 in New Delhi, setting out a human-cent…
S70
Partnering on American AI Exports Powering the Future India AI Impact Summit 2026 — And that complements Micron’s manufacturing plan in the U.S. Actually, as you look at our manufacturing plants in the U…
S71
AI Impact Summit 2026: Global Ministerial Discussions on Inclusive AI Development — And there was two examples during the gathering before. I want to give you, we have in the country the Center for Creati…
S72
Digital Embassies for Sovereign AI — Fasel highlighted Switzerland’s positioning, citing the country’s “neutrality, stability, data capabilities, and scienti…
S73
Opening remarks — Despite the fact that the principles and strategic path established at NetMundial in 2014 remain crucial, guiding curren…
S74
Leaders TalkX: Partnership Pivot: Innovating International Cooperation to Scale Digital Inclusion — Tripti Sinha:Thank you very much for the question. It’s a delight to be here. the panel. The fundamental power of the gl…
S75
Artificial intelligence (AI) – UN Security Council — Another critical area highlighted was the need forcreating inclusive platforms for global collaboration. This involves i…
S76
National Strategy for Artificial Intelligence — leverage our position as a nation with a digitally advanced population and business sector in order to take the lead in …
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
M
Markus Reubi
2 arguments, 122 words per minute, 932 words, 458 seconds
Argument 1
AI must serve all languages and cultures to ensure democratic participation (Markus Reubi)
EXPLANATION
Markus Reubi argues that artificial intelligence can only benefit the public good if it is inclusive of every language and culture. He frames linguistic exclusion as a persistent barrier to digital participation, making multilingual access a democratic imperative.
EVIDENCE
He states that AI can only serve the public good if it serves all languages and all cultures [4] and emphasizes that linguistic exclusion remains a persistent barrier, so ensuring multilingual access is not just a technical challenge but a democratic imperative [5].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need for AI to serve all languages and cultures as a democratic imperative is highlighted in [S1] and reinforced by inclusive AI discussions in [S11] and [S18].
MAJOR DISCUSSION POINT
AI must serve all languages and cultures to ensure democratic participation
AGREED WITH
Nina Frey, Petri Myllymäki, Aya Bedir, Alex Ilic, Participant, Annie Hartley, Amitabh Nag
Argument 2
Building multilingual models also requires expertise in high‑performance computing and talent development (Markus Reubi)
EXPLANATION
Reubi highlights that creating multilingual AI models demands specialized talent not only in model training but also in building and operating high‑performance computing infrastructure. He suggests that this expertise should be added to the discussion alongside model development.
EVIDENCE
He notes that talent is needed to know how to train a model and also to build supercomputing resources, pointing out the importance of HPC talent in addition to model expertise [220-222].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The importance of high-performance computing talent for multilingual models is discussed in [S19] and the need for shared compute infrastructure is noted in [S1].
MAJOR DISCUSSION POINT
Building multilingual models also requires expertise in high‑performance computing and talent development
AGREED WITH
Alex Ilic, Nina Frey
DISAGREED WITH
Aya Bedir, Alex Ilic
N
Nina Frey
1 argument, 125 words per minute, 827 words, 394 seconds
Argument 1
ICAAN network links institutions worldwide to promote language diversity in AI (Nina Frey)
EXPLANATION
Nina Frey describes ICAAN as a global network that connects academic partners from Europe, Africa, and Singapore, facilitating collaboration on language diversity in AI. She underscores the breadth of representation on the panel as evidence of this worldwide linkage.
EVIDENCE
She explains that ICAAN is a network linking academic partners from Europe, Africa, and Singapore [57] and notes the presence of many representatives from the network on the panel, including a board member from the Finnish Supercomputing Center [58].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
ICAAN’s role in linking global academic partners and sharing resources for multilingual AI is mentioned in [S1].
MAJOR DISCUSSION POINT
ICAAN network links institutions worldwide to promote language diversity in AI
AGREED WITH
Torsten Schwede, Markus Reubi, Alex Ilic, Aya Bedir, Amitabh Nag
T
Torsten Schwede
1 argument, 141 words per minute, 800 words, 338 seconds
Argument 1
Launch of three joint research calls (geosciences, social sciences, One Health) and a new Indo‑Swiss Research Framework Program to deepen bilateral science cooperation (Torsten Schwede)
EXPLANATION
Schwede announces three new Indo‑Swiss joint research calls covering geosciences, social sciences, and One Health, and introduces a longer‑term Indo‑Swiss Research Framework Program to support collaborative research across disciplines. He frames these initiatives as a milestone in strengthening bilateral scientific ties.
EVIDENCE
He announces three new joint research calls in geosciences, social sciences, and One Health [27-34] and presents the Indo-Swiss Research Framework Program as a strategic long-term collaboration, noting AI as a high-priority topic within it [41-44].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The announcement of three joint Indo-Swiss research calls and the new framework programme is documented in [S1].
MAJOR DISCUSSION POINT
Launch of three joint research calls (geosciences, social sciences, One Health) and a new Indo‑Swiss Research Framework Program to deepen bilateral science cooperation
AGREED WITH
Nina Frey, Markus Reubi, Alex Ilic, Aya Bedir, Amitabh Nag
A
Amitabh Nag
1 argument, 162 words per minute, 811 words, 300 seconds
Argument 1
Bhashini creates AI capabilities across 22 (now 36) Indian languages through grassroots data collection and delivers concrete applications such as farmer advisory and manuscript digitisation (Amitabh Nag)
EXPLANATION
Nag outlines the Bhashini initiative, which builds AI tools for 22 (expanding to 36) constitutionally recognized Indian languages by mobilising 70 research institutes for data collection. He cites practical use‑cases like a voice‑first farmer advisory service and the Gyan Bharatam manuscript digitisation project.
EVIDENCE
He explains that Bhashini targets 22 languages, covering ASR, text-to-text translation, TTS, OCR and a digital dictionary, with work coordinated across 70 research institutes [78-86]; the programme has already expanded to 36 languages and is adding tribal languages without scripts [89-95]; he describes the challenge of lacking digital data and the field-based data-collection effort that created monolingual and bilingual corpora [99-108]; finally, he gives examples of a farmer advisory voice interface and the Gyan Bharatam manuscript project as deployed applications [108].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Bhashini’s multilingual AI development, grassroots data collection, and applications like farmer advisory are described in [S1].
MAJOR DISCUSSION POINT
Bhashini creates AI capabilities across 22 (now 36) Indian languages through grassroots data collection and delivers concrete applications such as farmer advisory and manuscript digitisation
AGREED WITH
Markus Reubi, Nina Frey, Petri Myllymäki, Aya Bedir, Alex Ilic, Participant, Annie Hartley
A
Aya Bedir
1 argument, 161 words per minute, 1197 words, 445 seconds
Argument 1
Scaling AI must be community‑led, avoid exploitative data scraping, and prioritize cultural preservation and frugal, resource‑efficient solutions (Aya Bedir)
EXPLANATION
Bedir argues that the dominant big‑tech model of scaling AI through massive data scraping is problematic; instead, AI should be scaled through community‑led public‑private partnerships that respect cultural heritage and operate with limited resources. She stresses the need for frugal, resilient solutions especially for under‑resourced regions.
EVIDENCE
She critiques big-tech’s brute-force scaling and calls for fighting scale with scale, emphasizing community-led approaches [126-132]; she raises concerns about companies scraping data without licences and treating communities as mere data points [158-164]; she highlights the importance of frugal, scarcity-aware AI design for regions with limited resources [169-172].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Community-led, frugal AI and concerns about data scraping are addressed in [S23] and the need for culturally respectful AI is echoed in [S22].
MAJOR DISCUSSION POINT
Scaling AI must be community‑led, avoid exploitative data scraping, and prioritize cultural preservation and frugal, resource‑efficient solutions
AGREED WITH
Torsten Schwede, Nina Frey, Markus Reubi, Alex Ilic, Amitabh Nag
DISAGREED WITH
Alex Ilic, Torsten Schwede
A
Alex Ilic
1 argument, 188 words per minute, 770 words, 245 seconds
Argument 1
Apertus demonstrates the need for talent, compute resources, and community‑driven benchmarks; academia should lead the development of open multilingual models (Alex Ilic)
EXPLANATION
Ilic presents Apertus as an open, multilingual foundation model and points out bottlenecks in talent, high‑performance compute, and benchmark creation. He argues that academia, rather than only big‑tech, must be empowered to develop and share such models, and outlines plans to expand language coverage and assess cost‑effectiveness.
EVIDENCE
He identifies talent scarcity as a bottleneck, noting only about 100 people worldwide have the expertise to build foundation models [183-186]; he describes Apertus as an open model intended for community use [187-189]; he discusses current benchmark dominance by big-tech and the need for culturally relevant metrics, noting the training data is 60 % English and 40 % non-English [190-196]; he outlines strategic steps to increase language performance, evaluate cost, and share compute infrastructure through ICAAN collaborations [197-218].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Apertus as an open multilingual model, talent scarcity, compute needs, and benchmark challenges are detailed in [S1].
MAJOR DISCUSSION POINT
Apertus demonstrates the need for talent, compute resources, and community‑driven benchmarks; academia should lead the development of open multilingual models
AGREED WITH
Markus Reubi, Nina Frey
DISAGREED WITH
Annie Hartley
P
Petri Myllymäki
1 argument, 150 words per minute, 332 words, 132 seconds
Argument 1
Access to language and culture is a human right; AI initiatives must involve all nations, not just dominant players (Petri Myllymäki)
EXPLANATION
Myllymäki stresses that language and cultural access are fundamental human rights, referencing UN findings. He calls for inclusive AI initiatives that invite every country to the discussion table, warning against a one‑size‑fits‑all English‑centric approach.
EVIDENCE
He notes that Nordic languages are small but their preservation is taken seriously, and that access to language and culture is a human right according to UN insights [224-229]; he adds that the culture behind language matters, cites the need for inclusive global initiatives, and quotes Yoshua Bengio about inviting all countries as guests rather than menu items [230-238].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Access to language and culture as a human right and the call for inclusive AI initiatives are emphasized in [S1] and [S11].
MAJOR DISCUSSION POINT
Access to language and culture is a human right; AI initiatives must involve all nations, not just dominant players
AGREED WITH
Markus Reubi, Nina Frey, Aya Bedir, Alex Ilic, Participant, Annie Hartley, Amitabh Nag
P
Participant
1 argument, 158 words per minute, 797 words, 301 seconds
Argument 1
The C‑Line model shows how resource‑efficient multilingual AI can respect national sovereignty, handle code‑switching, and serve diverse Southeast Asian societies (Participant)
EXPLANATION
The participant describes the C‑Line model, a multilingual system covering 13 Southeast Asian languages, built with limited data and designed to respect national sovereignty. He links the model to frugal AI, code‑switching realities, and the broader goal of multipolar, multicolored digital sovereignty.
EVIDENCE
He explains that C-Line reflects 13 Southeast Asian languages, is nationally funded, and built in synergy with Apertus, emphasizing cultural and resource-efficient design [250-263]; he then discusses sovereignty as power for societies and individuals, the need to accommodate code-switching and dialects, and the multipolar context of AI governance [265-281].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The C-Line model’s resource-efficient multilingual design and sovereignty focus are presented in [S1].
MAJOR DISCUSSION POINT
The C‑Line model shows how resource‑efficient multilingual AI can respect national sovereignty, handle code‑switching, and serve diverse Southeast Asian societies
AGREED WITH
Markus Reubi, Nina Frey, Petri Myllymäki, Aya Bedir, Alex Ilic, Annie Hartley, Amitabh Nag
A
Annie Hartley
1 argument, 185 words per minute, 1419 words, 459 seconds
Argument 1
Multilingual AI must be rigorously tested in critical health contexts; neutral, open‑science platforms like the MOVE project provide real‑world validation and feedback loops (Annie Hartley)
EXPLANATION
Hartley illustrates the risks of deploying multilingual AI in high‑stakes medical settings, citing a mis‑advice case in Ethiopia. She advocates for neutral, open‑science validation through the MOVE project, which gathers real‑world feedback from clinicians to continuously improve models.
EVIDENCE
She recounts testing an AI model on diabetic ketoacidosis in Ethiopia, where the model gave an incorrect, Bible-derived recommendation [288-295]; she describes leading the LIGHTS lab focused on high-stakes environments and the need for rigorous testing [300-311]; she emphasizes cultural nuances affecting medical advice [322-327]; and she introduces the MOVE (Massive Open Online Validation and Evaluation) project that collects real-world signals to validate and iteratively improve models [328-335].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The MOVE project for real-world validation of multilingual AI in health contexts is described in [S1].
MAJOR DISCUSSION POINT
Multilingual AI must be rigorously tested in critical health contexts; neutral, open‑science platforms like the MOVE project provide real‑world validation and feedback loops
AGREED WITH
Amitabh Nag, Participant, Alex Ilic
DISAGREED WITH
Alex Ilic
Agreements
Agreement Points
Multilingual AI is essential for democratic participation, human rights, and inclusive digital development
Speakers: Markus Reubi, Nina Frey, Petri Myllymäki, Aya Bedir, Alex Ilic, Participant, Annie Hartley, Amitabh Nag
AI must serve all languages and cultures to ensure democratic participation (Markus Reubi)
ICAAN network links institutions worldwide to promote language diversity in AI (Nina Frey)
Access to language and culture is a human right; AI initiatives must involve all nations, not just dominant players (Petri Myllymäki)
Scaling AI must be community‑led, avoid exploitative data scraping, and prioritize cultural preservation and frugal, resource‑efficient solutions (Aya Bedir)
Apertus demonstrates the need for talent, compute resources, and community‑driven benchmarks; academia should lead the development of open multilingual models (Alex Ilic)
The C‑Line model shows how resource‑efficient multilingual AI can respect national sovereignty, handle code‑switching, and serve diverse Southeast Asian societies (Participant)
Multilingual AI must be rigorously tested in critical health contexts; neutral, open‑science platforms like the MOVE project provide real‑world validation and feedback loops (Annie Hartley)
Bhashini creates AI capabilities across 22 (now 36) Indian languages through grassroots data collection and delivers concrete applications such as farmer advisory and manuscript digitisation (Amitabh Nag)
All speakers stress that AI must support every language and culture, framing linguistic inclusion as a democratic imperative, a human right, and a prerequisite for equitable digital participation and culturally appropriate services. They cite initiatives ranging from ICAAN’s global network and the Apertus open model to the C-Line regional system, Bhashini’s Indian-language platform, and health-sector validation, underscoring a shared belief that multilingual AI is foundational for inclusive societies. [4-5][57-58][224-229][126-132][158-164][190-196][250-263][288-295][78-86][89-95]
POLICY CONTEXT (KNOWLEDGE BASE)
This view aligns with UNESCO’s multilingualism governance framework and reflects calls for inclusive digital development in AI policy discussions such as AI for Development panels [S44][S29].
Building multilingual models requires specialised talent, high‑performance computing, and shared infrastructure
Speakers: Markus Reubi, Alex Ilic, Nina Frey
Building multilingual models also requires expertise in high‑performance computing and talent development (Markus Reubi)
Apertus demonstrates the need for talent, compute resources, and community‑driven benchmarks; academia should lead the development of open multilingual models (Alex Ilic)
ICAAN network links institutions worldwide to promote language diversity in AI (Nina Frey)
Reubi and Ilic both highlight the scarcity of expertise needed to train foundation models and the parallel need for supercomputing resources, while Frey points to ICAAN’s role in linking compute infrastructure across institutions, indicating consensus that talent and HPC are critical bottlenecks that must be addressed collaboratively. [220-222][183-186][197-218][57-58]
POLICY CONTEXT (KNOWLEDGE BASE)
The need for specialised talent and high-performance compute is highlighted in reports on infrastructure gaps and sustainability concerns, e.g., the Green AI debate and the identified lack of compute resources in the Global South [S45][S42].
Collaborative, multistakeholder frameworks and funding mechanisms are essential to advance multilingual AI
Speakers: Torsten Schwede, Nina Frey, Markus Reubi, Alex Ilic, Aya Bedir, Amitabh Nag
Launch of three joint research calls (geosciences, social sciences, One Health) and a new Indo‑Swiss Research Framework Program to deepen bilateral science cooperation (Torsten Schwede)
ICAAN network links institutions worldwide to promote language diversity in AI (Nina Frey)
Announcement of three new joint calls under the Indo‑Swiss Joint Research Programme and upcoming Geneva AI Summit (Markus Reubi)
We have a very strong international program where we share basically our compute infrastructure (Alex Ilic)
Scaling AI must be community‑led, avoid exploitative data scraping, and prioritize cultural preservation and frugal, resource‑efficient solutions (Aya Bedir)
Bhashini’s grassroots data‑collection effort involving 70 research institutes and real‑world applications (Amitabh Nag)
All listed speakers converge on the need for joint research programmes, public-private partnerships, and multistakeholder networks (ICAAN, Indo-Swiss framework, Bhashini consortium) to fund, coordinate, and share resources for multilingual AI development. This reflects a shared belief that coordinated financing and governance structures are vital for progress. [27-34][41-44][57-58][17-19][22][211-218][130-133][81-86][99-108]
POLICY CONTEXT (KNOWLEDGE BASE)
Multistakeholder frameworks and joint funding are advocated in multiple policy fora, including IGF multistakeholder cooperation recommendations and the AI Innovation Responsible Development agenda [S32][S49][S35].
Real‑world validation and culturally appropriate testing are critical before AI deployment
Speakers: Annie Hartley, Amitabh Nag, Participant, Alex Ilic
Multilingual AI must be rigorously tested in critical health contexts; neutral, open‑science platforms like the MOVE project provide real‑world validation and feedback loops (Annie Hartley)
Ground‑up data collection and pilot applications (farmer advisory, manuscript digitisation) demonstrate practical deployment (Amitabh Nag)
Discussion of code‑switching, sovereignty, and real‑world usage in Southeast Asian societies (Participant)
Need for culturally relevant benchmarks and evaluation of language performance (Alex Ilic)
Speakers agree that AI models should be evaluated in actual use cases (health, agriculture, regional contexts) and that community-driven validation (e.g., MOVE) and culturally relevant benchmarks are necessary to ensure safety, relevance, and trust. [288-295][300-311][328-335][99-108][265-281][190-196][197-200]
POLICY CONTEXT (KNOWLEDGE BASE)
Real-world validation and culturally appropriate testing are emphasized in data-governance guidelines and testing-trust recommendations, noting the shortcomings of current testing paradigms across diverse contexts [S29][S50][S51].
Similar Viewpoints
Both stress that scarcity of specialised talent and access to high‑performance computing are major bottlenecks for multilingual AI, and that academia must be empowered to address them. [220-222][183-186][197-218]
Speakers: Markus Reubi, Alex Ilic
Building multilingual models also requires expertise in high‑performance computing and talent development (Markus Reubi)
Apertus demonstrates the need for talent, compute resources, and community‑driven benchmarks; academia should lead the development of open multilingual models (Alex Ilic)
Both frame language and cultural access as a fundamental human right and argue for inclusive, community‑driven AI development that resists top‑down, exploitative practices. [126-132][158-164][224-229]
Speakers: Aya Bedir, Petri Myllymäki
Scaling AI must be community‑led, avoid exploitative data scraping, and prioritize cultural preservation and frugal, resource‑efficient solutions (Aya Bedir)
Access to language and culture is a human right; AI initiatives must involve all nations, not just dominant players (Petri Myllymäki)
Both highlight ICAIN’s role in connecting global partners and sharing compute resources to enable multilingual AI research. [57-58][211-218]
Speakers: Nina Frey, Alex Ilic
The ICAIN network links institutions worldwide to promote language diversity in AI (Nina Frey)
We have a very strong international program where we share basically our compute infrastructure (Alex Ilic)
Both present concrete, region‑specific multilingual AI systems that are built with limited resources and aim to serve local communities while respecting cultural and sovereign contexts. [99-108][250-263]
Speakers: Amitabh Nag, Participant
Bhashini creates AI capabilities across 22 (now 36) Indian languages and delivers practical applications (Amitabh Nag)
The SEA‑LION model shows resource‑efficient multilingual AI respecting sovereignty and handling code‑switching (Participant)
Both announce and endorse the Indo‑Swiss joint research initiatives as a mechanism to deepen bilateral scientific cooperation in AI and related fields. [27-34][41-44][17-19][22]
Speakers: Torsten Schwede, Markus Reubi
Launch of three joint research calls (geosciences, social sciences, One Health) and a new Indo‑Swiss Research Framework Program (Torsten Schwede)
Announcement of three new joint calls under the Indo‑Swiss Joint Research Programme and upcoming Geneva AI Summit (Markus Reubi)
Unexpected Consensus
Public‑private partnership and government‑led funding are both seen as essential to scale multilingual AI responsibly
Speakers: Aya Bedir, Torsten Schwede
Scaling AI must be community‑led, avoid exploitative data scraping, and prioritize cultural preservation and frugal, resource‑efficient solutions (Aya Bedir)
Launch of three joint research calls (geosciences, social sciences, One Health) and a new Indo‑Swiss Research Framework Program (Torsten Schwede)
Despite coming from different sectors (Aya representing a public-private initiative and Torsten a government-funded research programme), they converge on the view that collaborative funding models combining public resources and private expertise are necessary to advance multilingual AI, a convergence not explicitly anticipated earlier in the discussion. [130-133][41-44]
POLICY CONTEXT (KNOWLEDGE BASE)
Public-private partnerships and government-led funding are repeatedly cited as essential for scaling multilingual AI, as seen in Global South partnership models and private-sector collaborations like the Gates Foundation initiative [S36][S38][S37].
Overall Assessment

The discussion reveals a strong, cross‑regional consensus that multilingual AI is a democratic and human‑rights imperative, that talent and compute resources are critical bottlenecks, that collaborative funding and multistakeholder networks are essential, and that real‑world, culturally aware validation must precede deployment.

High consensus across technical, ethical, and policy dimensions, indicating a unified momentum toward coordinated, inclusive, and responsibly funded multilingual AI initiatives.

Differences
Different Viewpoints
How to scale multilingual AI – community‑led frugal approaches versus high‑performance computing and talent‑intensive approaches
Speakers: Aya Bedir, Markus Reubi, Alex Ilic
Scaling AI must be community‑led, avoid exploitative data scraping, and prioritize cultural preservation and frugal, resource‑efficient solutions (Aya Bedir)
Building multilingual models also requires expertise in high‑performance computing and talent development (Markus Reubi)
Apertus demonstrates the need for talent, compute resources, and community‑driven benchmarks; academia should lead the development of open multilingual models (Alex Ilic)
Aya argues that AI should be scaled through community-led public-private partnerships that avoid big-tech data-scraping and use frugal solutions, while Markus stresses that multilingual model development needs specialized talent for both model training and supercomputing infrastructure, and Alex highlights the scarcity of talent and compute and calls for academia-driven open models, showing a clash of preferred scaling strategies [126-164][220-222][183-186].
POLICY CONTEXT (KNOWLEDGE BASE)
The tension between community-led frugal scaling and resource-intensive high-performance approaches mirrors debates in the Multilingual AI Bridge report and Green AI literature, with examples of low-resource strategies and efficient resource use in China [S32][S45][S46][S33].
Role of big‑tech versus academia/public sector in driving multilingual AI development
Speakers: Aya Bedir, Alex Ilic, Torsten Schwede
Scaling AI must be community‑led, avoid exploitative data scraping, and prioritize cultural preservation and frugal, resource‑efficient solutions (Aya Bedir)
Apertus demonstrates the need for talent, compute resources, and community‑driven benchmarks; academia should lead the development of open multilingual models (Alex Ilic)
Launch of three joint research calls (geosciences, social sciences, One Health) and a new Indo‑Swiss Research Framework Program to deepen bilateral science cooperation, with AI as a high‑priority topic (Torsten Schwede)
Aya warns against big-tech’s brute-force data-scraping and promotes community-led PPPs, Alex calls for academia to take the lead while noting big-tech dominance in benchmarks, and Torsten focuses on government-funded research programmes that do not address big-tech practices, revealing differing views on who should steer AI development [126-164][183-186][27-34][41-44].
POLICY CONTEXT (KNOWLEDGE BASE)
The role of big-tech versus academia/public sector is a recurring point of disagreement in multistakeholder discussions, with calls for balanced contributions from industry, academia, and governments [S32][S41][S48].
Approach to validation of multilingual AI in high‑stakes health contexts
Speakers: Annie Hartley, Alex Ilic
Multilingual AI must be rigorously tested in critical health contexts; neutral, open‑science platforms like the MOVE project provide real‑world validation and feedback loops (Annie Hartley)
Apertus demonstrates the need for talent, compute resources, and community‑driven benchmarks; academia should lead the development of open multilingual models (Alex Ilic)
Annie stresses the necessity of real-world, high-stakes validation of AI models in healthcare, while Alex focuses on model development, talent, and benchmark creation without explicit emphasis on health-specific validation, indicating a methodological divergence [288-295][183-186].
POLICY CONTEXT (KNOWLEDGE BASE)
Validation in high-stakes health contexts is governed by European data-governance frameworks that balance GDPR with medical research needs, underscoring the need for rigorous testing standards [S28][S50].
Unexpected Differences
Resource‑intensive high‑performance computing versus frugal, low‑resource AI scaling
Speakers: Markus Reubi, Aya Bedir
Building multilingual models also requires expertise in high‑performance computing and talent development (Markus Reubi)
Scaling AI must be community‑led, avoid exploitative data scraping, and prioritize cultural preservation and frugal, resource‑efficient solutions (Aya Bedir)
It is unexpected that a speaker from a high-resource nation (Switzerland) emphasizes the need for supercomputing talent, while another speaker advocates for low-resource, community-driven approaches, revealing a tension between resource-rich and resource-constrained visions for multilingual AI (see [220-222][126-164]).
POLICY CONTEXT (KNOWLEDGE BASE)
Resource-intensive versus frugal AI scaling is addressed in sustainability debates, highlighting the environmental impact of large models and the emergence of efficient training methods [S45][S46][S33].
Data ownership and ethical collection methods
Speakers: Aya Bedir, Amitabh Nag
Scaling AI must be community‑led and avoid treating individuals and communities as data (Aya Bedir)
We built the monolingual and bilingual corpora by field workers collecting data from people; this was the first brute‑force digital data collection effort (Amitabh Nag)
While both aim to gather data for multilingual AI, Aya warns against treating communities merely as data sources, whereas Amitabh describes a large-scale field data collection that could be perceived as treating people as data points, highlighting an unexpected ethical tension (see [158-164][99-105]).
POLICY CONTEXT (KNOWLEDGE BASE)
Data ownership and ethical collection are central to AI governance, with EU GDPR considerations, ethical data-collection principles, and calls to document data provenance in AI systems [S28][S29][S30][S31].
Overall Assessment

The discussion shows strong consensus on the importance of multilingual AI for democratic participation, cultural preservation, and human rights. However, speakers diverge sharply on the means to achieve this—ranging from high‑performance, talent‑intensive, and compute‑heavy strategies to frugal, community‑led, and ethically cautious approaches. Additional disagreements concern the role of big‑tech versus academia/public sector and the necessity of rigorous health‑sector validation.

Moderate to high methodological disagreement. While goals are aligned, the contrasting visions on scaling, resource allocation, and ethical data practices could impede coordinated action unless a hybrid framework is adopted that balances high‑tech capabilities with community‑driven, frugal solutions.

Partial Agreements
All speakers agree that multilingual AI is essential for democratic participation, cultural preservation, and human rights, but they diverge on implementation pathways—field data collection (Amitabh), community‑led PPPs (Aya), open‑source academic models (Alex), resource‑efficient national models (Participant), and high‑performance compute (Markus) (see [4][5][78-86][126-164][250-263][224-229]).
Speakers: Markus Reubi, Amitabh Nag, Aya Bedir, Participant, Alex Ilic, Petri Myllymäki
AI must serve all languages and cultures to ensure democratic participation (Markus Reubi)
Bhashini creates AI capabilities across 22 (now 36) Indian languages through grassroots data collection and delivers concrete applications such as farmer advisory and manuscript digitisation (Amitabh Nag)
Scaling AI must be community‑led, avoid exploitative data scraping, and prioritize cultural preservation and frugal, resource‑efficient solutions (Aya Bedir)
The SEA‑LION model shows how resource‑efficient multilingual AI can respect national sovereignty, handle code‑switching, and serve diverse Southeast Asian societies (Participant)
Apertus demonstrates the need for talent, compute resources, and community‑driven benchmarks; academia should lead the development of open multilingual models (Alex Ilic)
Access to language and culture is a human right; AI initiatives must involve all nations, not just dominant players (Petri Myllymäki)
Both speakers support strengthening bilateral cooperation and continuous collaboration, but Torsten focuses on discipline‑specific research funding, whereas Markus emphasizes a broader, globally inclusive AI governance agenda (see [27-34][41-44][8][9]).
Speakers: Torsten Schwede, Markus Reubi
Launch of three joint research calls (geosciences, social sciences, One Health) and a new Indo‑Swiss Research Framework Program to deepen bilateral science cooperation (Torsten Schwede)
Our shared objective is continuity, cooperation and a genuinely global approach to AI governance (Markus Reubi)
Takeaways
Key takeaways
Multilingual AI is framed as a democratic imperative; AI must serve all languages and cultures to ensure inclusive participation (Markus Reubi, Petri Myllymäki).
The Indo‑Swiss partnership is deepening with three new joint research calls (geosciences, social sciences, One Health) and the launch of an Indo‑Swiss Research Framework Program, plus new Explore/Experiment/Expand grants (Torsten Schwede).
India’s Bhashini initiative demonstrates a large‑scale, grassroots effort to create speech, translation, text‑to‑speech, OCR and lexical resources across 22 (now 36) languages, delivering concrete services such as farmer advisory and manuscript digitisation (Amitabh Nag).
Current AI exemplifies a public‑private partnership that aims to scale responsibly, emphasizing cultural preservation, community‑led data collection, and frugal, resource‑efficient solutions while warning against exploitative data scraping (Aya Bedir).
Open, academic‑driven multilingual foundation models like Apertus are needed; challenges include limited talent, compute, and community‑defined benchmarks (Alex Ilic, Markus Reubi).
Language access is a human right; global AI initiatives must involve all nations, not just dominant tech players (Petri Myllymäki).
Regional models such as Singapore’s SEA‑LION illustrate how multilingual AI can respect national sovereignty, handle code‑switching, and be built with limited data resources (Participant).
High‑stakes applications, especially in health, require rigorous real‑world validation; neutral open‑science platforms like the MOVE project provide feedback loops to ensure safety and cultural relevance (Annie Hartley).
Future milestones include the Geneva AI Summit 2027 and continued collaboration among ICAIN, Swiss institutions (ETH, EPFL), Indian partners, and other global stakeholders.
Resolutions and action items
Launch of three Indo‑Swiss joint research calls (geosciences, social sciences, One Health).
Establishment of the Indo‑Swiss Research Framework Program for ongoing bilateral collaboration.
Introduction of Explore, Experiment, and Expand grant schemes to foster new and existing collaborations.
Increase of mobility funding for researchers within Indo‑Swiss projects.
Announcement of upcoming flagship events in Switzerland and India to maintain network engagement.
Deployment of Bhashini’s multilingual AI services (farmer advisory, Gyan Bharatam manuscript platform).
Current AI to unveil a collaborative device (scheduled for 15:30, Room 10) with Bhashini.
Commitment to develop and share the open multilingual foundation model Apertus, including benchmarks and compute resources.
Plan to expand Apertus to additional languages and improve performance beyond the English baseline.
MOVE project to collect real‑world validation data from high‑stakes medical settings and feed it back into model improvement.
Participants encouraged to follow websites and social media for updates and to submit proposals to the new calls.
Unresolved issues
Sustainable financing and cost‑effectiveness of scaling multilingual data collection beyond pilot languages.
How to ensure community‑led data gathering without resorting to large‑scale, unlicensed scraping by big tech firms.
Methods for achieving high model performance with limited data (frugal AI) across low‑resource languages.
Standardisation of benchmarks that reflect cultural and contextual relevance rather than generic English‑centric metrics.
Balancing national sovereignty with the need for interoperable, globally useful multilingual models.
Technical solutions for handling code‑switching and dialectal variation in real‑time applications.
Long‑term governance structure for open‑source multilingual models and the role of academia versus industry.
Mechanisms to continuously validate and monitor AI safety in high‑stakes domains such as healthcare.
Suggested compromises
Adopt a public‑private partnership model (Current AI) that combines philanthropic, governmental, and industry resources to achieve scale while maintaining community control.
Use the Explore/Experiment/Expand grant framework to allow both blue‑sky, high‑risk projects and incremental expansion of proven collaborations.
Leverage existing open models (Apertus) as a shared foundation, enabling regional partners to fine‑tune for local languages without rebuilding from scratch.
Implement frugal AI approaches that maximise impact with minimal data and compute, addressing resource constraints of low‑income regions.
Encourage neutral, open‑science validation platforms (MOVE) to provide real‑world feedback, balancing rapid deployment with safety and cultural accuracy.
Thought Provoking Comments
AI can only serve the public good if it serves all languages and all cultures. Linguistic inclusion is a democratic imperative.
Frames multilingual AI not just as a technical challenge but as a fundamental democratic right, setting the ethical baseline for the whole discussion.
Established the overarching theme of the summit, prompting subsequent speakers to justify their projects in terms of inclusion and democratic access rather than pure innovation.
Speaker: Markus Reubi
One Health reflects a holistic approach needed to understand the interconnected health of humans, animals, and the environment. The challenges we face know no borders, and international collaboration is indispensable.
Introduces a concrete, cross‑disciplinary research area where AI can have global impact, linking health, ecology, and data sharing.
Shifted the conversation from abstract policy to a tangible research agenda, leading participants to consider how multilingual AI can support such interdisciplinary work.
Speaker: Torsten Schwede
We had about 200‑odd people go into the field, collect speech, pictures, and create monolingual or bilingual corpora – a brute‑force data collection because digital data simply didn’t exist.
Highlights the practical difficulty of building language resources from scratch and the innovative grassroots methodology used to overcome data scarcity.
Prompted recognition of the labor‑intensive nature of multilingual AI, influencing later remarks about the need for community‑driven data and validation.
Speaker: Amitabh Nag
Big tech often treats individuals and communities as data, a condescending approach. We must get as close as possible to the communities themselves and support them to preserve their own cultures and languages.
Challenges the prevailing corporate model of data collection, raising ethical concerns about agency, consent, and cultural respect.
Created a turning point toward ethical scrutiny; subsequent speakers (e.g., Alex Ilic, Annie Hartley) emphasized community involvement, open‑source, and neutral validation.
Speaker: Aya Bedir
Only about a hundred people worldwide have the expertise to build foundation models. Academia must empower talent, not just provide compute.
Identifies a critical talent bottleneck and calls for academic empowerment, expanding the discussion beyond infrastructure to human capital.
Deepened the analysis of barriers to multilingual AI, leading to calls for shared benchmarks, talent development, and collaborative compute resources.
Speaker: Alex Ilic
Access to language and culture is a human right. We must invite all countries to the dinner table as guests, not just as part of the menu.
Elevates multilingual AI to a rights‑based framework, reinforcing inclusion as a moral imperative rather than a technical optionality.
Reinforced the democratic framing introduced by Markus, and inspired later remarks on sovereignty and code‑switching from the NTU participant.
Speaker: Petri Myllymäki
Sovereignty is about power for societies and individuals; we need nuanced models that handle code‑switching and dialects, not just monolithic national languages.
Introduces the complex notion of linguistic sovereignty and the technical challenge of modeling fluid, multilingual realities.
Shifted the tone toward technical nuance and political context, prompting Markus to segue into medical implications and broader governance concerns.
Speaker: Participant (NTU Singapore)
When an AI model told me ‘thou shalt not eat insulin on a Tuesday’ in a low‑resource language, it showed how dangerous inaccurate models are in high‑stakes medical settings. We need real‑world validation (MOVE) and neutral, open‑science approaches.
Provides a vivid, concrete example of AI failure in a critical domain, underscoring the necessity of validation, neutrality, and community‑driven data.
Served as a climax of the discussion, moving the conversation from policy and research calls to immediate real‑world risk, reinforcing earlier ethical concerns and prompting the final wrap‑up.
Speaker: Annie Hartley
Overall Assessment

The discussion was shaped by a series of escalating insights that moved from a high‑level democratic framing of multilingual AI to concrete ethical, technical, and societal challenges. Markus Reubi’s opening set the tone of inclusion as a right; Aya Bedir’s critique of big‑tech data practices and Annie Hartley’s medical failure story acted as turning points that deepened the ethical dimension. Contributions from Amitabh Nag, Alex Ilic, and the NTU participant highlighted practical data‑scarcity solutions, talent bottlenecks, and sovereignty complexities, while Petri Myllymäki’s human‑rights articulation reinforced the moral urgency. Together, these comments redirected the conversation from abstract announcements to a nuanced, rights‑based, and implementation‑focused dialogue, culminating in a consensus that collaborative, community‑centered, and validated multilingual AI is essential for equitable global impact.

Follow-up Questions
How can we increase the number of languages with performance comparable to English in multilingual models?
Improving language coverage is essential for truly inclusive AI that serves all linguistic communities.
Speaker: Alex Ilic
What is the cost and resource requirements to raise language performance significantly for low‑resource languages?
Understanding financial and computational needs is crucial for planning scalable multilingual initiatives.
Speaker: Alex Ilic
How can benchmarks be designed to reflect cultural and regional needs rather than corporate metrics?
Culturally relevant evaluation criteria ensure models are useful and fair for diverse societies.
Speaker: Alex Ilic
How can we ensure ethical data collection that respects communities, avoiding treating them as mere data sources?
Ethical sourcing protects individual rights and builds trust with language‑speaking communities.
Speaker: Aya Bedir
How can AI be developed to operate effectively under scarcity and frugality?
Frugal AI enables deployment in low‑resource settings, expanding benefits beyond wealthy regions.
Speaker: Aya Bedir
How can communities be directly involved in preserving their own languages and cultures, avoiding top‑down approaches?
Community‑led preservation ensures authenticity and empowerment rather than paternalistic interventions.
Speaker: Aya Bedir
How can multilingual models be evaluated and validated in high‑stakes medical contexts to ensure safety and accuracy?
Medical applications demand rigorous testing to prevent harmful errors in patient care.
Speaker: Annie Hartley
What methods can be used to collect and validate real‑world usage data (e.g., via the MOVE project) to continuously improve models?
A feedback loop from actual deployments helps refine models and measure real impact.
Speaker: Annie Hartley
How should AI systems handle code‑switching and dialectal variation in multilingual societies?
Real‑world language use often mixes languages; models must reflect this to be usable.
Speaker: Participant from Singapore (NTU)
How does sovereignty intersect with multilingual AI governance and individual control over AI tools?
Understanding sovereignty issues is key for equitable power distribution and respecting national and individual autonomy.
Speaker: Participant from Singapore (NTU)
What strategies can increase the talent pool for building foundation models beyond big‑tech companies?
Addressing talent shortages is necessary for broader academic and regional participation in AI development.
Speaker: Alex Ilic, Markus Reubi
How can academia acquire compute resources comparable to industry for training large models?
Access to high‑performance computing is a bottleneck for university‑led AI research.
Speaker: Alex Ilic
How can the Bhashini initiative scale to cover all 100+ Indian languages, especially those without scripts?
Extending coverage ensures linguistic inclusion for the full diversity of India’s population.
Speaker: Amitabh Nag
What effective methods exist for data collection in low‑resource languages (e.g., field work, community engagement)?
Robust data pipelines are foundational for building accurate multilingual models.
Speaker: Amitabh Nag
How can the impact of multilingual AI on agricultural advisory for farmers be measured and improved?
Demonstrating tangible benefits validates the utility of language technologies in livelihoods.
Speaker: Amitabh Nag
How can long‑term Indo‑Swiss AI collaborations be structured and funded, especially for AI research topics?
Sustained bilateral programs are needed to maintain momentum and address shared challenges.
Speaker: Torsten Schwede
How can public‑private partnership models balance public‑interest goals with private‑sector scale?
Effective PPPs can marshal resources while safeguarding societal values.
Speaker: Aya Bedir
How can we ensure that multilingual AI initiatives move beyond symbolic check‑marks to substantive improvements?
Measuring real performance gains prevents tokenism and drives meaningful progress.
Speaker: Alex Ilic
How can open‑source multilingual models like Apertus be made economically usable for diverse stakeholders?
Affordability and accessibility are vital for widespread adoption across academia and industry.
Speaker: Alex Ilic
How can AI incorporate broader cultural preservation (behaviors, artifacts, norms) beyond just language?
A holistic approach respects the full spectrum of cultural heritage.
Speaker: Aya Bedir
Why does Current AI focus on hardware as a strategic priority?
Understanding the emphasis on hardware clarifies the initiative’s infrastructure roadmap.
Speaker: Aya Bedir
What experiences does Alex Ilic have with Apertus and how can Indian languages be incorporated in the next year?
Sharing practical insights will guide future collaborative extensions of the model.
Speaker: Nina Frey
What are the Nordic recommendations for multilingual AI and why were they made?
Nordic policy perspectives can inform global governance and language‑preservation strategies.
Speaker: Petri Myllymäki
How does sovereignty relate to language models and AI governance?
Exploring sovereignty helps address power dynamics and control over AI technologies.
Speaker: Participant from Singapore (NTU)
How can multilingual AI be applied in high‑stakes medical scenarios, and what is Annie Hartley’s role in ICAIN?
Linking medical use‑cases with governance roles clarifies pathways for responsible deployment.
Speaker: Markus Reubi

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.