Smaller Footprint Bigger Impact Building Sustainable AI for the Future
20 Feb 2026 18:00h - 19:00h
Summary
The event opened with an introduction and a keynote by France’s Minister Delegate for AI and Digitalisation, Anne Le Henanf, who framed sustainable AI as an urgent global priority [1][5-10]. She described the Sustainable AI Coalition’s growth to a network reaching over 220 million people, its three-pillar strategy of research, measurement and action, and announced the Resilient AI Challenge as a concrete step toward energy-efficient models [16-23][24-27][28-30][31-33].
Dr. Tafik Delassie emphasized that the energy and resource footprint of large generative models threatens low-income regions, argued that the next breakthrough must be leaner, resilient systems, and officially launched the Resilient AI Challenge to move from principle to practice [40-46][51-55][60-62][69-74]. Moderator Anne Bouvreau then invited panelists, and Ambassador Philip Tigo explained Kenya’s 95 % renewable energy mix, the need for green-by-design AI use, and highlighted the role of international standards in governing AI’s environmental impact [104-112][119-120].
James Manyika outlined Google’s Gemini family, which uses mixture-of-experts architectures and aims for carbon-free data centres by 2035, illustrating how performance and efficiency can be pursued together [131-144][150-158]. Arthur Mensch added that sparse-expert models, open-source releases, localisation of training to low-carbon grids, and diverse low-power chips dramatically reduce AI’s carbon intensity, and he called for public-procurement policies to accelerate these gains [167-176][182-190][192-197][252-262].
Abhishek Singh described India’s focus on inference efficiency, off-grid and modular reactor solutions, and policy measures that open small-model development to the private sector, arguing that sustainable AI is essential for scaling public-sector services cost-effectively [216-224][226-236][238-242][310-319]. The panel agreed that AI can support grid management, agriculture and material science, turning the technology into a climate-mitigation tool [202-208], and that governments can further progress by incentivising open-source research, setting procurement criteria, and investing in renewable off-grid power for AI compute [252-262][267-270].
Both speakers and panelists highlighted that model compression and task-specific architectures can cut AI’s energy use by up to 90 % without harming performance [65-66]. They concluded that coordinated international action, standards, and initiatives such as the Resilient AI Challenge are essential to embed resilience, fairness and sustainability into the future of AI [31-33][328-334].
Keypoints
Major discussion points
– Sustainable and resilient AI as an urgent global imperative – The speakers framed AI’s future around energy efficiency, environmental limits, and fairness, warning that AI’s energy needs are outpacing green-energy progress and that large models risk widening global divides [7-15].
– Coordinated actions and standards to drive “green” AI – France’s Sustainable AI Coalition is scaling up research, publishing a second-generation global standard for AI environmental sustainability, and launching the Resilient AI Challenge to move from principle to practice [24-27][69-74].
– Industry-led technical approaches to reduce AI’s carbon footprint – Google’s James Manyika described the Gemini family, mixture-of-experts architectures, and a 24/7 carbon-free compute goal, while Mistral’s Arthur Mensch highlighted sparse-expert models, caching, open-source model release, locality-based data-center choices, and energy-efficient chips as key levers [133-158][168-194].
– Policy levers and government involvement – Kenya’s ambassador emphasized a “green-by-design” energy mix, education on responsible AI use, and participation in international standards [107-119]; other speakers called for public-procurement criteria, incentives for off-grid renewable power, and clear environmental standards to guide AI deployment [252-262][267-272].
– Collaboration across sectors as the path forward – Throughout the session, participants from UNESCO, France, India, Kenya, and leading AI firms stressed that multi-stakeholder cooperation, open-source sharing, and joint research are essential to achieve inclusive, low-impact AI [45-48][70-74][285-291].
Overall purpose / goal of the discussion
The session was convened to mobilise governments, international organisations, and the AI industry around the development and deployment of sustainable, resilient AI that can meet climate-related targets while remaining inclusive. It aimed to showcase concrete initiatives (standardisation, research funding, the Resilient AI Challenge) and to solicit concrete commitments from policymakers and companies to embed energy-efficiency and fairness into AI practice.
Overall tone
The conversation began with a formal, diplomatic tone (opening remarks by the French minister) that quickly shifted to a technical and solution-focused dialogue as industry leaders detailed model-level innovations. Mid-session the tone became collaborative and optimistic, highlighting shared commitments and concrete actions. The closing returned to a hopeful and rallying tone, urging participants to join the challenge and reinforcing that environmental stewardship is now a competitive advantage for AI stakeholders.
Speakers
– Dr. Tafik Delassie – Area of expertise: UNESCO communications, technology sector, AI policy and sustainability; Role/Title: Assistant Director General for Communication and Technology Sector, UNESCO [S1].
– Anne Le Henanf – Area of expertise: AI policy, digitalisation, sustainable AI; Role/Title: Minister Delegate for AI and Digitalisation Affairs, France.
– Ambassador Philip Tigo – Area of expertise: Technology policy, AI for development in Africa; Role/Title: Ambassador and Special Technology Envoy for Kenya [S7].
– James Manyika – Area of expertise: AI research, large-scale models, sustainability, cloud infrastructure; Role/Title: Senior Vice President, Google-Alphabet (Alphabet Inc.) [S10].
– Arthur Mensch – Area of expertise: AI model development, efficient architectures, open-source AI; Role/Title: Co-founder and Chief Executive Officer, Mistral AI [S13].
– Anne Bouvreau – Area of expertise: AI policy and diplomacy for France; Role/Title: Special Envoy on AI for France, panel moderator.
– Speaker 1 – Area of expertise: Event facilitation/moderation; Role/Title: Host/Moderator of the session (no specific title provided).
– Abhishek Singh – Area of expertise: AI policy, government AI strategy, AI for public sector services; Role/Title: Lead organizer of the summit; Under-Secretary, Ministry of Electronics and Information Technology, Government of India [S22].
Additional speakers:
– Hélène – Area of expertise: Not specified; Role/Title: Likely co-host/moderator (mentioned briefly in the panel introduction, no formal title provided).
The host opened the session with a brief welcome and outlined the agenda before introducing the first distinguished speaker, Mrs Anne Le Henanf, France Minister Delegate for AI and Digitalisation Affairs [1-4]. In her keynote, Le Henanf reframed the debate from “how can AI work for us” to “how can we ensure AI works efficiently, responsibly and fairly for people and for our planet” [7-9]. She warned that AI’s energy demands already outpace the growth of green-energy capacity [10-13] and that massive, unsustainable models risk creating a new fairness crisis by excluding regions with limited resources [14-16].
Le Henanf then presented the Sustainable AI Coalition, noting its growth from 90 founding members to a network that reaches over 220 million people and now includes fifteen countries, eight international organisations, and a broad mix of tech firms, utilities, NGOs and research institutions [18-20][21-22]. The coalition follows a three-pillar approach: research (2026 AI research pitch sessions) [18-20], measurement (a second-generation global standard for AI environmental sustainability) [23-25], and action (low-carbon, renewable-powered data centres and the Resilient AI Challenge) [26-28][31-33]. The coalition is embedded in the UN Global Digital Compact and a UN Environment Assembly resolution [31-33].
After the keynote, the host thanked Le Henanf and introduced Dr Tafik Delassie, Assistant Director-General for Communication and Technology Sector, UNESCO [1-4]. Delassie quantified the scale of the problem: generative-AI inference already consumes hundreds of gigawatt-hours per year, comparable to the annual electricity use of millions of people in low-income countries, and training a single frontier model can require more than 1,000 MWh, enough to power Indian villages for a year [52-55][56-58]. He argued that the next breakthrough will come from “leaner, more resilient systems” that can operate under strict energy constraints [59-60][61-62]. To move from principle to practice, he announced the Resilient AI Challenge, which will benchmark open-source models on accuracy and energy efficiency, with results to be presented at the AI for Good Summit in July in Geneva [65-66][69-74].
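The scale claims above can be sanity-checked with back-of-envelope arithmetic. The per-capita electricity figure below is an assumed illustrative value (roughly typical of low-income countries), not a number from the talk:

```python
# Back-of-envelope check of the scale claims, using illustrative numbers.
# The per-capita figure (~200 kWh/year) is an assumption, not from the talk.

TRAINING_RUN_MWH = 1_000          # one frontier training run (cited in the talk)
INFERENCE_GWH_PER_YEAR = 300      # "hundreds of GWh/year" -> 300 as an illustrative midpoint
PER_CAPITA_KWH_PER_YEAR = 200     # assumed low-income-country average

# People whose annual electricity use matches yearly inference demand
inference_kwh = INFERENCE_GWH_PER_YEAR * 1_000_000          # GWh -> kWh
people_equivalent = inference_kwh / PER_CAPITA_KWH_PER_YEAR

# People-years of electricity in a single 1,000 MWh training run
training_kwh = TRAINING_RUN_MWH * 1_000                     # MWh -> kWh
people_per_training_run = training_kwh / PER_CAPITA_KWH_PER_YEAR

print(f"Inference ≈ annual use of {people_equivalent:,.0f} people")
print(f"One training run ≈ annual use of {people_per_training_run:,.0f} people")
```

Under these assumptions, yearly inference matches the annual consumption of about 1.5 million people, and one training run matches that of a few thousand, consistent with the “millions of people” and “villages for a year” comparisons.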
The host then transitioned to the panel, introducing Anne Bouvreau as moderator [1-4]. The first panellist, Ambassador Philip Tigo, Tech Envoy for Kenya, explained that Kenya enjoys a 95 % renewable-energy mix (geothermal, wind, hydro and solar), providing a “green-by-design” foundation for AI workloads [107-110]. He highlighted Kenya’s contribution to the first AI environmental-sustainability resolution [111-115] and called for a broader AI-safety research agenda that explicitly includes environmental concerns [280-284]. He also noted the importance of user behaviour and participation in international standards work [119-120].
Mr James Manyika, Senior Vice-President, Google Alphabet, described Google’s Gemini family as an illustration of industry-led technical progress. The Gemini portfolio spans high-performance “Pro” models to ultra-efficient “Flash” variants, all built on mixture-of-experts architectures that activate only a fraction of parameters, thereby reducing FLOPs per token [133-144][145-148]. Manyika outlined Google’s commitment to carbon-free compute, with investments in nuclear, geothermal, hydro, wind and solar that aim for 24/7 carbon-free operation by 2035 [151-158]. He stressed that efficiency is both an environmental and a business imperative: lower per-token energy use directly improves return on investment at scale [151-153]. He also mentioned the potential of fusion energy, noting AI’s role in plasma containment research [267-270].
Mr Arthur Mensch, CEO, Mistral AI, complemented Google’s approach by detailing additional levers. Mistral employs sparse-expert models that activate only about 5 % of parameters, coupled with sophisticated caching systems that avoid redundant computation, achieving substantial reductions in energy per token [169-171][172-176]. He emphasized that open-sourcing large pretrained models amortises the carbon cost of training across the community, preventing ten separate labs from duplicating the same high-energy work [172-178]. Mensch highlighted localisation strategies, such as training in low-carbon regions (nuclear-heavy France, hydro-rich Sweden), and the use of diverse, low-power chips to further cut emissions [182-190][191-194]. He advocated for public-procurement criteria that embed sustainability metrics, arguing that market pressure combined with policy can accelerate efficiency gains [190-194].
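The sparse-activation idea above can be sketched in a few lines: a router picks a small subset of experts per token, so only that fraction of the feed-forward parameters (and hence FLOPs) is exercised. The expert counts, hidden sizes, and the toy router below are illustrative assumptions, not Mistral’s actual configuration:

```python
# Minimal sketch of sparse mixture-of-experts routing: per token, a router
# picks the top-k of E experts, so only a fraction of the FFN parameters
# (and hence FLOPs) are active. All sizes are illustrative, not any
# vendor's real configuration.
import random

E, TOP_K = 64, 2            # 64 experts, 2 active per token -> ~3% of expert params
D_MODEL, D_FF = 1024, 4096  # hidden sizes (illustrative)

def router_scores(token_vec, num_experts):
    """Stand-in for a learned router: produces one score per expert."""
    rng = random.Random(hash(tuple(token_vec)) & 0xFFFF)
    return [rng.random() for _ in range(num_experts)]

def ffn_flops(d_model, d_ff, experts_used):
    # Two matmuls per expert FFN: d_model*d_ff up-projection and back down
    return experts_used * 2 * d_model * d_ff

token = [0.1, -0.3, 0.7]
scores = router_scores(token, E)
chosen = sorted(range(E), key=lambda i: -scores[i])[:TOP_K]

dense = ffn_flops(D_MODEL, D_FF, E)       # a dense model of the same total size
sparse = ffn_flops(D_MODEL, D_FF, TOP_K)  # only the routed experts run
print(f"experts used: {chosen}, active fraction: {TOP_K / E:.1%}")
print(f"FLOPs per token: sparse/dense = {sparse / dense:.1%}")
```

With these toy numbers, each token touches about 3 % of expert parameters; the total parameter count (knowledge capacity) stays large while FLOPs per token, the quantity Mensch ties to energy and carbon intensity, shrinks proportionally.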
Representing India, Mr Abhishek Singh, Lead Organizer, AI Impact Summit, outlined a national strategy focused on inference efficiency and grid optimisation. He noted that AI-driven projects with the Ministry of Power have already reduced transmission and distribution losses by 10-15 % [236-237]. Singh stressed that India will not chase trillion-parameter models; instead, the emphasis is on sector-specific, small-language models that keep per-query costs low, a necessity for public-sector services funded by taxpayers [221-224][226-236]. To meet the massive projected inference demand, India is exploring off-grid renewable solutions [267-270] and small modular reactors to avoid overloading the national grid [314-316].
Across the discussion, the speakers agreed that AI’s growing energy consumption threatens climate goals and widens the digital divide, and that improving efficiency, through greener energy mixes, mixture-of-experts architectures, open-source sharing and localisation, is essential for both equity and business viability [7-9][52-55][151-153][107-115]. They also concurred that robust measurement and standardisation are prerequisites for progress; Le Henanf announced a second-generation global standard [24-25], Mensch called for third-party carbon-intensity audits [193-194], and Manyika urged governments to support off-grid renewable power and detailed footprint assessments [151-158]. Finally, the panel highlighted AI as a climate-mitigation tool, citing high-leverage applications in grid management, agriculture, material science and chemistry [203-208].
The discussion revealed nuanced disagreements. On model size, Le Henanf warned that massive models exacerbate inequality [14-16], while Delassie argued that future breakthroughs must come from leaner systems [59-60]; Manyika, however, defended continued investment in large models within the Gemini family, relying on efficiency tricks rather than abandoning scale [133-148]. Regarding energy strategy, Tigo cautioned that off-grid solutions may be unrealistic for many emerging economies [107-112], whereas Manyika and Singh advocated dedicated off-grid solar, wind, geothermal and even small modular reactors to relieve pressure on national grids [267-270][314-316]. On policy levers, Mensch promoted public-procurement mandates [190-194], while Manyika emphasized broader incentives and standards, suggesting a more flexible approach [151-158].
Key outcomes
* The Resilient AI Challenge is now open for submissions until 15 March; winners will be announced at the AI for Good Summit in July in Geneva [69-74].
* The coalition’s Version 2 standard for AI environmental sustainability has been published jointly by ITU, IEEE and ESO [24-25].
* France pledged to implement low-carbon AI policies, green data centres and the three-pillar research-measurement-action framework [26-27][31-33].
* India committed to continue inference-efficiency projects, including grid-loss reduction pilots and policies that open AI infrastructure to private investment [236-237][267-270][314-316].
* Kenya reaffirmed its 95 % renewable-energy mix, user-education programmes and active participation in international standards work [107-115][119-120].
In her closing remarks, Bouvreau reiterated that environmental impact is now a core competitive factor for AI providers and a prerequisite for equitable development [323-326]. She reminded the audience of the registration deadline for the Resilient AI Challenge [329-331] and thanked the panel for demonstrating that sustainable, resilient AI can become the global baseline for future innovation [69-74]. The event positioned sustainable AI as an urgent, collaborative agenda that bridges policy, industry and research to align technological progress with planetary boundaries.
And this is what we will explore at this event. To introduce the topic, we will first have two distinguished speakers. First, I have the honor to welcome Mrs. Anne Le Henanf, France Minister Delegate for AI and Digitalization Affairs. Welcome, Madam Minister.
Excellencies, distinguished guests, ladies and gentlemen, it’s an honor to address you at Smaller Footprints, Bigger Impact, co-organized by France, UNESCO, and the Sustainable AI Coalition. This event is a continuation of the work co-chaired by India and France in preparation of this AI Impact Summit, putting resiliency, sustainability and efficiency at the heart of the global agenda. The question we face is no longer how can AI work for us, but how can we ensure AI works efficiently, responsibly and fairly for people and for our planet. Resilient and sustainable AI is the key to unlocking digital transformation, environmental protection and inclusive development. Sustainable AI is not an option, it’s an imperative. First, it’s an energy and environment imperative as governments decarbonize.
AI’s energy demands threaten to outpace green-energy progress. Model providers face a stark reality: AI’s energy needs are growing faster than supply. Second, it’s a fairness crisis. Massive AI models without sustainability create new divides and can exclude regions and communities lacking resources. That is why France, at the AI Action Summit, made sustainable AI a priority through the Sustainable AI Coalition, launched with UNEP, ITU and India as founding members. Our goal? Leverage AI to solve environmental challenges without exceeding planetary boundaries. From 90 initial partners, we have grown to a network reaching over 220 million people, including tech firms, startups, utilities, NGOs, and research institutions, backed by eight international organizations and 15 countries, with the Netherlands joining this year.
Sustainable AI is now a global priority, embedded in the UN Global Digital Compact and a UN Environment Assembly resolution. To turn vision into action, we focus on three pillars. First, research: in 2026, the coalition will launch AI research pitch sessions to connect university projects with funding and industry partners. Second, measurement: you can’t improve what you can’t measure. Today, I’m proud to announce, on behalf of the coalition, ITU, the Institute of Electrical and Electronics Engineers and ESO, that we have published the second version of the global approach on standardization for AI environmental sustainability, to promote consistency in AI environmental sustainability standardization. And third, action: France is implementing policies for low-carbon, efficient AI, powered by renewable energy, hosted in green data centers, and designed to be leaner and smarter. This approach boosts competitiveness and discovery with minimal environmental costs. That’s why, as an AI Impact Summit outcome, India, France and UNESCO launched the Resilient AI Challenge, a global challenge to advance compressed, more energy-efficient AI models.
This initiative supports innovation aligned with our shared goals. Sustainable and resilient AI must be the global baseline, the only path to equitable development that serves people and the planet. France and India have led this effort from Paris to New Delhi by focusing on people, planet and progress. Now we must deliver together. I look forward to our panelists’ insights and now invite the discussion to continue. Thank you.
Thank you. Many thanks, Madam Minister, for this insightful introduction and for the pioneering role of France in Sustainable AI. I now have the pleasure to welcome Dr. Tafik Delassie, Assistant Director General for Communication and Technology Sector at UNESCO, whose landmark report on smaller models was published in July last year. Thank you.
Madam Minister for AI and Digital Affairs, Madam Special Envoy for AI, distinguished participants, esteemed colleagues, dear partners, and ladies and gentlemen. I’m very pleased, on behalf of UNESCO, to be with you this afternoon for this important session. But allow me first to raise a question. What if the next breakthrough in AI is not about building ever larger models, but about building leaner, more resilient systems, systems that can solve real-world problems under real-world constraints, including in low-resource environments? Before turning to the Resilient AI Challenge, I would like to warmly thank the government of India for its leadership in convening this timely, strategic, and forward-looking summit.
I also would like to acknowledge the co-chairs of the Working Group on Resilience, Innovation, and Efficiency, the Ministry of Power of India, and the Ministry of Ecological Transition of France, for their strong commitment, engagement, and stewardship. My sincere thanks also go to our technical and ecosystem partners, including Mistral, Google, Hugging Face, Alkosh, Sarvam AI, and the broader Sustainable AI Coalition, alongside many academic experts who have contributed to this collective effort. UNESCO is proud to serve as a key knowledge partner for this initiative and to support the vision of India regarding AI that truly serves the people, the planet and prosperity. I would like to convey briefly three messages. First, the future of AI will not be defined by scale alone, but rather by resilience.
Second, resource-efficient AI is not a trade-off. It is a path to inclusion and access. Thirdly, delivering impact at scale requires global collaboration that is truly grounded in real-world validation. We are at a critical inflection point. Generative AI tools are now used by more than 1 billion people on a daily basis. Yet behind every prompt lies a growing energy and resource footprint. Inference already amounts to hundreds of gigawatt hours per year, comparable to the annual electricity use of millions of people in low-income countries. Training frontier models is even more energy intensive. A single large AI model can consume over 1,000 megawatt hours of electricity, enough to power villages across India for a whole year, placing increasing pressure on energy systems and reinforcing inequalities in access to compute and infrastructure.
These challenges are not theoretical. They are real. They directly affect whether AI can be deployed in public services, by small and medium-sized enterprises, and in rural health systems and low-connectivity environments, both in developing countries and in advanced economies facing growing energy constraints.
This is why the next breakthrough in AI will not come from building ever-larger models. It will come from building smarter, leaner, and more resilient systems that can deliver impact under energy constraints rather than exacerbate them. A proverb says: a good life is for everyone. It captures the spirit of living well together, in community, inclusively, and in harmony with our planet. In the same spirit, AI must be designed not only for those with the greatest computing power, but for all communities, everywhere around the world. The work of UNESCO shows that small but conscious design choices, such as model compression, task-specific architectures, and optimized inference, can reduce AI energy consumption by up to 90 % without compromising performance.
Resilient AI is therefore not only greener; it is more inclusive, more affordable, and more adaptable. It lowers barriers for researchers, empowers local ecosystems, and enables AI solutions to reach communities too often left at the margins of the digital transformation. This brings me to why we are here today. It is my pleasure to officially announce the launch of the Resilient AI Challenge, a flagship initiative under the India AI Impact Summit Working Group on Resilience, Innovation, and Efficiency. This challenge moves us decisively from principles to action. It brings together model providers, researchers, startups, and academic teams to demonstrate how open-source AI models can be optimized, compressed, and deployed to achieve strong performance while significantly reducing the use of energy.
Rather than comparing entirely different models, the challenge focuses on improving one base model per task, ensuring transparency, fairness, and rigorous benchmarking. Submissions will be evaluated on shared infrastructure and ranked on both accuracy and energy efficiency, generating clear and actionable evidence. The winners of the challenge will be announced at the AI for Good Summit this coming July in Geneva, but the real success will be, of course, much broader than that.
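The talk states that submissions are ranked on both accuracy and energy efficiency but gives no scoring formula. One hypothetical way to combine the two axes, sketched here purely for illustration (this is not the challenge's actual methodology), is a Pareto filter that keeps every submission no other submission beats on both dimensions:

```python
# Hypothetical Pareto filter for a two-axis leaderboard: higher accuracy
# and lower energy are both better. Team names and numbers are invented.

def pareto_front(submissions):
    """submissions: list of (name, accuracy, energy_kwh).
    Returns the non-dominated set: entries no other entry beats
    on both axes simultaneously."""
    front = []
    for name, acc, kwh in submissions:
        dominated = any(
            a >= acc and e <= kwh and (a > acc or e < kwh)
            for _, a, e in submissions
        )
        if not dominated:
            front.append((name, acc, kwh))
    return front

subs = [
    ("team-a", 0.91, 12.0),   # most accurate but energy-hungry
    ("team-b", 0.89, 4.0),    # nearly as accurate, 3x cheaper
    ("team-c", 0.85, 9.0),    # beaten by team-b on both axes
]
print(pareto_front(subs))     # team-c drops out of the front
```

A Pareto view avoids picking an arbitrary weighting between accuracy and energy: every surviving entry represents a genuinely different trade-off.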
Thank you. Before we delve into the panel, I will invite the keynote speaker and the panelists to come up front for a picture, now that we have the final line-up, and then we will start the panel. Thank you. So now let me welcome our distinguished panelists and Mrs. Anne Bouvreau, Special Envoy on AI for France, moderator of this panel, to discuss how to make these models work and deploy in real life to the benefit of all. Thank you so much.
Thank you very much, Hélène, and thanks for the two keynote speeches that we just heard. Without further ado, I think what we want is to head into the discussion, so I will not make long introductions. I’m delighted to welcome our distinguished guests: James Manyika, Senior Vice President, Google Alphabet; Arthur Mensch, CEO of Mistral AI; Abhishek Singh, lead organizer of this summit. A round of applause for him, please. Thank you. And Ambassador Philip Tigo, Ambassador and Tech Envoy for Kenya. Thank you. So the AI industry, according to the International Energy Agency, will probably consume 3 % of worldwide electricity production by 2030. This is not the end of the world, but this is a huge expansion.
And therefore, there are environmental costs and impacts that we need to mitigate. AI, of course, at the same time also creates opportunity to optimize resources, including energy. So how can we ensure that AI’s development, in particular in developing countries but everywhere as well, is something that comes together with a focus on the planet? I’ll start with a question for Ambassador Philip Tigo. Let me turn to you first. You’re an active proponent of a more efficient and sustainable AI.
Africa is one of the most energy-constrained regions. It’s also a continent where adoption is becoming very frequent. We saw that with mobile phone payment. We saw that with other technologies. How is Kenya approaching efficient AI? What can you share with us?
Thank you so much. And I’ll be very quick because I can see the ticker. There are a couple of things. One is that we’re very lucky as a country that our energy mix is already 95 percent renewable. And we keep on investing into that. So we have geothermal, we have wind, we have hydro, and we have solar. So that’s the first kind of framework that we have, that it really must be green by design. The second part, of course, is that where the green comes in, it’s not necessarily only the efficient data centers or how energy-efficient they are, but also the use of it. So part of our green by design is also a wide-scale education around how people use these resources.
For example, you shouldn’t be looking for the next Starbucks when you’re using AI; you should really be using Google as an option. So people need to have those choices in their heads by design. The third part, of course, is that protecting Kenya alone is not enough. You can put a green shield around the country, but AI is global. So the third part, quickly, is working in the international framework. As you know, we worked with the Coalition for Sustainable AI to champion the first-ever AI environmental-sustainability resolution, and it had four parts: the energy, the life cycle, the sustainability piece, but also improving the state of the science to continue to understand the energy-efficiency component of AI.
Excellent. Thank you so much. And we’ll try to keep this lively. My next question will be for James, for James Manika. Google is one of the key players, of course Mistral as well and Hugging Face, but you’re a key player in publishing transparent data on environmental impact of AI. And you develop both very large frontier models and also smaller, very efficient models. It’s the Gemini and the Gemma. Thank you. So I’ll start with the Gemini family. From a business and an engineering standpoint, I think it’s a very interesting family. Where is the real frontier? Is it scaling up or scaling down?
Well, thank you. Pleasure to be here at the summit with you, Anne. I think just to get to the question, we’re actually looking at this on multiple fronts. On the one hand, if you look at, for example, our Gemini models, it’s not one model. We have a whole model family, which starts with the Gemini Pro, goes to the Gemini Flash models, which are some of the most efficient models. So we’re trying to make sure with our models, our Gemini family, we cover the performance efficiency frontier of these models. You may have noticed that recently no one really talks a lot about model size. Remember, two, three years ago… It used to be the big craze.
It used to be the big question: how many parameters. And that’s because even with our Pro models, we’re now pursuing these mixture-of-experts architectures, where a query doesn’t activate the entire model. No one activates dense models anymore; people are activating a mixture of experts. So on the Gemini models, we’re trying to cover the performance and efficiency frontier. Then we also have our Gemma models. Our Gemma models are our most efficient open-source, open-weights models. In fact, here in India, on AI Kosh, which is the platform in India, we actually have 23 Gemma models. And that’s because we’ve optimized them for different sizes.
Some of them are efficient and run on a single GPU, because we know that on the edge, people want a variety of model choices, to make sure we drive efficiency. I’ll say two more quick things. Every year we focus on efficiency because, from an energy point of view, from a compute-efficiency point of view, even from a business standpoint, it’s the right thing to do. Because as you start to serve many more people, you want the most efficient systems. I’ll say one last thing, finally, which is that we are making probably the most investments of anybody into using green, clean energy for our compute.
In fact, we’ve made this audacious goal that at some point in the 2030 to 2035 era, we want to be 24/7 carbon-free. So we’ve made investments in nuclear, in geothermal; we actually have several operational data centers on geothermal. We’re using hydro, we’re using wind and solar. So we’re trying to get to a point where all our energy use for compute is carbon-free. That’s kind of our moonshot goal.
Excellent, thank you so much. I’d like to move to Arthur Mensch. Mistral is developing very large models, but is really also very good at high-performance compact models. And I know your engineers, and you as a co-founder and CEO, also strongly believe in addressing the environmental impact of AI and what can be done there. So what can you share with us on that, and with both your business and engineering experience, where does model efficiency have the highest return? Thank you.
So I would start with a couple of technical aspects. To James’s point, model size is indeed not the only thing we should be looking at. We are using sparse mixtures of experts because those are models that have a lot of parameters to store knowledge, but where you only activate 5% of them. That has been a key way of reducing the number of FLOPs you perform to generate one token, which is the thing that matters for energy and therefore for carbon intensity — it’s one of the multipliers, actually. So sparsity matters, and then the other thing that matters is the systems on top: the caching systems you can put in place, the way you manage the context so that you’re not reprocessing information. Beyond just releasing model weights, which is something we’ve always done, we are also heavy contributors to inference frameworks that use more and more advanced techniques to handle caching in a way that removes the wasteful computation we used to do. So it’s an algorithmic problem — it’s actually very interesting — and it’s also a machine-learning problem, because depending on the request you’re getting, you can route it to a small model or to a large model. To James’s point, it’s very important for any company doing models to have everything from small models all the way to large models, in particular because the large ones can be used to make specialized models afterwards. But I would say that if you look at the carbon footprint of artificial intelligence today, because most GPUs are currently being used for training, most of the weight comes from the fact that you have around ten labs in the world training models that, in the end, look very similar.
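To make the sparsity arithmetic concrete: in a sparse mixture-of-experts model, per-token compute scales with the parameters actually activated, not with the total parameter count. A minimal back-of-envelope sketch — all model sizes here are hypothetical illustrations, not any vendor’s specifications:

```python
# Back-of-envelope: per-token compute for a dense model vs. a sparse MoE model.
# All parameter counts below are hypothetical, for illustration only.

def flops_per_token(active_params: float) -> float:
    """Rough forward-pass estimate: ~2 FLOPs per active parameter per token."""
    return 2.0 * active_params

dense_params = 400e9              # a dense 400B-parameter model
moe_total = 400e9                 # sparse MoE with the same total capacity...
moe_active = 0.05 * moe_total     # ...but only ~5% of weights active per token

dense_cost = flops_per_token(dense_params)
moe_cost = flops_per_token(moe_active)

print(f"dense : {dense_cost:.2e} FLOPs/token")
print(f"sparse: {moe_cost:.2e} FLOPs/token")
print(f"reduction: {dense_cost / moe_cost:.0f}x")  # 20x fewer FLOPs per token
```

Under these assumptions, equal knowledge capacity costs twenty times fewer FLOPs per generated token — which is why sparsity is one of the "multipliers" for energy and carbon intensity.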
And so for us, if I look at our biggest leverage there, the fact that we’ve been open-sourcing models that are very large — and really open-sourcing our best models — has been a major way of reducing the externality cost we produce. Because we invest, and it costs a lot of carbon to train a model, but then we give it away for free to everyone else. What that means is that people can build on top, and the cost is amortized. Suddenly you don’t have ten companies training the same kind of models; this thing is out there and you don’t need to reinvest. So I think that’s the big part on the training front.
And today, training is the thing that accounts for most of the cost. Now, when it comes to our own approach to sustainability — and here I agree with James — one of the multipliers is the carbon intensity of your energy. So there is a locality aspect to it. We’ve been building our data centers and, recently, training our models on our own hardware, which sits in France; France is heavily nuclear, so the carbon intensity is low. (Also 95%? Yes — Philip, sorry.) And in Sweden — where it’s not 95%, but still very good — you have hydro. So choosing the locality is important, because it’s one of the multipliers you want to optimize for.
And finally, the other thing to think about is chips: model size is one thing, carbon intensity is one thing, and chips are another. Being able to use a diversity of chips is huge; it’s super important. And we are keen on using new kinds of chips that are much more efficient from an energy perspective. Now, to James’s point, I would add that the good thing about AI is that we are energy-constrained, so efficiency is actually driven by the business. And transparency is super important for us and matters for our customers: we’ve done a very deep study of how this works and of the carbon intensity of our training — we did it for Mistral Large 2, with third-party auditors and so on.
But the business is also a reason why we’re moving toward more efficient models: we don’t have enough energy, so we need things that run on smaller hardware, and it depends on the country as well. In the US the constraint is actually higher than in Europe, and I think it’s going to be very high in Africa and in India down the line. So it’s always good when business aligns with sustainability. (Yes, of course you can.) And I think it would be valuable for public procurement, in particular, to put more pressure on sustainability as a way to accelerate the industry — that raises the stakes and pushes us toward more efficiency.
Wonderful, thank you so much. James, do you want to react quickly before we go to Abhishek?
I was going to agree with Arthur, but let me add a couple more components. One thing that is also important in this conversation is what you actually apply AI to. There’s a whole range of applications of AI that are genuinely helpful for sustainability — grid management, managing adaptation to the effects of climate change — and we’re seeing a lot of those applications at scale, in ways that make an enormous difference to the sustainability question.
So adding to that, you have agriculture as well where you have a lot of leverage. You have material science and chemistry. So we work with vertical AI companies to try and make that happen.
Great to see this, thank you. I think we are having a very high-quality exchange in this panel. Abhishek, I’d like to move to you — and pass you the microphone as well. Arthur introduced the fact that energy constraints are real, and they are certainly real in India, with such a large population and wide market, and of course infrastructure constraints. How do you approach this? How does the AI mission in India approach it, and what are you doing on this front?
…the AI factories, with the hope that ultimately this investment will pay off. But when we look at how it will pay off, it will come through inferencing. When you are doing inferencing at scale, ultimately users will have to pay. So unless you have focused on efficiency and sustainability, the actual ROI on the investments will not work out. It will be in everyone’s interest, and only those players will survive who ensure that energy use per token is minimal. That will require innovation at multiple levels — in the algorithms, in how you do inferencing, in how you use it — and that is where the value of small language models will come in.
While it’s fashionable to go for trillion-parameter models and more, ultimately, if you are building use cases in key sectors like healthcare, education or agriculture, you will need smaller models, which consume less energy and cost less. So sustainability is a given. As for what we are doing in the IndiaAI Mission: number one, we are not chasing trillion-parameter models — we are not in the parameter game. Number two, we are not even at the stage that some companies are at; I don’t think any of us is chasing AGI, which is glamorized by some of the frontier AI labs.
We are trying to think about which solutions can be built using the current level of available models to solve societal problems in various sectors — to have real impact. (Real impact. It’s a plug for you. Yeah, exactly.) And when we do that, the cost per inference, the cost per query, becomes material, because many public-sector applications, especially in sectors like agriculture, healthcare and education, will for some time have to be funded by government — which means taxpayers’ money. So we cannot be extravagant. That means ensuring that the PUEs of data centers are lower and that grid efficiency improves. In fact, we are doing a project with the Ministry of Power — which I think finds a mention in the Resilient AI committee’s report as well — where we are using AI to improve grid efficiency and reduce transmission and distribution losses, and what we have found is that doing this smartly, using technology, brings down T&D losses by almost 10 to 15%. That is again a very big gain. So we will have to look at the entire ecosystem, right from what kind of chips you use for what — if you are doing inferencing, do you really need a high-end chip? Classifying this, and taking a sector-specific, use-case-specific approach to designing your systems, will ultimately be where the game is. Those who can do that will build more sustainable systems, their cost per query will be lower, and they will survive.
So we, as government, are trying to enable this, but ultimately I feel that business sense will ensure that sustainability comes in. It cannot be that we consume as much energy as we want, unmindful of the ramifications. The funds and the VCs will pay only up to a point; it cannot be forever.
Excellent, thank you. We’re unpacking a number of things here: training versus inference and utilization; large models versus smaller models — and in fact you ideally need the larger models, through open source, to be able to build the smaller ones; and how AI can loop back and help optimize. We’ve heard a number of super interesting things. You started on this a little, Arthur, but let me put this question to everyone quickly. We also heard that business and commercial interests are aligned with the desire to make AI more sustainable, which is a very hopeful message — but what can governments and institutions do to further this?
Arthur, you hinted at public procurement. Do you want to say a few more words on this?
Yes, it’s one of the ways in which we can make sure that efficiency is favored. Again, I think the market can solve it, but it can be accelerated — and the faster we can go, the better, because we are really building a lot of electricity capacity for AI at the moment, so if we can make sure efficiency is part of the criteria, that’s good. It’s worth noting that, for better or worse, being a generative-AI company is turning into being a utility company, in that you’re basically turning electricity into tokens. It’s highly competitive, which means margins are getting thinner and things are getting price-sensitive — and when things get price-sensitive, efficiency really matters.
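The “turning electricity into tokens” framing can be sketched as a rough calculation linking the two multipliers raised earlier: energy per generated token and the carbon intensity of the grid. All figures below are hypothetical placeholders, not measured numbers from any provider:

```python
# Rough sketch of the "electricity into tokens" economics.
# Every number here is a hypothetical placeholder for illustration.

energy_per_token_j = 0.5       # joules per generated token (illustrative)
grid_carbon_g_per_kwh = 50.0   # g CO2e/kWh on a low-carbon (nuclear/hydro) grid
tokens = 1e9                   # one billion tokens served

kwh = tokens * energy_per_token_j / 3.6e6        # 1 kWh = 3.6 MJ
carbon_kg = kwh * grid_carbon_g_per_kwh / 1000.0  # grams -> kilograms

print(f"energy: {kwh:.0f} kWh")          # ≈ 139 kWh
print(f"carbon: {carbon_kg:.1f} kg CO2e")
```

Both inputs multiply through: halving the joules per token or moving to a grid with half the carbon intensity each halves the footprint, which is why efficiency and locality were both described as multipliers.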
So that’s going to be partially solved by the market, but it can be accelerated. And I’d say governments can also lead the way by sustaining open-source projects that go beyond the models. The inference path — what we call the agent harness — is also something that will eventually become a common good and can be used everywhere. And so: good practices, and incentivizing research as well, because the domains of routing, of picking the right models, of distillation do not require you to have thousands of GPUs. You can do efficient research there, so public research in that domain is very much possible, and we’d love to see more of it.
So I guess that’s the three things that I can mention.
Wonderful. Thank you. James, do you want to add a few words on that?
Yeah, first of all, I agree with the three things that Arthur mentioned. I would add a couple more. One thing that’s actually quite interesting is that the more government can incentivize and encourage the use of off-grid solutions, the better, because that takes the burden off the public infrastructure that affects citizens. For example, we’re spending a lot of time thinking about off-grid solar, off-grid wind and geothermal, and we’ve even invested in our own small modular reactors. We’re also investing, to Arthur’s point, in breakthrough research. One of the most exciting areas, by the way — and it’s not as far away as people think — is fusion energy.
So we’ve made some of the biggest investments in fusion energy. And, by the way, AI is actually helping us make that progress, because one of the things you worry about with fusion energy is what’s called plasma containment — how you hold and contain these high-energy particles — and AI has actually helped us do that. So even the use of AI in breakthrough research like that is pretty important. I’ll say one other quick thing, because it reinforces something that Arthur and the minister said: inference is going to turn out to be the most important thing in many respects, far more than the training part of this.
And we’ve actually started to invest in that. For example, we have our own chips, TPUs — we use both TPUs and GPUs — and we’ve built some inference-specific TPUs, just for inference, to be able to do inference even more efficiently than you typically would with a general-purpose GPU.
Wonderful, thank you. Ambassador Philip Tigo, what can you add? Maybe you can take the microphone from a neighbor, and then I’ll ask Abhishek to conclude.
Very quickly, because a lot of these solutions are for developed economies, I think we have to be a little realistic about where emerging economies stand. One, there’s a bigger question of sovereignty — and there are conversations around that — and there have to be trade-offs. Every country wants to have the entire stack in their country, so governments need to be very realistic about which parts of the stack they really want to keep at home, especially given this “AI for green, and green AI” conversation. The second part, especially in emerging economies, is to look at sustainability across the stack.
We may not necessarily have compute, but we have other parts of the stack — so how do you ensure that part of the training gets done there? The third part, I think, is to expand the definition of safety, because AI safety is very much focused on the models and not necessarily on use and potential harms to the environment; I’ve not seen that research. So there could be an expansion of research that looks at AI safety including environmental concerns. The other quick one, of course, is that you can only know the environmental footprint from use cases, and it has to be specific. These are deep dives, and I have a sense people need to invest in deep dives.
When I look at food systems, that’s an entire food system, so there are potential problems there if we don’t do this. And to my last point, around standards: we really have to invest in the standards. We’ve seen that with other electronics, right? So we need to see that here, so that everybody knows the environmental standards things are built to — and that needs to happen at scale. Thank you so much.
Abhishek, what can governments do? You represent a government. You want the microphone? It works?
Governments are doing… every government is conscious of this. In India, in fact, we recently focused on the small modular reactors that James mentioned: we came out with a new policy under which the sector has been opened up for the private sector to invest as well. What we believe is that as inferencing needs go up — and when we talk about inferencing in India, we are talking about inferencing at scale, say 100 or 200 billion inferences in a first phase, with ultimately 500 million and more people using these services — the back-end infrastructure we need will be huge, and it will consume a lot of energy. So to reduce the load on the existing grid, we will need to think about off-grid solutions.
We will need to think about dedicated small modular reactors that can power AI applications. The world over, what we are seeing is that as AI adoption goes up, energy costs go up — and if energy costs go up, for elected governments that does not bode well. So the entire strategy has to be thought through: how do we balance the need for more efficient and more intensive AI solutions with the needs of sustainability and of reducing the carbon footprint — because we are also only a few years away from the 2030 Sustainable Development Goals. Ultimately we need to balance both: the need for more efficient AI and the need to reduce the impact on the environment.
Otherwise we solve one problem and create another. So that is again something governments are concerned about, and I think augmenting renewable energy sources — solar, wind and nuclear, and eventually fusion — will be the way forward.
Yes, thank you very much. This has been a fascinating discussion. We heard from all of the panelists that the environmental impact of AI is not an afterthought — it’s front and center, part of the competitive advantage, part of what companies and governments think about. That is a very strong and positive message that I think we can all be reassured by. Let me close by mentioning the Resilient AI Challenge that was announced at the beginning: registrations close on March 15th, so please submit your solution. Please join me in thanking this wonderful panel. Thank you, everyone, for joining us today, and we really hope to see you engage with the Resilient AI Challenge.
It is a first at the international level working on improving research on compressed models — one of the solutions and tools presented in this panel — so we really encourage you to register. Thank you so much to our panelists; another round of applause. Thank you.
“The host introduced Mrs Anne Le Henanf, France Minister Delegate for AI and Digitalisation Affairs as the first distinguished speaker.”
The knowledge base records the host welcoming Mrs Anne Le Henanf, France Minister Delegate for AI and Digitalisation Affairs, confirming her role and introduction. [S34]
“Le Henanf warned that AI’s energy demands already outpace the growth of green‑energy capacity.”
A source explicitly states that AI’s energy demands threaten to outpace green-energy progress. [S1]
“Massive, unsustainable AI models risk creating a new fairness crisis by excluding regions with limited resources.”
The knowledge base mentions a fairness crisis where large, unsustainable AI models create new divides and can exclude regions and communities lacking resources. [S1]
“AI’s energy demands are growing faster than the supply of green energy, posing a major sustainability challenge.”
Broader analyses estimate that global AI-related electricity consumption could equal that of a whole country (e.g., Japan) by 2030 and that data-centre electricity use will more than double, underscoring the scale of the challenge. [S93] and [S102]
“Large AI models require vast computational resources, significant electricity, and extensive cooling infrastructure.”
A source describes how large-scale AI model development and deployment demand substantial compute power, electricity, and cooling, providing technical detail that supports the report’s statements about energy intensity. [S26]
The discussion revealed strong, cross‑sectoral agreement that AI’s energy footprint is a critical challenge, that measurement and standards are prerequisites, that the future lies in smaller, efficient models, and that governments and institutions must create policies, incentives, and procurement rules to drive sustainable AI. Participants also concurred that AI can be a tool for environmental and societal benefits.
High consensus across governments, industry, and academia, indicating a shared commitment to prioritize efficiency, measurement, and policy support, which bodes well for coordinated international action on sustainable AI.
The panel largely agrees that AI’s environmental impact must be curbed and that sustainable AI is a strategic priority. However, clear disagreements emerge around (1) whether the industry should prioritize shrinking models or continue developing large models with efficiency tricks; (2) the optimal energy strategy—off‑grid renewable installations versus leveraging existing national renewable mixes; and (3) the most effective policy lever—public procurement mandates versus broader incentives and standards. An unexpected clash over national sovereignty versus global collaboration also appears.
Moderate. The disagreements are substantive but do not fracture the overall consensus on the need for sustainable AI. They highlight divergent pathways that could affect policy design, industry investment, and international coordination, suggesting that achieving the shared sustainability goal will require negotiated compromises across model‑size strategies, energy sourcing, and governance mechanisms.
The discussion was shaped by a series of pivotal remarks that moved the conversation from a high‑level declaration of sustainable AI as an imperative to concrete technical, economic, and policy pathways. Anne Le Henanf’s framing set the agenda, while Dr. Delassie’s challenge to the ‘bigger‑is‑better’ paradigm redirected focus toward resilience and resource‑efficiency. Quantitative illustrations of energy use grounded the debate, prompting industry leaders like James Manyika and Arthur Mensch to showcase corporate commitments, open‑source strategies, and procurement levers as viable solutions. National perspectives from Kenya and India added layers of sovereignty, standards, and pragmatic implementation, turning abstract concepts into actionable roadmaps. Collectively, these comments created a dynamic flow that progressed from problem definition to solution design, highlighting the interdependence of technology, business models, and governance in achieving sustainable AI.
Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.