Smaller Footprint, Bigger Impact: Building Sustainable AI for the Future

20 Feb 2026 18:00h - 19:00h


Session at a glance

Summary

This discussion focused on sustainable and resilient artificial intelligence development, particularly addressing AI’s growing energy consumption and environmental impact while ensuring global accessibility. The event, co-organized by France, UNESCO, and the Sustainable AI Coalition, brought together government officials and industry leaders to explore how AI can work efficiently and responsibly for both people and the planet.


French Minister Anne Le Hénanff emphasized that sustainable AI is an imperative rather than an option, highlighting two critical challenges: AI’s rapidly growing energy demands that threaten to outpace green energy progress, and the fairness crisis where massive AI models create new divides by excluding resource-constrained regions. She announced the growth of the Sustainable AI Coalition from 90 to over 220 partners, focusing on three pillars: research, measurement through standardization, and action through policy implementation.


UNESCO’s Dr. Tawfik Jelassi argued that the next breakthrough in AI will come from building leaner, more resilient systems rather than larger models, noting that current AI inference consumes hundreds of gigawatt hours annually. He officially launched the Resilient AI Challenge, which aims to demonstrate how open-source AI models can be optimized and compressed while maintaining performance but significantly reducing energy consumption.


Industry representatives James Manyika from Google and Arthur Mensch from Mistral AI discussed practical approaches to efficiency, including mixture of experts architectures, model compression, and the importance of open-source models to reduce redundant training costs. They emphasized that business incentives align with sustainability goals since energy constraints make efficiency commercially essential.


Government representatives from India and Kenya shared their approaches to balancing AI development with sustainability concerns, focusing on renewable energy infrastructure and practical applications that solve real-world problems. The discussion concluded with the announcement that the Resilient AI Challenge represents a crucial step toward making sustainable AI a global standard rather than an aspiration.


Key points

Major Discussion Points:

Sustainable AI as a Global Imperative: The discussion emphasized that sustainable AI is not optional but essential, driven by AI’s rapidly growing energy demands that threaten to outpace green energy progress and create new digital divides between resource-rich and resource-poor regions.


Technical Solutions for AI Efficiency: Panelists explored various approaches to reduce AI’s environmental impact, including model compression, sparse mixture of experts architectures, task-specific optimization, improved inference frameworks, and the strategic use of smaller models rather than pursuing ever-larger parameter counts.


Business Alignment with Environmental Goals: A key theme was that commercial interests are naturally aligning with sustainability objectives, as energy constraints and cost pressures make efficiency a competitive advantage, with AI companies essentially becoming utility companies that convert electricity into tokens.


Government and Policy Role: Discussion covered how governments can accelerate sustainable AI through public procurement requirements, investment in renewable energy infrastructure (including small modular reactors and fusion research), support for open-source projects, and development of environmental standards for AI systems.


The Resilient AI Challenge Initiative: The announcement and promotion of a global challenge to advance compressed, energy-efficient AI models, representing a concrete step from principles to action in making AI more sustainable and accessible worldwide.


Overall Purpose:

The discussion aimed to address the critical challenge of making AI development environmentally sustainable while maintaining accessibility and performance. The event served to launch the Resilient AI Challenge and build international cooperation around sustainable AI practices, moving from theoretical frameworks to practical implementation strategies.


Overall Tone:

The tone was consistently optimistic and collaborative throughout the discussion. Speakers maintained a solution-oriented approach, emphasizing opportunities rather than dwelling on problems. The conversation was highly technical yet accessible, with panelists building on each other’s points constructively. There was a strong sense of urgency balanced with confidence that the challenges are solvable through international cooperation, technological innovation, and aligned business incentives. The tone remained professional and forward-looking from start to finish, with no significant shifts in mood or approach.


Speakers

Speakers from the provided list:


Speaker 1: Event moderator/host (role inferred from context)


Anne Le Hénanff: France’s Minister Delegate for AI and Digitalization Affairs


Dr. Tawfik Jelassi: Assistant Director General for Communication and Technology Sector at UNESCO


Anne Bouverot: Special Envoy on AI for France, panel moderator


Ambassador Philip Thigo: Ambassador and Tech Envoy for Kenya


Arthur Mensch: CEO of Mistral AI


Abhishek Singh: Lead organizer of the AI Impact Summit, representing India’s AI mission


James Manyika: Senior Vice President, Google Alphabet


Additional speakers:


None – all speakers mentioned in the transcript were included in the provided speaker names list.


Full session report

This comprehensive discussion on sustainable and resilient artificial intelligence development brought together government officials, industry leaders, and international organisations to address one of the most pressing challenges in AI: balancing rapid technological advancement with environmental responsibility and global accessibility. The event featured Anne Bouverot, Special Envoy on AI for France, as moderator, alongside speakers including Dr. Tawfik Jelassi, Assistant Director General for Communication and Technology Sector at UNESCO, industry leaders from Google and Mistral AI, and government representatives from India and Kenya.


The Imperative for Sustainable AI

French Minister Anne Le Hénanff established the foundational premise that sustainable AI has become an absolute imperative, articulating two critical challenges: AI’s energy demands are growing at a pace that threatens to outpace green energy progress, and the development of massive AI models without sustainability considerations is creating new digital divides that exclude regions and communities lacking adequate resources.


The Minister announced the Sustainable AI Coalition’s remarkable growth from 90 initial partners to over 220 partners, including technology firms, startups, utilities, NGOs, and research institutions. The Coalition’s three-pillar approach—research, measurement through standardisation, and action through policy implementation—provides a comprehensive framework for addressing sustainability challenges across the AI development lifecycle.


Redefining AI Progress: From Scale to Resilience

Dr. Tawfik Jelassi from UNESCO introduced a paradigm-shifting perspective that fundamentally challenged the prevailing industry narrative about AI advancement. His provocative question—“What if the next breakthrough in AI is not about building ever-larger models, but about building leaner, more resilient systems?”—reframed the entire discussion from defensive justifications of AI’s energy consumption to proactive strategies for efficiency-driven innovation.


Dr. Jelassi provided stark context for the urgency of this shift, noting that current AI inference already consumes hundreds of gigawatt hours annually, with projections suggesting AI could account for 3% of worldwide electricity production by 2030 according to the International Energy Agency. His particularly striking comparison—that a single large AI model can consume over 1,000 megawatt hours of electricity, enough to power villages across India for an entire year—transformed abstract energy consumption figures into tangible inequity concerns.
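That comparison can be made concrete with rough arithmetic. The per-household consumption below is an illustrative assumption (about 100 kWh per month for a rural household), not a figure given in the session:

```latex
% Illustrative only: assumes a rural household uses ~100 kWh/month (~1.2 MWh/yr).
\[
  \frac{\overbrace{1{,}000\ \text{MWh}}^{\text{one large training run}}}
       {\underbrace{1.2\ \text{MWh/yr}}_{\approx\, 100\ \text{kWh/month per household}}}
  \approx 830\ \text{household-years of electricity}
\]
```

Under that assumption, a single training run corresponds to a year of electricity for several villages’ worth of households, which is the order of magnitude the speaker invoked.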


The launch of the Resilient AI Challenge represents a concrete manifestation of this efficiency-first philosophy. Rather than comparing entirely different models, the challenge focuses on improving one base model per task, ensuring transparency, fairness, and rigorous benchmarking. Submissions will be evaluated on shared infrastructure and ranked on both accuracy and energy efficiency, generating clear evidence for the viability of compressed, optimised AI systems.
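The dual ranking on accuracy and energy efficiency can be illustrated with a small sketch. Everything below is hypothetical: the team names, scores, and Pareto-style selection are an illustration of the idea, not the challenge’s actual scoring rubric.

```python
# Illustrative ranking of hypothetical challenge submissions by accuracy
# and measured energy use. Not the Resilient AI Challenge's actual rubric.

def pareto_front(submissions):
    """Return submissions not dominated by any rival, i.e. no other entry
    is at least as accurate AND at least as cheap, and strictly better
    in one of the two."""
    front = []
    for s in submissions:
        dominated = any(
            o["accuracy"] >= s["accuracy"]
            and o["energy_wh"] <= s["energy_wh"]
            and (o["accuracy"] > s["accuracy"] or o["energy_wh"] < s["energy_wh"])
            for o in submissions
        )
        if not dominated:
            front.append(s)
    return front

# Hypothetical submissions: accuracy on the shared benchmark, energy in Wh.
submissions = [
    {"team": "A", "accuracy": 0.91, "energy_wh": 120.0},
    {"team": "B", "accuracy": 0.89, "energy_wh": 40.0},
    {"team": "C", "accuracy": 0.90, "energy_wh": 150.0},  # dominated by A
]

print([s["team"] for s in pareto_front(submissions)])  # → ['A', 'B']
```

A real leaderboard would measure energy on the shared infrastructure the challenge describes, but the dominance idea is the same: a submission earns a place on the front only if no rival is both more accurate and cheaper to run.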


Industry Perspectives: Technical Innovation and Strategic Partnerships

James Manyika from Google emphasised that the company’s approach covers the entire performance-efficiency frontier through their Gemini model family, which ranges from high-performance Pro models to highly efficient Flash models. Significantly, he noted that the industry has moved away from focusing on parameter count, with companies now pursuing mixture of experts architectures where only a fraction of the model is activated for any given task.


Google’s commitment to sustainability extends to infrastructure investments, with Manyika outlining the company’s goal of achieving 24-7 carbon-free operations by 2030-2035, supported by investments in nuclear, geothermal, hydro, wind, and solar energy. He also highlighted their development of inference-specific TPU chips designed for efficient model deployment and their contribution to fusion energy research through AI-assisted plasma containment.


Manyika announced that 23 Gemma models are now available on India’s AI Kosh platform, demonstrating concrete international collaboration in making efficient AI models accessible globally.


Arthur Mensch from Mistral AI provided complementary insights, emphasising that sparse mixture of experts models can activate as little as 5% of their parameters whilst maintaining performance, dramatically reducing computational requirements. His observation that “being an AI company is turning into being a utility company” proved particularly influential, explaining why efficiency isn’t just an environmental consideration but an economic necessity in an increasingly competitive market with thinning margins.
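The sparse activation Mensch describes can be sketched in a few lines of Python: a router scores every expert for a token, but only the top-k actually run, so the active fraction of parameters stays small. The expert count, k, and router scores below are invented for illustration; real routers are small learned networks, not arithmetic formulas.

```python
# Toy sketch of sparse mixture-of-experts routing: only the top-k scoring
# experts run for a given token, so most parameters stay inactive.
# Expert count, k, and router scores here are invented for illustration.
import heapq

N_EXPERTS = 64
TOP_K = 2  # experts actually activated per token

def route(router_scores, k=TOP_K):
    """Pick the k highest-scoring experts; all others are skipped entirely."""
    return heapq.nlargest(k, range(len(router_scores)), key=router_scores.__getitem__)

# Pretend router output for one token (normally produced by a learned gate).
scores = [(i * 37 % 101) / 101 for i in range(N_EXPERTS)]
active = route(scores)

fraction_active = TOP_K / N_EXPERTS
print(f"experts activated: {active}; {fraction_active:.1%} of experts run")
```

With 2 of 64 experts active, roughly 3% of the expert parameters run per token, the same order of magnitude as the 5% figure cited above.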


Mensch highlighted the strategic advantage of training models in regions with clean energy, specifically mentioning training in France (nuclear energy) and Sweden (hydro energy) for lower carbon intensity. He also argued that open-source model development prevents duplication of carbon-intensive training efforts across multiple organisations, amortising environmental costs across the entire ecosystem whilst enabling broader innovation.


Government Strategies: Pragmatic Approaches to AI Development

Abhishek Singh from India’s AI mission articulated a distinctly pragmatic approach, explicitly stating that India is “not chasing the trillion parameter models” and is “not in the parameter game.” Instead, India focuses on developing solutions using current-level models that can address societal problems across healthcare, education, and agriculture sectors.


This approach is driven by practical considerations: when deploying AI at the scale India envisions—potentially serving hundreds of millions of users—the cost per inference becomes material, particularly for public sector applications funded by taxpayer money. Singh provided a concrete example of collaboration with the Ministry of Power to use AI for improving grid efficiency, achieving 10-15% reductions in transmission and distribution losses.


India’s infrastructure strategy includes recent policy changes opening the nuclear sector to private investment for small modular reactors and developing off-grid solutions to reduce load on existing electrical grids, recognising that massive AI deployment requires dedicated energy infrastructure.


Ambassador Philip Thigo from Kenya provided the perspective of an energy-constrained but renewable-rich developing nation. Kenya’s energy mix is already 95% renewable, incorporating geothermal, wind, water, solar, and hydro sources, providing a green foundation for AI development. However, he emphasised the need for realistic approaches to AI sovereignty, noting that whilst every country desires to control the entire AI technology stack domestically, practical sustainability considerations require strategic choices about which components to develop locally versus accessing through international collaboration.


Ambassador Thigo raised important points about expanding the definition of AI safety beyond model behaviour to include environmental concerns, and highlighted the need for user education—teaching people when to use AI versus traditional search methods—as sustainability requires behavioural change alongside technical optimisation.


International Cooperation and Standards Development

The discussion highlighted significant progress in international cooperation, including the announcement of the second version of the global approach on standardisation for AI environmental sustainability, published collaboratively by ITU, the Institute of Electrical and Electronics Engineers, and ISO. This work builds on foundations established through the UN Global Digital Compact and UN Environment Assembly resolution.


Technical partnerships are emerging across the ecosystem, with collaborations mentioned between Mistral, Google, Hugging Face, AI Kosh, and Sarvam AI demonstrating how companies are working together to advance sustainable AI development. The Sustainable AI Coalition plans to launch AI research pitch sessions in 2026 to connect university projects with funding and industry partners, creating mechanisms for translating academic research into practical applications.


Convergence of Business and Environmental Interests

A particularly encouraging theme throughout the discussion was the natural alignment of commercial incentives with sustainability goals. Multiple speakers emphasised that energy constraints and cost pressures are making efficiency a competitive necessity rather than just an environmental consideration. As Arthur Mensch noted, the commoditisation of AI services means that companies with the most efficient operations will have significant competitive advantages.


This alignment extends to market access, as companies that can deploy effective AI solutions in resource-constrained environments—whether in developing countries or energy-limited regions of developed nations—can serve larger, more diverse markets whilst creating more inclusive business models.


Challenges and Future Directions

Despite strong consensus on core principles, several significant challenges remain. The tension between AI sovereignty aspirations and practical sustainability constraints requires ongoing negotiation, particularly for developing nations seeking to balance domestic capability development with environmental responsibility.


The need for comprehensive environmental impact assessment across different AI use cases and sectors remains largely unaddressed. As Ambassador Thigo noted, understanding environmental footprint requires deep analysis of specific applications, and current AI safety research has not adequately incorporated environmental concerns beyond model safety.


The fundamental test will be whether efficiency-focused development strategies can deliver at the scale required to meet growing inference demands from billions of users whilst maintaining performance standards and accessibility.


Conclusion: Practical Steps Toward Sustainable AI

This discussion marked a significant evolution in the global conversation about AI development, moving from defensive justifications of environmental impact to proactive strategies for efficiency-driven innovation. The strong consensus among diverse stakeholders—from major technology companies to developing nation governments—that sustainable AI is both necessary and achievable provides a foundation for coordinated global action.


The launch of the Resilient AI Challenge represents a concrete step from principles to implementation, providing a practical framework for demonstrating that sustainability and capability can advance together. The challenge’s focus on optimising open-source models for energy efficiency whilst maintaining performance will generate crucial evidence for the viability of efficiency-focused development approaches.


The discussion’s most significant contribution may be its reframing of sustainability from a constraint on AI development to a driver of innovation. By establishing that the future of AI will be defined by resilience rather than scale alone, and demonstrating that business interests naturally align with environmental goals due to energy constraints, the participants created a compelling case for why sustainable AI development is not just environmentally responsible but economically inevitable.


Success will be measured not just by the environmental efficiency of AI systems, but by their ability to deliver meaningful benefits to underserved communities and resource-constrained regions, ensuring that AI development truly serves people, planet, and prosperity in an integrated and sustainable manner. The collaborative partnerships, technical innovations, and policy frameworks discussed provide a roadmap for achieving these goals through coordinated international action.


Session transcript

Speaker 1

And this is what we will explore at this event. To introduce the topic, we will first have two distinguished speakers. First, I have the honor to welcome Mrs. Anne Le Hénanff, France’s Minister Delegate for AI and Digitalization Affairs. Welcome, Madam Minister.

Anne Le Hénanff

Excellencies, distinguished guests, ladies and gentlemen, it’s an honor to address you at Smaller Footprint, Bigger Impact, co-organized by France, UNESCO, and the Sustainable AI Coalition. This event is a continuation of the work co-chaired by India and France in preparation of this AI Impact Summit, putting resiliency, sustainability and efficiency at the heart of the global agenda. The question we face is no longer how can AI work for us, but how can we ensure AI works efficiently, responsibly and fairly for people and for our planet. Resilient and sustainable AI is the key to unlocking digital transformation, environmental protection and inclusive development. Sustainable AI is not an option, it’s an imperative. First, it’s an energy and environment imperative as governments decarbonize.

AI’s energy demands threaten to outpace green energy progress. Model providers face a stark reality: AI’s energy needs are growing faster than supply. Second, it’s a fairness crisis. Massive AI models without sustainability create new divides and can exclude regions and communities lacking resources. That is why France, at the AI Action Summit, made sustainable AI a priority through the Sustainable AI Coalition, launched with UNEP, ITU and India as founding members. Our goal? Leverage AI to solve environmental challenges without exceeding planetary boundaries. From 90 initial partners, we have grown to over 220, including tech firms, startups, utilities, NGOs, and research institutions, backed by eight international organizations and 15 countries, with the Netherlands joining this year.

Sustainable AI is now a global priority, embedded in the UN Global Digital Compact and a UN Environment Assembly resolution. To turn vision into action, we focus on three pillars. First, research. In 2026, the coalition will launch AI research pitch sessions to connect university projects with funding and industry partners. Second, measurement. You can’t improve what you can’t measure. Today, I’m proud to announce, on behalf of the coalition, ITU, the Institute of Electrical and Electronics Engineers and ISO, that we published the second version of the global approach on standardization for AI environmental sustainability, to promote consistency in AI environmental sustainability standardization. And third, action. France is implementing policies for low-carbon, efficient AI, powered by renewable energy, hosted in green data centers and designed to be leaner and smarter. This approach boosts competitiveness and discovery with minimal environmental costs. That’s why, as an AI Impact Summit outcome, India, France and UNESCO launched the Resilient AI Challenge, a global challenge to advance compressed, more energy-efficient AI models.

This initiative supports innovation aligned with our shared goals. Sustainable and resilient AI must be the global baseline, the only path to equitable development that serves people and the planet. France and India have led this effort from Paris to New Delhi by focusing on people, planet and progress. Now we must deliver together. I look forward to our panelists’ insights and now invite the session to continue. Thank you.

Speaker 1

Thank you. Many thanks, Madam Minister, for this insightful introduction and for the pioneering role of France in sustainable AI. I now have the pleasure to welcome Dr. Tawfik Jelassi, Assistant Director General for the Communication and Technology Sector at UNESCO, whose landmark report on smaller models was published in July last year. Thank you.

Dr. Tawfik Jelassi

Madam Minister for AI and Digital Affairs, Madam Special Envoy for AI, distinguished participants, esteemed colleagues, dear partners, ladies and gentlemen. I’m very pleased, on behalf of UNESCO, to be with you this afternoon for this important session. But allow me first to raise a question. What if the next breakthrough in AI is not about building ever-larger models, but about building leaner, more resilient systems, systems that can solve real-world problems under real-world constraints, including in low-resource environments? Before turning to the Resilient AI Challenge, I would like to warmly thank the government of India for its leadership in convening this timely, strategic, and forward-looking summit.

I also would like to acknowledge the co-chairs of the Working Group on Resilience, Innovation, and Efficiency, the Ministry of Power of India and the Ministry of Ecological Transition of France, for their strong commitment, engagement, and stewardship. My sincere thanks also go to our technical and ecosystem partners, including Mistral, Google, Hugging Face, AI Kosh, Sarvam AI, and the broader Sustainable AI Coalition, alongside many academic experts who have contributed to this collective effort. UNESCO is proud to serve as a key knowledge partner for this initiative and to support the vision of India regarding AI that truly serves the people, the planet and prosperity. I would like to convey briefly three messages. First, the future of AI will not be defined by scale alone, but rather by resilience.

Second, resource-efficient AI is not a trade-off. It is a path to inclusion and access. Thirdly, delivering impact at scale requires global collaboration that is truly grounded in real-world validation. We are at a critical inflection point. Generative AI tools are now used by more than 1 billion people on a daily basis. Yet, behind every prompt lies a growing energy and resource footprint. Inference already amounts to hundreds of gigawatt hours per year, and this is comparable to the annual electricity use of millions of people in low-income countries. Training frontier models is even more energy intensive. A single large AI model can consume over 1,000 megawatt hours of electricity, enough to power villages across India for a whole year, placing increasing pressure on energy systems and reinforcing inequalities in access to compute and infrastructure.

These challenges are not theoretical. They are real. They directly affect whether AI can be deployed in public services, by small and medium-sized enterprises, in rural health systems and in low-connectivity environments, both in developing countries and in advanced economies facing growing energy constraints.

This is why the next breakthrough in AI will not come from building ever-larger models. It will come from building smarter, leaner, and more resilient systems that can deliver impact under energy constraints rather than exacerbate them. A proverb says, a good life is for everyone. It captures the spirit of living well together, in community, inclusively, and in harmony with our planet. In the same spirit, AI must be designed not only for those with the greatest computing power, but for all communities, everywhere around the world. The work of UNESCO shows that small but conscious design choices, such as model compression, task-specific architectures, and optimized inference, can reduce AI energy consumption by up to 90% without compromising performance.

Resilient AI is therefore not only greener, it is more inclusive, more affordable, and more adaptable. It lowers barriers for researchers, empowers local ecosystems, and enables AI solutions to reach communities too often left at the margins of the digital transformation. This brings me to why we are here today. It is my pleasure to officially announce the launch of the Resilient AI Challenge, which is a flagship initiative under the India AI Impact Summit Working Group on Resilience, Innovation, and Efficiency. This challenge moves us decisively from principles to action. It brings together model providers, researchers, startups, and academic teams to demonstrate how open-source AI models can be optimized, compressed, and deployed to achieve strong performance while significantly reducing the use of energy.

Rather than comparing entirely different models, the challenge focuses on improving one base model per task, ensuring transparency, fairness, and rigorous benchmarking. Submissions will be evaluated on shared infrastructure and ranked on both accuracy and energy efficiency, generating clear and actionable evidence. The winners of the challenge will be announced at the AI for Good Summit this coming July in Geneva, but the real success will be, of course, much broader than that.

Speaker 1

Thank you. Before we delve into the panel, I will invite the keynote speaker and the panelists to come up front for a picture, now that we have the final line-up, and then we will start the panel. Thank you very much. So now let me welcome our distinguished panelists and Mrs. Anne Bouverot, Special Envoy on AI for France and moderator of this panel, to discuss how to make these models work and deploy in real life to the benefit of all. Thank you so much.

Anne Bouverot

Thank you very much, Hélène. Thanks for the two keynote speeches that we just had. Without further ado, I think what we want is to head into the discussion, so I will not make long introductions. I’m delighted to welcome our distinguished guests: James Manyika, Senior Vice President, Google Alphabet; Arthur Mensch, CEO of Mistral AI; Abhishek Singh, lead organizer of this summit. A round of applause for him, please. Thank you. And Ambassador Philip Thigo, Ambassador and Tech Envoy for Kenya. Thank you. So the AI industry, according to the International Energy Agency, will probably consume 3% of worldwide electricity production by 2030. This is not the end of the world, but this is a huge expansion.

And therefore, there are environmental costs and impacts that we need to mitigate. AI, of course, at the same time also creates opportunities to optimize resources, including energy. So how can we ensure that AI’s development, in particular in developing countries but everywhere as well, comes together with a focus on the planet? I’ll start with a question for Ambassador Philip Thigo. Let me turn to you first. You’re an active proponent of more efficient and sustainable AI.

Africa is one of the most energy-constrained regions. It’s also a continent where adoption is becoming very frequent. We saw that with mobile phone payment. We saw that with other technologies. How is Kenya approaching efficient AI? What can you share with us?

Ambassador Philip Thigo

Thank you so much. And I’ll be very quick because I can see the ticker. There are a couple of things. One is that we’re very lucky as a country that our energy mix is already 95 percent renewable, and we keep on investing into that. So we have geothermal, we have wind, we have water, we have solar, and we have hydro. So that’s the first kind of framework that we have: it really must be green by design. The second part, of course, is that where the green comes in, it’s not necessarily only the efficient data centers or how energy efficient they are, but also the use of it. So part of our green by design is also a wide-scale education around how people use these resources.

For example, you shouldn’t be looking for the next Starbucks when you’re using AI; you should really be using Google search as an option. So people need to have those choices in their heads by design. The third part, of course, is that protecting Kenya alone is not enough. You can put a green shield around the country, but AI is global. So the third part, quickly, is working within the international framework. As you know, we worked with the Coalition for Sustainable AI to champion the first-ever resolution on AI environmental sustainability, and it had four parts: the energy, the life cycle, the sustainability piece, but also improving the state of the science to continue to understand the energy efficiency component of AI.

Anne Bouverot

Excellent. Thank you so much. And we’ll try to keep this lively. My next question will be for James, for James Manyika. Google is one of the key players, of course Mistral as well and Hugging Face, but you’re a key player in publishing transparent data on the environmental impact of AI. And you develop both very large frontier models and also smaller, very efficient models: the Gemini and the Gemma. Thank you. So I’ll start with the Gemini family. From a business and an engineering standpoint, I think it’s a very interesting family. Where is the real frontier? Is it scaling up or scaling down?

James Manyika

Well, thank you. Pleasure to be here at the summit with you, Anne. I think just to get to the question, we’re actually looking at this on multiple fronts. On the one hand, if you look at, for example, our Gemini models, it’s not one model. We have a whole model family, which starts with the Gemini Pro, goes to the Gemini Flash models, which are some of the most efficient models. So we’re trying to make sure with our models, our Gemini family, we cover the performance efficiency frontier of these models. You may have noticed that recently no one really talks a lot about model size. Remember, two, three years ago… It used to be the big craze.

It used to be the big question: how many parameters? And that’s because even with our Pro models, we’re now pursuing these mixture-of-experts architectures, where a query doesn’t activate the entire model. No one activates dense models anymore; people are activating mixtures of experts. So on the Gemini models, we’re trying to cover the performance and efficiency frontier. Then we also have our Gemma models. Our Gemma models are our most efficient open-source, open-weights models. In fact, here in India, on AI Kosh, which is the platform in India, we actually have 23 Gemma models on there. And that’s because we’ve optimized them for different sizes.

Some of them are efficient and run on a single GPU, because we know that for needs on the edge, people want a variety of model choices. I’ll say two more things very quickly. Every year we focus on efficiency, because from an energy point of view, from a compute-efficiency point of view, even from a business standpoint, it’s the right thing to do. As you start to serve many more people, you want the most efficient systems. And one last thing: we are making probably the most investments of anybody into using green, clean energy for our compute.

In fact, we’ve made this audacious goal that at some point in the 2030 to 2035 era, we want to be 24-7 carbon free. So we’ve made investments in nuclear and in geothermal; we actually have several operational data centers running on geothermal. We’re using hydro, we’re using wind and solar. We’re trying to get to a point where all our energy use for our compute is carbon free. That’s our moonshot goal.

Anne Bouvreau

Excellent, thank you so much. I’d like to move to Arthur Mensch. Mistral is developing very large models, but is also very good at high-performance compact models. And I know your engineers, and you as a co-founder and CEO, also care strongly about the environmental impact of AI and what can be done there. So what can you share with us on that? With both your business and engineering experience, where does model efficiency have the highest return?

Arthur Mensch

So I would start with a couple of technical aspects. To James’ point, model size is indeed not the only thing we should be looking at. Effectively, we are using sparse mixtures of experts, because those are models which have a lot of parameters to store knowledge, but where you only activate 5% of them. That has been a key way of reducing the number of flops you do to generate one token, which is the one thing that matters for energy and therefore for carbon intensity; it’s one of the multipliers, actually. So the sparsity matters. The other thing that matters is the systems on top: the caching systems you can put in place, the way you manage the context so that you’re not reprocessing information. Beyond just releasing the model weights, which is something we’ve always done, we’re also heavy contributors to inference frameworks that are using more and more advanced technology to handle the caching systems, in a way that removes the wasteful computations we used to do. It’s an algorithmic problem, and it’s actually very interesting; it’s also a machine learning problem, because depending on the request you’re getting, you can route the request to a small model or to a large model. So to James’ point, it’s very important for any company doing models to have small models all the way up to large models, in particular because the large ones can be used to make specialized models afterwards. But I would say, if you look at the carbon footprint of artificial intelligence today, because most of the GPUs are currently being used for training, most of the weight comes from the fact that you have around 10 labs in the world training models that in the end look very similar.
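As a rough illustration of the sparsity Arthur describes, with only a small fraction of parameters active per token, here is a minimal top-k mixture-of-experts sketch. The sizes, router, and expert weights below are all invented for illustration and are not Mistral’s actual architecture:

```python
import math
import random

# Minimal sparse mixture-of-experts sketch (hypothetical sizes).
# N_EXPERTS experts exist, but only TOP_K run per token, so the
# floating-point work per token scales with TOP_K / N_EXPERTS.
N_EXPERTS, TOP_K, DIM = 8, 2, 4
random.seed(0)

# Each expert is a DIM x DIM weight matrix; the router maps a token
# embedding to one score per expert.
experts = [[[random.gauss(0, 1) for _ in range(DIM)] for _ in range(DIM)]
           for _ in range(N_EXPERTS)]
router = [[random.gauss(0, 1) for _ in range(N_EXPERTS)] for _ in range(DIM)]

def matvec(m, v):
    return [sum(w * x for w, x in zip(row, v)) for row in m]

def moe_forward(token):
    # Router scores -> pick the TOP_K highest-scoring experts for this token.
    scores = [sum(token[d] * router[d][e] for d in range(DIM))
              for e in range(N_EXPERTS)]
    top = sorted(range(N_EXPERTS), key=lambda e: scores[e], reverse=True)[:TOP_K]
    # Softmax over the selected experts only.
    exps = [math.exp(scores[e]) for e in top]
    gates = [x / sum(exps) for x in exps]
    # Weighted sum of the TOP_K expert outputs; the other experts never run.
    out = [0.0] * DIM
    for g, e in zip(gates, top):
        for d, y in enumerate(matvec(experts[e], token)):
            out[d] += g * y
    return out, top

out, used = moe_forward([0.5, -1.0, 0.3, 0.8])
print(f"experts used: {sorted(used)} of {N_EXPERTS} "
      f"({100 * TOP_K / N_EXPERTS:.0f}% of expert parameters active)")
```

With 2 of 8 experts active, only a quarter of the expert parameters do work for any given token, which is the flops-per-token reduction Arthur refers to.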

And so for us, if I look at our biggest leverage there, the fact that we’ve been open-sourcing models that are very large, and really open-sourcing our best models, has been a major way of reducing the externality cost you produce. We’re investing, and it costs a lot of carbon to actually train a model, but then we give it for free to everyone else. What that means is that people can build on top, and the cost is amortized. Suddenly you don’t have 10 companies training the same kind of models; the thing is out there and you don’t need to reinvest. So I think that’s the big part on the training front.

And today, training is the thing that takes most of the cost. Now, when it comes to our own approach to sustainability, and I think I agree with James, one of the multipliers is the carbon intensity of your energy. So there is a locality aspect to it. We’ve been building our data centers, and we’ve recently been training our models on our own hardware, which sits in France, and France is heavily nuclear, so the carbon intensity is low. (Also 95 percent? Yes, Philip, sorry. It’s not 95 percent, but still very good.) And in Sweden, where you have hydro. So choosing the locality is important, because it’s one of the multipliers you want to optimize for.

And finally, one more thing to consider: model size is one thing, carbon intensity is one thing, and then chips are another thing. So being able to use a diversity of chips is huge; it’s super important. And we are keen on using new kinds of chips that are much more efficient from an energy perspective. Now, to James’ point, I would add that the good thing about AI is that we are energy constrained, so efficiency is actually driven by business. Transparency is super important for us and matters for our customers, so we’ve done a very deep study of the carbon intensity of our training; we did it with Mistral Large 2, with third-party auditors and so on.

But the business is also driving it; it’s also a reason why we’re going toward more efficient models: we don’t have enough energy, we need things that run on smaller hardware, and it depends on the country as well. In the US the constraint is actually higher than in Europe, and I think it’s going to be very high in Africa and in India down the line as well. So it’s always good when business aligns with sustainability. And I think it would be valuable for public procurement in particular to put more pressure on sustainability as a way to accelerate the industry, because that raises the stakes and so pushes us toward more efficiency.
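The multipliers Arthur lists, flops per token, energy per flop, and the grid’s carbon intensity, compose multiplicatively, which is why locality matters so much. A back-of-envelope sketch, with all figures assumed purely for illustration:

```python
# Back-of-envelope sketch of the "multipliers" in the discussion:
# emissions = tokens * energy-per-token * grid carbon intensity.
# All numbers below are illustrative assumptions, not measured figures.
J_PER_KWH = 3.6e6  # joules in one kilowatt-hour

def inference_co2_grams(tokens, joules_per_token, grid_g_co2_per_kwh):
    kwh = tokens * joules_per_token / J_PER_KWH
    return kwh * grid_g_co2_per_kwh

# Same workload, two hypothetical grids (illustrative intensities):
tokens = 1e9   # one billion generated tokens
joules = 0.3   # assumed energy per token
for name, intensity in [("low-carbon grid (nuclear/hydro)", 50),
                        ("fossil-heavy grid", 500)]:
    g = inference_co2_grams(tokens, joules, intensity)
    print(f"{name}: {g / 1000:.1f} kg CO2")
```

With these assumed numbers the same billion tokens emit roughly ten times more on the fossil-heavy grid, which is exactly the locality lever he describes.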

Anne Bouvreau

Wonderful. Thank you so much. I think that was really interesting. Do you want to react quickly, James, before we go to Abhishek?

James Manyika

I was going to agree with Arthur, but I’ll maybe add a couple more components. One of the things that is also important in this conversation is what you actually apply AI to. So there’s a whole range of applications of AI that actually are helpful for sustainability, grid management, managing with the adaptation and effects of climate change. And we’re seeing a lot of those kinds of applications at scale in ways that make an enormous difference to the sustainability question.

Arthur Mensch

So adding to that, you have agriculture as well where you have a lot of leverage. You have material science and chemistry. So we work with vertical AI companies to try and make that happen.

Anne Bouvreau

Great to see this. Thank you. I think we’re having a very high-quality exchange in this panel. Abhishek, I’d like to move to you, and the microphone as well. Arthur actually introduced the fact that energy constraints are real, and they’re real in India, of course: you have such a large population and wide market, and also infrastructure constraints. How do you approach this? How does the AI mission in India approach this, and what are you doing on this front?

Abhishek Singh

…the AI factories, with the hope that ultimately this investment will pay off. But when we look at how it will pay off, it will come through inferencing. And when we do inferencing at scale, ultimately users will have to pay. So unless you focus on efficiency and sustainability, the actual ROI on the investments will not work out. It will be in the interest of everyone, and only those players will survive who ensure that per-token energy use is minimal. That will require innovation at multiple levels: in the algorithms, in how you do the inferencing, in how you use it. And therein the value of small language models will come in.

While it’s fashionable to go for a trillion-parameter model and more, ultimately, if you are building use cases in key sectors like healthcare, education, or agriculture, you’ll need to go with smaller models, which will consume less energy and cost less. So sustainability is a given. What we are doing in the IndiaAI Mission is, number one, we are not chasing the trillion-parameter models; we are not in the parameter game. Number two, we are not right now at the stage some companies are at: I don’t think any one of us is chasing AGI, which has been glamorized by some of the frontier AI labs.

We are trying to think about what solutions can be built using the current level of models available, which can solve societal problems in various sectors, to have real impact. And when we do that, the cost per inference, the cost per query, becomes material, because many of the public sector applications, especially in sectors like agriculture, healthcare, and education, will for some time have to be funded by government, which means taxpayers’ money. So we cannot be extravagant in doing that. So we are ensuring that the PUEs of data centers are lower, and ensuring grid efficiency: in fact, we are doing a project with the Ministry of Power, which I think finds a mention in the resilient inter… committee’s report also, wherein we are using AI to improve grid efficiency and reduce transmission and distribution losses. What we have found is that doing it smartly, using technology for it, brings down the T&D losses by almost 10 to 15%.

That’s again a big gain. So we’ll have to look at the entire ecosystem, right from what kind of chips you are using for what: if you are doing inferencing, do you need the high-end chip for that? Classifying it, and taking a sector-specific, use-case-based approach to designing your systems, will ultimately be where the game is. Those who are able to do that will build more sustainable systems, their cost per query will be lower, and they will survive. As government, we are trying to enable this, but ultimately I feel that business sense will ensure that sustainability comes in. It cannot be that we consume as much energy as we want, unmindful of the ramifications.

The funds and the VCs will pay only until a particular point. It cannot be forever.
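The cost-per-query and PUE levers Abhishek mentions can be sketched in one back-of-envelope calculation. The PUE values, tariff, and per-token energy below are assumptions for illustration, not IndiaAI figures:

```python
# Illustrative sketch of why PUE and per-token energy drive cost per query.
# All figures are assumptions for illustration.

def cost_per_query(tokens_per_query, joules_per_token, pue, tariff_per_kwh):
    it_kwh = tokens_per_query * joules_per_token / 3.6e6
    facility_kwh = it_kwh * pue  # PUE = total facility energy / IT energy
    return facility_kwh * tariff_per_kwh

# Same model and tariff, two hypothetical data centres:
for label, pue in [("efficient DC, PUE 1.1", 1.1),
                   ("older DC, PUE 1.8", 1.8)]:
    c = cost_per_query(tokens_per_query=500, joules_per_token=1.0,
                       pue=pue, tariff_per_kwh=8.0)  # assumed tariff per kWh
    print(f"{label}: {c:.5f} per query")
```

Each query is cheap, but at hundreds of millions of users the PUE multiplier compounds directly into the taxpayer-funded bill he describes.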

Anne Bouvreau

Excellent. Thank you. We’re unpacking a number of things: training versus inference and utilization; large models versus smaller models, where you ideally need the larger models, through open source, to be able to build the smaller ones; and how AI can loop back and help optimize. We’ve heard a number of super interesting things. Let me ask this question of everyone quickly. We also heard that business and commercial interests are aligned with the desire to make AI more sustainable, which is a very hopeful message, but what can governments and institutions do to further help?

Artur, you hinted at public procurement. Do you want to say a few more words on this?

Arthur Mensch

Yes, it’s one of the ways in which we can make sure that efficiency is favored. Again, I think the market can solve it, but it can be accelerated, and the faster we can go the better, because we’re really building a lot of electricity capacity for AI at the moment, so if we can make sure that efficiency is part of the requirements, that’s good. It’s worth noting that, for better or worse, being a generative AI company is turning into being a utility company, in that you’re basically turning electricity into tokens. It’s highly competitive, so the margins are getting thinner, which means things are also getting price sensitive; and when things get price sensitive, efficiency really matters.

So that’s going to be partially solved by the market, but it can be accelerated. And I’d say another way governments can lead is by sustaining open-source projects that go beyond the models. The inference path, what we call the agent harness, is also something that will eventually become a common good and can be used everywhere. So: good practices, and incentivizing research as well, because the domains of routing, of picking the right models, and of distillation do not require you to have thousands of GPUs. You can do efficient research there, so public research in that domain is very much possible, and we’d love to see more of it.

So I guess that’s the three things that I can mention.
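The routing idea Arthur mentions, sending easy requests to a small model and only hard ones to a large one, can be sketched naively. The model names, energy figures, and difficulty heuristic here are invented for illustration; real routers typically use a learned classifier:

```python
# Naive illustration of model routing: easy requests go to a small, cheap
# model; hard ones to a large model. All figures are invented for the sketch.
SMALL = {"name": "small-model", "joules_per_token": 0.2}
LARGE = {"name": "large-model", "joules_per_token": 4.0}

def looks_hard(prompt: str) -> bool:
    # Toy heuristic; a real router would use a learned difficulty classifier.
    return len(prompt.split()) > 30 or "prove" in prompt.lower()

def route(prompt: str) -> dict:
    return LARGE if looks_hard(prompt) else SMALL

requests = [
    "What's the capital of France?",
    "Prove that the sum of two even numbers is even.",
]
for r in requests:
    m = route(r)
    print(f"{m['name']}: {r!r}")
```

If most traffic is easy, the fleet’s average energy per token approaches the small model’s figure, which is the efficiency the routing research he calls for aims to capture.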

Anne Bouvreau

Wonderful. Thank you. James, do you want to add a few words on that?

James Manyika

Yeah, first of all, I agree with the three things that Arthur mentioned. I would add a couple more. One of the things that’s actually quite interesting is that the more government can incentivize and encourage the use of off-grid solutions, the better, because that takes the burden off the public infrastructure that affects citizens. For example, we’re spending a lot of time thinking about off-grid solar, off-grid wind, and geothermal. We’ve even invested in our own small modular reactors. And we’re also investing, to Arthur’s point, in breakthrough research. One of the most exciting areas, by the way, which is not as far away as people think, is actually fusion energy.

So we’ve made some of the biggest investments in fusion energy. And, by the way, AI is actually helping us make that progress, because one of the things you worry about with fusion energy is how you do what’s called plasma containment, where you actually hold and contain these high-energy particles. AI has actually helped us do that. So even the use of AI in breakthrough research like that is pretty important. I’ll say one other quick thing, because it reinforces something that Arthur, and actually the minister, said: inference is going to turn out to be the most important thing in many respects, far more than the training part of this.

And we’ve actually started to invest in that. For example, we have our own chips, TPUs; we use TPUs and GPUs. Among the TPUs, we’ve built some inference-specific TPUs, just for inference, to be able to do inference even more efficiently than you would typically do with a general kind of GPU.

Anne Bouvreau

Wonderful. Thank you. Ambassador Philip Tigo, what can you add? Maybe you can take the microphone from a neighbor, and then I’ll ask Abhishek to conclude.

Ambassador Philip Tigo

Very quickly, because a lot of the solutions are for developed economies; I think we have to be a little bit realistic about where emerging economies are. One, there’s a bigger question of sovereignty, and there are conversations around that, and there have to be trade-offs. Every country wants to have the entire stack in their country. So I think governments need to be very realistic about which parts of the stack they really want to keep in their country, especially if you have this AI-for-green and green-AI conversation. The second part is to look, especially in emerging economies, at sustainability across the stack.

So we may not necessarily have compute, but we have other parts of the stack. So how do you ensure that part of the training gets done there? The third part, I think, is to expand the definition of safety, because AI safety is very much about the models and not necessarily about the use and the potential harms to the environment. I’ve not seen that research. So there could be an expansion of research around AI safety that includes environmental concerns. The other quick one, of course, is that you can only know the environmental footprint from use cases, and it has to be specific. These are deep dives, and I have a sense people need to invest in deep dives.

When I look at food systems, that’s an entire food system, so there are potentially problems there if we do not have standards, which brings me to my last point: we really have to invest in the standards. We’ve seen that with other electronics, right? So we need to see that here, so that everybody knows the kind of environmental standards to apply, and that needs to be done at scale. Thank you so much.

Anne Bouvreau

Abhishek, what can governments do? You represent a government.

Abhishek Singh

Governments are doing… Every government is conscious of this. In India, in fact, we recently focused on the small modular reactors which James mentioned: we came out with a new policy under which the sector has been opened up for the private sector to invest as well. What we believe is that as inferencing needs go up (and in India, when we talk inferencing, we are talking inferencing at scale, say 100 or 200 million users in the first phase and ultimately 500 million and more), the back-end infrastructure we will need will be huge and will consume a lot of energy. So to reduce the load on the existing grid, we will need to think of off-grid solutions.

We will need to think of dedicated small modular reactors which can power the AI applications. The world over, what we are seeing is that as AI adoption goes up, energy costs go up. And if energy costs go up, ultimately for elected governments it doesn’t bode so well. So the entire strategy has to be thought through: how do we balance the need for more efficient and more intelligent AI solutions with the needs of sustainability and of reducing the carbon footprint, because we are also only a few years away from the 2030 Sustainable Development Goals. So ultimately we need to balance both: the need for more efficient AI and the need to reduce the impact on the environment.

Otherwise we solve one problem and create another. That’s again something governments are concerned about, and I think augmenting renewable energy sources, solar, wind, and nuclear, and eventually fusion, will be the way forward.

Anne Bouvreau

Yeah, thank you very much. This has been a fascinating discussion. We heard from all of the panelists that the environmental impact of AI is not an afterthought; it’s actually front and center. It’s part of the competitive advantage, part of what companies and governments think about. This is a very strong and positive message that I think we can all be reassured by. Let me just close by mentioning the Resilient AI Challenge that was mentioned at the beginning. Registrations close on March 15th, so please submit your solution. Please join me in thanking this wonderful panel. Thank you, everyone, for joining us today, and we really hope to see you engage in the Resilient AI Challenge.

It is a first at the international level, working on improving research on compressed models, one of the solutions and tools presented in the panel, so we really encourage you to register. Thank you so much to our panelists; another round of applause. Thank you.

Speaker 1

Speech speed

67 words per minute

Speech length

190 words

Speech time

168 seconds

Summit brings stakeholders to address sustainable AI

Explanation

The opening remarks highlight that the event gathers governments, industry and civil society to focus on sustainable AI. It is positioned as a continuation of joint work by India and France to shape the AI Impact Summit agenda.


Evidence

“First, I have the honor to welcome Mrs. Anne Le Henanf, France Minister Delegate for AI and Digitalization Affairs” [106]. “This event is a continuation of the work co‑chaired by India and France in preparation of this AI Impact Summit” [108].


Major discussion point

The necessity of sustainable and resilient AI


Topics

The enabling environment for digital development | Artificial intelligence


Anne Le Henanf

Speech speed

90 words per minute

Speech length

490 words

Speech time

325 seconds

Sustainable AI is an imperative

Explanation

Anne stresses that sustainable AI is no longer optional but essential due to rapidly rising energy demand and a fairness crisis. She calls for sustainable and resilient AI to become the global baseline.


Evidence

“Sustainable AI is not an option, it’s an imperative” [1]. “Sustainable and resilient AI must be the global baseline” [4]. “First, it’s an energy and environment imperative as governments decarbonize” [13].


Major discussion point

The necessity of sustainable and resilient AI


Topics

Environmental impacts | Artificial intelligence | The enabling environment for digital development


France implements low‑carbon AI policies and green data centres

Explanation

France is advancing low‑carbon AI through renewable‑powered compute and green data centres, linking policy to competitiveness and discovery. The Sustainable AI Coalition was launched to embed these principles globally.


Evidence

“France is implementing policies for low carbon efficient AI, powered by renewable energy hosted in green data centers and designed to be leaner and smarter” [11]. “That is why France, at the AI Action Summit, made sustainable AI a priority through the Sustainable AI Coalition” [26].


Major discussion point

Government policies and incentives for sustainable AI


Topics

The enabling environment for digital development | Environmental impacts


Unsustainable AI exacerbates fairness crisis and digital divides

Explanation

Anne points out that without sustainability, massive AI models deepen inequities, creating new divides that exclude resource‑constrained regions and communities.


Evidence

“Second, it’s a fairness crisis” [7]. “Massive AI models without sustainability create new divides and can exclude regions and communities lacking resources” [23].


Major discussion point

The necessity of sustainable and resilient AI


Topics

Human rights and the ethical dimensions of the information society | Closing all digital divides | Environmental impacts


Dr. Tafik Delassie

Speech speed

153 words per minute

Speech length

985 words

Speech time

385 seconds

Future AI should prioritize resilience over scale

Explanation

Delassie argues that the next breakthroughs will come from lean, resilient systems that operate under energy constraints rather than from ever larger models.


Evidence

“First, the future of AI will not be defined by scale alone, but rather by resilience” [16]. “It will come from building smarter, leaner, and more resilient systems that can deliver impact under energy constraints rather than exacerbate them” [17].


Major discussion point

The necessity of sustainable and resilient AI


Topics

Artificial intelligence | Environmental impacts


Model compression and task‑specific design can cut energy use by up to 90 %

Explanation

Using conscious design choices such as model compression, task‑specific architectures and optimized inference can slash AI energy consumption dramatically while preserving performance.


Evidence

“The work of UNESCO shows that small but conscious design choices, such as model compression, task‑specific architectures, and optimized inference can reduce AI energy consumption by up to 90 % without compromising performance” [33]. “It brings together model providers, researchers, startups, and academic teams to demonstrate how open‑source AI models can be optimized, compressed, and deployed to achieve strong performance while significantly reducing the use of energy” [34].


Major discussion point

Technical approaches for energy‑efficient AI


Topics

Artificial intelligence | Environmental impacts | Monitoring and measurement


Resilient AI Challenge benchmarks accuracy and energy use

Explanation

The Challenge moves from principles to action by requiring participants to improve a single base model per task, with transparent evaluation of both accuracy and energy efficiency.


Evidence

“It is my pleasure to officially announce the launch of the Resilient AI Challenge” [27]. “Rather than comparing entirely different models, the challenge focuses on improving one base model per task, ensuring transparency, fairness, and rigorous benchmarking” [61]. “Submissions will be evaluated on shared infrastructure and ranked on both accuracy and energy efficiency, generating clear and actionable evidence” [83].


Major discussion point

Standards, measurement, and collaborative initiatives


Topics

Artificial intelligence | Monitoring and measurement | The enabling environment for digital development


Anne Bouvreau

Speech speed

78 words per minute

Speech length

971 words

Speech time

738 seconds

AI projected to use 3 % of global electricity by 2030

Explanation

The panel cites IEA data that AI could consume a sizable share of worldwide electricity, underscoring the urgency of mitigation while also noting AI’s potential to optimize energy use.


Evidence

“the AI industry, according to the International Energy Agency, will probably consume 3 % of worldwide electricity production by 2030” [42]. “AI, of course, at the same time also creates opportunity to optimize resources, including energy” [12].


Major discussion point

Business alignment and market forces toward sustainability


Topics

Environmental impacts | Artificial intelligence | Monitoring and measurement


Ambassador Philip Tigo

Speech speed

206 words per minute

Speech length

583 words

Speech time

169 seconds

Kenya leverages 95 % renewable mix and green‑by‑design approach

Explanation

Kenya’s energy mix is already largely renewable, and its green‑by‑design policy includes widespread education on responsible resource use.


Evidence

“One is that we’re very lucky as a country that our energy mix is already 95 percent” [110]. “So part of our green by design is also kind of wide scale of education around how people use these resources” [112].


Major discussion point

Government policies and incentives for sustainable AI


Topics

The enabling environment for digital development | Environmental impacts


Call for sector‑specific standards and deep‑dive research on AI sustainability

Explanation

Philip urges the development of sector‑specific standards, deep‑dive assessments, and expanded safety research that incorporates environmental impact.


Evidence

“The energy, the life cycle, the sustainability piece, but also improving the state of the science to continue to understand the energy efficiency component of AI” [84]. “And these are deep dives, and I have a sense people need to invest in deep dives” [86]. “we really have to invest in the standards” [90].


Major discussion point

Standards, measurement, and collaborative initiatives


Topics

Artificial intelligence | Environmental impacts | Monitoring and measurement


James Manyika

Speech speed

176 words per minute

Speech length

827 words

Speech time

280 seconds

Gemini models use mixture‑of‑experts for efficiency and carbon‑free compute

Explanation

Google’s Gemini family combines mixture‑of‑experts architectures with a commitment to carbon‑free compute, delivering high performance while minimizing energy use.


Evidence

“We have a whole model family, which starts with the Gemini Pro, goes to the Gemini Flash models, which are some of the most efficient models” [43]. “We are trying to get to a point where all our energy uses for our compute is carbon free” [47]. “People are activating and reactivating our mixture of experts” [48].


Major discussion point

Technical approaches for energy‑efficient AI


Topics

Artificial intelligence | Environmental impacts | The enabling environment for digital development


Corporate investment in off‑grid renewables and fusion backs low‑carbon AI

Explanation

Google is making large investments in green and clean energy, including fusion research and its own small modular reactors, to power AI compute sustainably.


Evidence

“we are making probably extraordinary, probably the most investments of any… anybody into using green energy, clean energy for our energy, for our compute” [109]. “So we’ve made some of the biggest investments in fusion energy” [128]. “We’ve even built in our own small modular reactors” [118].


Major discussion point

Government policies and incentives for sustainable AI


Topics

Financial mechanisms | Environmental impacts | Artificial intelligence


Efficiency seen as business advantage; Google pursues carbon‑free compute and inference hardware

Explanation

Efficiency is driven by market pressures; Google responds by developing carbon‑free compute and inference‑specific TPUs to lower energy consumption while staying competitive.


Evidence

“Now to James’ point I would like to add the good thing about AI is that we are energy constrained and so suddenly it means that efficiency is actually driven by business” [36]. “We are trying to get to a point where all our energy uses for our compute is carbon free” [47]. “In TPUs, we’ve actually built some inference‑specific TPUs just for inference, to be able to do inference even more efficiently” [74].


Major discussion point

Business alignment and market forces toward sustainability


Topics

Artificial intelligence | Financial mechanisms | Environmental impacts


Arthur Mensch

Speech speed

183 words per minute

Speech length

1190 words

Speech time

389 seconds

Sparse MoE, caching, and open‑sourcing cut training carbon

Explanation

Using sparse mixture‑of‑experts where only a fraction of parameters are active, together with caching and routing, reduces compute waste. Open‑sourcing large models further lowers duplicated training emissions.


Evidence

“Effectively, we are using sparse mixture of experts because those are models which have a lot of parameters to store knowledge, but where you only activate 5 % of them” [56]. “And so for us, … open sourcing models that are very large … has been a major way of reducing the externality cost that you’re producing” [57].


Major discussion point

Technical approaches for energy‑efficient AI


Topics

Artificial intelligence | Environmental impacts
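
For readers unfamiliar with the technique Mensch describes, the sparse-activation idea can be illustrated with a minimal sketch. This is a toy model only, not Mistral's actual architecture: a gate scores all experts, but only the top-k run per input, so compute scales with k rather than with the total number of experts.

```python
import math
import random

random.seed(0)

DIM, N_EXPERTS, TOP_K = 8, 16, 2

def rand_matrix(rows, cols):
    return [[random.gauss(0, 1) / math.sqrt(rows) for _ in range(cols)]
            for _ in range(rows)]

def matvec(w, x):
    """Compute x @ w for a rows-by-cols matrix w (rows == len(x))."""
    return [sum(x[i] * w[i][j] for i in range(len(x)))
            for j in range(len(w[0]))]

experts = [rand_matrix(DIM, DIM) for _ in range(N_EXPERTS)]  # one weight matrix each
gate_w = rand_matrix(DIM, N_EXPERTS)

def sparse_moe(x):
    """Route x to the TOP_K highest-scoring experts; the rest stay idle,
    so per-token compute scales with TOP_K, not N_EXPERTS."""
    scores = matvec(gate_w, x)                        # one gate score per expert
    topk = sorted(range(N_EXPERTS), key=scores.__getitem__)[-TOP_K:]
    weights = [math.exp(scores[i]) for i in topk]
    total = sum(weights)
    weights = [w / total for w in weights]            # softmax over chosen experts
    out = [0.0] * DIM
    for w, i in zip(weights, topk):
        h = [max(v, 0.0) for v in matvec(experts[i], x)]  # expert = ReLU(x @ W_i)
        out = [o + w * v for o, v in zip(out, h)]
    return out

y = sparse_moe([random.gauss(0, 1) for _ in range(DIM)])
print(len(y))  # 8; only 2 of 16 experts (about 12.5% of expert weights) ran
```

With 2 of 16 experts active, roughly 12.5% of expert parameters do work per token, which is the same order of sparsity as the "activate 5% of them" figure quoted above.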


Market and public procurement can accelerate efficiency

Explanation

While market forces naturally push toward efficiency, targeted acceleration—especially via public procurement—can speed up industry adoption of sustainable practices.


Evidence

“Again, I think the market can solve it, but it can be accelerated” [40]. “And I think it would be valuable for public procurement in particular to put more pressure on sustainability as a way to accelerate the industry” [91].


Major discussion point

Business alignment and market forces toward sustainability


Topics

Financial mechanisms | Artificial intelligence | Environmental impacts


Support open‑source and research on model distillation/routing to boost efficiency

Explanation

Incentivizing research on routing and distillation enables models that require far fewer GPUs, reducing carbon footprints while keeping performance high.


Evidence

“And so good practices, incentivizing research as well, because the domain of routing, picking the right models, the domain of distillation, those models do not require you to have thousands of GPUs” [53]. “Because we’re investing and it costs a lot of carbon to actually train a model, but then we give it for free to everyone else” [62].


Major discussion point

Standards, measurement, and collaborative initiatives


Topics

Artificial intelligence | Environmental impacts | Financial mechanisms
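
Distillation, one of the low-GPU research directions named here, can be sketched in a few lines. This is an illustrative toy, not any speaker's actual method: a small student is trained to match a larger teacher's temperature-softened output distribution, so the student inherits behavior without the teacher's parameter count.

```python
import math

def softmax(logits, T=1.0):
    """Softmax with temperature T; higher T flattens the distribution."""
    z = [l / T for l in logits]
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def distillation_loss(teacher_logits, student_logits, T=2.0):
    """Cross-entropy between the teacher's and student's softened
    distributions. T > 1 exposes the teacher's relative preferences
    among non-top classes; the T*T factor rescales the gradient."""
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return -sum(pi * math.log(qi) for pi, qi in zip(p, q)) * T * T

teacher = [4.0, 1.0, 0.5]          # hypothetical teacher logits
good_student = [3.8, 1.1, 0.4]     # mimics the teacher's ranking
bad_student = [0.0, 0.0, 0.0]      # uniform, has learned nothing
print(distillation_loss(teacher, good_student) <
      distillation_loss(teacher, bad_student))  # True
```

Minimizing this loss over training data pushes a compact student toward the teacher's behavior, which is why distillation research does not itself require thousands of GPUs once a teacher exists.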


A

Abhishek Singh

Speech speed

193 words per minute

Speech length

907 words

Speech time

281 seconds

India emphasizes small, sector‑specific models for lower inference energy

Explanation

India’s strategy focuses on deploying smaller, sector-tailored models that consume less energy and cost less to operate, rather than pursuing trillion-parameter models.


Evidence

“While it’s fashionable to go for a trillion parameter model and more, but ultimately if you are building use cases in key sectors… you’ll need to go through smaller models which will be consuming less energy and which will be able to cost less” [68]. “So classifying it, having a very sector‑specific application, specific use case basis approach for designing your systems will ultimately be where the game is” [70].


Major discussion point

Technical approaches for energy‑efficient AI


Topics

Artificial intelligence | Environmental impacts | The enabling environment for digital development


India policy opens AI sector to private investment and focuses on inference efficiency

Explanation

A new policy opens the AI sector to private investors and explicitly prioritizes inference efficiency and off-grid power solutions to meet SDG commitments.


Evidence

“In India, … we came out with a new policy under which the sector has been opened up for the private sector also to invest” [66]. “While it’s fashionable to go for a trillion parameter model… you’ll need to go through smaller models which will be consuming less energy” [68]. “We will need to think of dedicated small modular reactors, which can power the AI factories” [116].


Major discussion point

Government policies and incentives for sustainable AI


Topics

The enabling environment for digital development | Environmental impacts | Financial mechanisms


Agreements

Agreement points

Business interests naturally align with sustainability goals in AI development

Speakers

– Arthur Mensch
– Abhishek Singh
– James Manyika

Arguments

Business interests align with sustainability as efficiency becomes crucial for competitive advantage


Cost per inference becomes material for public sector applications funded by taxpayer money


Google covers the performance-efficiency frontier with Gemini family models and uses mixture of experts architectures


Summary

All three speakers agree that economic incentives drive companies and governments toward more efficient AI systems, making sustainability a business necessity rather than just an environmental concern


Topics

Artificial intelligence | Environmental impacts | The digital economy


Technical approaches can dramatically reduce AI energy consumption without sacrificing performance

Speakers

– Dr. Tafik Delassie
– Arthur Mensch
– James Manyika

Arguments

Small but conscious design choices can reduce AI energy consumption by up to 90% without compromising performance


Sparse mixture of experts models activate only 5% of parameters, reducing computational requirements


Google covers the performance-efficiency frontier with Gemini family models and uses mixture of experts architectures


Summary

There is strong consensus that specific technical solutions like model compression, mixture of experts architectures, and optimized inference can achieve significant energy savings while maintaining AI performance


Topics

Artificial intelligence | Environmental impacts


Open source collaboration is essential for sustainable AI development

Speakers

– Dr. Tafik Delassie
– Arthur Mensch

Arguments

The Resilient AI Challenge focuses on optimizing open-source models for energy efficiency while maintaining performance


Open sourcing large models reduces carbon externalities by preventing multiple companies from training similar models


Summary

Both speakers emphasize that open source approaches prevent duplication of effort and carbon-intensive training while enabling broader access to efficient AI models


Topics

Artificial intelligence | Environmental impacts | The enabling environment for digital development


AI can contribute to environmental solutions while consuming energy

Speakers

– James Manyika
– Arthur Mensch
– Abhishek Singh

Arguments

AI applications in grid management, climate adaptation, agriculture, and material science provide sustainability benefits


Public research should focus on model routing and distillation techniques that don’t require massive GPU resources


AI can help optimize grid efficiency, reducing transmission and distribution losses by 10-15%


Summary

All speakers agree that AI’s environmental impact should be viewed holistically, considering both its energy consumption and its potential to optimize other systems and solve environmental challenges


Topics

Artificial intelligence | Environmental impacts | Social and economic development


Governments should actively promote sustainable AI through policy and procurement

Speakers

– Arthur Mensch
– Ambassador Philip Tigo
– Abhishek Singh

Arguments

Governments should include sustainability criteria in public procurement to accelerate industry efficiency


Environmental standards need to be developed and implemented at scale across the AI industry


India is developing off-grid solutions and small modular reactors to reduce load on existing grid


Summary

There is consensus that government intervention through procurement policies, standards, and infrastructure investment can accelerate the adoption of sustainable AI practices


Topics

The enabling environment for digital development | Environmental impacts | Artificial intelligence


Similar viewpoints

Both speakers emphasize that the current trajectory of AI development focused on scale is unsustainable and needs to shift toward resilience and efficiency

Speakers

– Anne Le Henanf
– Dr. Tafik Delassie

Arguments

AI’s energy demands threaten to outpace green energy progress, creating an energy and environment crisis


The future of AI will be defined by resilience rather than scale alone


Topics

Artificial intelligence | Environmental impacts


All three speakers agree that sustainable AI is fundamentally about inclusion and ensuring that AI benefits reach underserved communities and developing countries

Speakers

– Anne Le Henanf
– Dr. Tafik Delassie
– Ambassador Philip Tigo

Arguments

Massive AI models without sustainability create new divides and exclude resource-lacking regions


Resource-efficient AI enables deployment in public services, rural health systems, and low-connectivity environments


Emerging economies need realistic approaches to AI sovereignty and should focus on specific parts of the technology stack


Topics

Closing all digital divides | Artificial intelligence | Environmental impacts


All three speakers emphasize the critical importance of clean energy sources for AI infrastructure, including nuclear, renewable, and off-grid solutions

Speakers

– James Manyika
– Arthur Mensch
– Abhishek Singh

Arguments

Google invests in nuclear, geothermal, hydro, wind and solar energy with a goal of 24-7 carbon-free operations


Training location matters – France uses nuclear energy and Sweden uses hydro for lower carbon intensity


India is developing off-grid solutions and small modular reactors to reduce load on existing grid


Topics

Environmental impacts | Artificial intelligence | The enabling environment for digital development


Unexpected consensus

Nuclear energy as a solution for AI’s energy needs

Speakers

– James Manyika
– Abhishek Singh

Arguments

Google invests in nuclear, geothermal, hydro, wind and solar energy with a goal of 24-7 carbon-free operations


India is developing off-grid solutions and small modular reactors to reduce load on existing grid


Explanation

It’s notable that both a major tech company executive and a government official from different countries independently highlighted nuclear energy, including small modular reactors, as a key solution for AI’s energy demands. This suggests nuclear is gaining acceptance as a clean energy solution for AI infrastructure


Topics

Environmental impacts | Artificial intelligence | The enabling environment for digital development


AI safety should include environmental considerations

Speakers

– Ambassador Philip Tigo

Arguments

AI safety definitions should expand to include environmental concerns beyond just model safety


Explanation

This represents an unexpected broadening of the AI safety discourse beyond traditional concerns about model behavior to include environmental impacts, suggesting a more holistic view of AI risks is emerging


Topics

Building confidence and security in the use of ICTs | Environmental impacts | Artificial intelligence


User education is as important as technical efficiency

Speakers

– Ambassador Philip Tigo

Arguments

Education about efficient AI usage is crucial – users should make informed choices about when to use AI versus traditional search


Explanation

While most discussion focused on technical and policy solutions, the emphasis on user education and behavioral change as a sustainability strategy was unexpected but represents an important dimension often overlooked in technical discussions


Topics

Capacity development | Environmental impacts | Artificial intelligence


Overall assessment

Summary

There is remarkably strong consensus among all speakers that sustainable AI development is both necessary and achievable through technical innovation, policy intervention, and international collaboration. Key areas of agreement include the alignment of business incentives with sustainability goals, the effectiveness of technical approaches like model compression and mixture of experts, the importance of open source collaboration, and the need for government leadership through procurement and standards.


Consensus level

Very high level of consensus with no significant disagreements identified. This strong alignment suggests that sustainable AI has moved from being a niche concern to a mainstream priority across industry, government, and international organizations. The implications are positive for coordinated global action on sustainable AI development, as stakeholders appear aligned on both the problems and potential solutions.


Differences

Different viewpoints

Approach to AI sovereignty and technology stack control

Speakers

– Ambassador Philip Tigo
– Abhishek Singh

Arguments

Emerging economies need realistic approaches to AI sovereignty and should focus on specific parts of the technology stack


India focuses on smaller models for specific use cases rather than chasing trillion-parameter models


Summary

Ambassador Tigo argues that countries need to be realistic about which parts of the AI stack they can control domestically and make strategic trade-offs, while Singh describes India’s approach of focusing on practical applications without chasing large models, suggesting different philosophies about national AI development strategies


Topics

Artificial intelligence | The enabling environment for digital development | Closing all digital divides


Definition and scope of AI safety

Speakers

– Ambassador Philip Tigo

Arguments

AI safety definitions should expand to include environmental concerns beyond just model safety


Summary

Ambassador Tigo uniquely argues for expanding AI safety definitions to include environmental impacts, while other speakers focus on technical efficiency and sustainability without explicitly challenging current AI safety frameworks


Topics

Building confidence and security in the use of ICTs | Environmental impacts | Artificial intelligence


Unexpected differences

Role of business incentives versus government intervention

Speakers

– Arthur Mensch
– Ambassador Philip Tigo

Arguments

Business interests align with sustainability as efficiency becomes crucial for competitive advantage


Environmental standards need to be developed and implemented at scale across the AI industry


Explanation

While both speakers support sustainability, Mensch expresses confidence that market forces will naturally drive efficiency due to competitive pressures, whereas Tigo emphasizes the need for regulatory standards, suggesting different levels of trust in market-driven solutions


Topics

The digital economy | The enabling environment for digital development | Environmental impacts


Overall assessment

Summary

The discussion shows remarkably high consensus on the importance of sustainable AI development, with disagreements primarily focused on implementation strategies rather than fundamental goals. Key areas of difference include approaches to AI sovereignty for developing countries, the optimal balance between market forces and regulatory intervention, and specific technical strategies for achieving efficiency.


Disagreement level

Low to moderate disagreement level with high strategic alignment. The speakers demonstrate strong consensus on core principles (sustainability is essential, efficiency drives competitiveness, international cooperation is needed) but differ on tactical approaches. This suggests a mature discussion where fundamental disagreements have been resolved, leaving room for productive debate on implementation details. The implications are positive for sustainable AI development, as the alignment on goals provides a strong foundation for collaborative action despite tactical differences.


Partial agreements

All speakers agree on the need for efficient AI models but disagree on the optimal approach – Mensch emphasizes open sourcing to reduce redundant training, Manyika focuses on creating model families with different efficiency profiles, while Singh advocates for avoiding large models entirely in favor of smaller, task-specific solutions

Speakers

– Arthur Mensch
– James Manyika
– Abhishek Singh

Arguments

Open sourcing large models reduces carbon externalities by preventing multiple companies from training similar models


Google covers the performance-efficiency frontier with Gemini family models and uses mixture of experts architectures


India focuses on smaller models for specific use cases rather than chasing trillion-parameter models


Topics

Artificial intelligence | Environmental impacts | The enabling environment for digital development


All speakers agree on the importance of clean energy for AI infrastructure but pursue different strategies – Mensch focuses on strategic geographic placement, Manyika on comprehensive renewable energy investments, and Singh on off-grid solutions and small modular reactors

Speakers

– Arthur Mensch
– James Manyika
– Abhishek Singh

Arguments

Training location matters – France uses nuclear energy and Sweden uses hydro for lower carbon intensity


Google invests in nuclear, geothermal, hydro, wind and solar energy with a goal of 24-7 carbon-free operations


India is developing off-grid solutions and small modular reactors to reduce load on existing grid


Topics

Environmental impacts | Infrastructure and energy solutions | The enabling environment for digital development


Both speakers agree on the need for government intervention to drive sustainability but differ in approach – Mensch focuses on procurement policies as market incentives, while Tigo emphasizes the need for comprehensive industry-wide environmental standards

Speakers

– Arthur Mensch
– Ambassador Philip Tigo

Arguments

Governments should include sustainability criteria in public procurement to accelerate industry efficiency


Environmental standards need to be developed and implemented at scale across the AI industry


Topics

The enabling environment for digital development | Environmental impacts | Artificial intelligence




Takeaways

Key takeaways

Sustainable AI is now a global imperative, not an option, due to AI’s rapidly growing energy demands that threaten to outpace green energy progress


The future of AI will be defined by resilience and efficiency rather than scale alone, with business interests naturally aligning with sustainability goals due to energy constraints


Small, optimized AI models can reduce energy consumption by up to 90% without compromising performance, making AI more inclusive and accessible to resource-constrained regions


Open source approaches significantly reduce carbon externalities by preventing multiple companies from training similar large models


AI can create positive environmental impact through applications in grid management, agriculture, climate adaptation, and material science


Energy efficiency is becoming a competitive advantage as AI companies essentially become utility companies converting electricity into tokens


Government policy should include sustainability criteria in procurement and support off-grid renewable energy solutions for AI infrastructure


Resolutions and action items

Launch of the Resilient AI Challenge with registration deadline of March 15th to advance compressed, energy-efficient AI models


Publication of the second version of global approach on standardization for AI environmental sustainability by ITU, IEEE, and ESO


Coalition will launch AI research pitch sessions in 2026 to connect university projects with funding and industry partners


Winners of the Resilient AI Challenge will be announced at the AI for Good Summit in July in Geneva


India to continue developing off-grid solutions and small modular reactors to reduce grid load


Continued investment in breakthrough research areas like fusion energy and inference-specific computing chips


Unresolved issues

How to balance AI sovereignty desires of emerging economies with practical sustainability constraints


Lack of comprehensive research on AI safety that includes environmental concerns beyond just model safety


Need for deep-dive studies on environmental footprints of specific AI use cases across different sectors


Development and implementation of industry-wide environmental standards for AI systems


How to effectively measure and standardize AI environmental impact across different applications and regions


Scaling sustainable AI solutions while meeting growing inference demands from billions of users


Suggested compromises

Emerging economies should be realistic about which parts of the AI technology stack to keep domestically versus leveraging international resources


Focus on sector-specific, use case-based approaches rather than pursuing general-purpose trillion-parameter models


Balance the need for efficient AI solutions with environmental sustainability goals and 2030 SDG commitments


Combine large frontier models with smaller, specialized models to optimize the performance-efficiency trade-off


Use AI for both direct applications and to optimize energy systems, creating a positive feedback loop for sustainability


Thought provoking comments

What if the next breakthrough in AI is not about building other larger models, but about building leaner, more resilient systems, systems that can solve whole world problems and real world constraints, including in low resource environments.

Speaker

Dr. Tafik Delassie


Reason

This comment fundamentally reframes the AI development paradigm from a ‘bigger is better’ mentality to efficiency-focused innovation. It challenges the prevailing industry narrative about scaling and introduces the concept that true advancement might come through constraint-driven design rather than resource abundance.


Impact

This comment set the philosophical foundation for the entire panel discussion. It shifted the conversation from defensive justifications of AI’s energy consumption to proactive discussions about how efficiency can drive innovation. All subsequent speakers referenced this efficiency-first mindset, with James Manyika noting that ‘no one really talks about model size anymore’ and Arthur Mensch emphasizing sparse architectures.


A single large AI model can consume over 1,000 megawatt hours of electricity, enough to power villages across India for a whole year, placing increasing pressure on energy systems and reinforcing inequalities in access to compute and infrastructure.

Speaker

Dr. Tafik Delassie


Reason

This stark comparison makes the abstract concept of AI energy consumption tangible and morally urgent by connecting it to real-world inequality. It transforms a technical discussion into an ethical imperative by highlighting how AI development could exacerbate global disparities.


Impact

This comment introduced the equity dimension to the sustainability discussion, which became a recurring theme. Ambassador Philip Tigo later built on this by discussing sovereignty and trade-offs for emerging economies, while Abhishek Singh emphasized the need for cost-effective solutions for public sector applications funded by taxpayers.


Being an AI company is turning into being a utility company, in that you’re basically turning electricity into tokens. It’s highly competitive, so that means the margins are getting thinner, and which means that things are also getting price sensitive, and so when it comes to being price sensitive, efficiency really matters.

Speaker

Arthur Mensch


Reason

This analogy brilliantly captures the commoditization of AI and explains why sustainability isn’t just an ethical choice but an economic necessity. It reframes AI companies as infrastructure providers rather than tech innovators, which has profound implications for how the industry should be regulated and operated.


Impact

This comment provided the economic logic that unified the panel’s arguments. It explained why business interests align with sustainability goals, supporting James Manyika’s investments in green energy and Abhishek Singh’s focus on cost-per-query optimization. It shifted the discussion from ‘should we be sustainable?’ to ‘how do we compete through sustainability?’


We are not chasing the trillion parameter models. We are not in the parameter game… We are trying to think of what are the solutions which can be built by using current level of models which are available, which can solve societal problems in various sectors.

Speaker

Abhishek Singh


Reason

This represents a fundamentally different national AI strategy that prioritizes practical impact over technological prestige. It challenges the assumption that countries must compete in the ‘AI arms race’ and instead proposes a more pragmatic, application-focused approach.


Impact

This comment introduced a concrete alternative to the frontier model race, demonstrating how developing nations can participate meaningfully in AI without massive infrastructure investments. It influenced the discussion toward sector-specific applications and validated the panel’s focus on efficiency over scale, while also highlighting the importance of the Resilient AI Challenge for countries following this approach.


Every country wants to have the entire stack in their country. So I think governments need to be very realistic around which parts of the stack they really want to keep in their country, especially if you have this AI for green and green AI conversation.

Speaker

Ambassador Philip Tigo


Reason

This comment introduces the complex reality of AI sovereignty versus sustainability trade-offs that developing nations face. It challenges the assumption that every country should or can develop complete AI capabilities domestically and suggests strategic choices are necessary.


Impact

This comment added geopolitical nuance to the technical discussion, highlighting that sustainability strategies must account for national sovereignty concerns. It complemented Abhishek Singh’s practical approach by acknowledging the political realities that shape AI development strategies, and influenced the conversation toward collaborative rather than competitive approaches to AI development.


Overall assessment

These key comments fundamentally transformed what could have been a superficial discussion about ‘green AI’ into a sophisticated analysis of how efficiency drives innovation, equity, and economic competitiveness. Dr. Delassie’s opening reframe established efficiency as the new frontier, while his inequality comparison added moral urgency. Arthur Mensch’s utility company analogy provided the economic logic that unified all arguments, explaining why sustainability is inevitable rather than optional. Abhishek Singh’s rejection of the parameter race offered a concrete alternative strategy, while Ambassador Tigo’s sovereignty concerns added necessary geopolitical realism. Together, these comments created a comprehensive framework showing that sustainable AI isn’t just environmentally responsible—it’s the most practical path to inclusive, economically viable AI development. The discussion evolved from defensive justifications to proactive strategies, from technical concerns to systemic solutions, and from competitive dynamics to collaborative imperatives.


Follow-up questions

How can AI be deployed in public services, small and medium-sized enterprises, rural health systems and low connectivity environments in developing countries and advanced economies facing energy constraints?

Speaker

Dr. Tafik Delassie


Explanation

This addresses the practical deployment challenges of AI in resource-constrained environments, which is crucial for ensuring equitable access to AI benefits globally.


How can we measure and standardize AI environmental sustainability across different implementations and use cases?

Speaker

Anne Le Henanf


Explanation

The Minister emphasized that ‘you can’t improve what you can’t measure’ and announced the second version of global standardization approaches, indicating ongoing need for better measurement frameworks.


What are the specific environmental impacts and carbon footprints of AI use cases across different sectors like food systems?

Speaker

Ambassador Philip Tigo


Explanation

He emphasized the need for deep dives into specific use cases to understand environmental footprints, noting that research on AI safety including environmental concerns is lacking.


How can AI safety frameworks be expanded to include environmental concerns beyond just model safety?

Speaker

Ambassador Philip Tigo


Explanation

Current AI safety research focuses primarily on models rather than environmental harms from use, representing a gap in comprehensive safety assessment.


What are the optimal strategies for emerging economies to balance AI sovereignty with sustainability constraints?

Speaker

Ambassador Philip Tigo


Explanation

He noted that every country wants the entire AI stack domestically, but this creates trade-offs with sustainability goals that need realistic government approaches.


How can public procurement policies be designed to incentivize AI efficiency and sustainability?

Speaker

Arthur Mensch


Explanation

He suggested that government procurement requirements could accelerate market-driven efficiency improvements by making sustainability part of the procurement criteria.


What research is needed in model routing, distillation, and agent harnessing that doesn’t require massive computational resources?

Speaker

Arthur Mensch


Explanation

He identified these as areas where efficient research can be conducted without thousands of GPUs, making them accessible for public research institutions.


How can off-grid energy solutions (solar, wind, geothermal, small modular reactors) be optimized specifically for AI infrastructure?

Speaker

James Manyika and Abhishek Singh


Explanation

Both speakers emphasized the importance of off-grid solutions to reduce burden on public infrastructure, with specific mention of fusion energy research and small modular reactors for AI applications.


What are the optimal approaches for using AI to improve grid efficiency and reduce transmission and distribution losses at scale?

Speaker

Abhishek Singh


Explanation

He mentioned a specific project showing 10-15% reduction in T&D losses, indicating potential for broader research and implementation of AI for grid optimization.


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.