Open Forum #27: Make Your AI Greener, a Workshop on Sustainable AI Solutions

26 Jun 2025 10:15h - 11:15h


Session at a glance

Summary

This panel discussion focused on the intersection of artificial intelligence and environmental sustainability, exploring how AI can both contribute to and help solve climate challenges. The moderator introduced the central paradox that while AI offers opportunities to address environmental risks, it simultaneously contributes to the problem through high energy and water consumption.


Mario Nobile from Italy’s digital agency outlined his country’s four-pillar AI strategy emphasizing education, research, public administration, and enterprise applications. He highlighted efforts to transition from energy-intensive large models to smaller, vertical domain-specific models for sectors like manufacturing, health, and tourism. Marco Zennaro presented TinyML technology, which runs machine learning on extremely small, low-power devices costing under $10, enabling AI applications in remote areas without internet connectivity. He shared examples from a global network of 60 universities developing applications like disease detection in livestock and anemia screening in remote villages.


Adham Abouzied from Boston Consulting Group emphasized the value of open-source solutions, citing a Harvard study estimating that recreating today’s open-source intellectual property just once would cost $4 billion, but that, without open source, every user recreating or licensing it separately would drive the total to $8 trillion. He advocated for system-level cooperation and data sharing across value chains to optimize AI’s impact on sectors like energy. Ioanna Ntinou discussed her work on the Raido project, demonstrating how knowledge distillation techniques reduced model parameters by 60% while maintaining accuracy in energy forecasting applications.


Mark Gachara from Mozilla Foundation highlighted the importance of measuring environmental impact, referencing projects like Code Carbon that help developers optimize their code’s energy consumption. The panelists agreed that sustainable AI requires active incentivization rather than emerging by default, emphasizing the need for transparency in energy reporting, open-source collaboration, and policies that promote smaller, task-specific models over large general-purpose ones. The discussion concluded that achieving sustainable AI requires coordinated efforts across education, policy, procurement, and international cooperation to balance AI’s benefits with its environmental costs.


Key points

## Major Discussion Points:


– **AI’s dual role in climate challenges**: The discussion explored how AI presents both opportunities to address environmental risks (through optimization and efficiency) and contributes to the problem through high energy and water consumption, creating a complex “foster opportunity while mitigating risk” scenario.


– **Small-scale and efficient AI solutions**: Multiple panelists emphasized moving away from large, energy-intensive models toward smaller, task-specific AI applications, including TinyML devices that cost under $10, consume minimal power, and can operate without internet connectivity in remote areas.


– **Open source collaboration and data sharing**: The conversation highlighted how open source approaches can significantly reduce costs and energy consumption by avoiding duplication of effort; a cited Harvard study estimated that recreating today’s open-source intellectual property once would cost $4 billion, whereas having every user recreate or license it separately would cost $8 trillion.


– **Policy frameworks and governance models**: Panelists discussed the need for comprehensive governance including procurement policies, regulatory frameworks, transparency requirements, and incentive structures (both “carrots and sticks”) to promote sustainable AI development and deployment.


– **Transparency and measurement imperatives**: A key theme was the critical need for transparency in AI energy consumption, with calls for standardized reporting of energy use at the prompt level and better assessment tools to enable informed decision-making by developers, policymakers, and users.


## Overall Purpose:


The discussion aimed to explore practical solutions for balancing AI’s potential benefits in addressing climate change with its environmental costs, focusing on how different stakeholders (governments, researchers, civil society, and industry) can collaborate to develop and implement more sustainable AI technologies and governance frameworks.


## Overall Tone:


The discussion maintained a consistently optimistic and solution-oriented tone throughout. While acknowledging the serious challenges posed by AI’s environmental impact, panelists focused on concrete examples of successful implementations, practical policy recommendations, and collaborative approaches. The moderator set an upbeat tone from the beginning by emphasizing hope and “walking the talk” rather than dwelling on problems, and this constructive atmosphere persisted as panelists shared specific use cases, technical solutions, and policy frameworks that are already showing positive results.


Speakers

– **Mario Nobile** – Director General of the Agency for Digital Italy (AGID)


– **Leona Verdadero** – UNESCO colleague, in charge of the report “Smarter, Smaller, Stronger: Resource-Efficient Generative AI and the Future of Digital Transformation”


– **Marco Zennaro** – Edge AI expert at the Abdus Salam International Centre for Theoretical Physics, also works for UNESCO, works extensively on TinyML and energy-efficient AI applications


– **Audience** – Jan Lublinski from DW Academy in Media Development under Deutsche Welle, former science journalist


– **Adham Abouzied** – Managing Director and Partner at Boston Consulting Group, works at the intersection of AI, climate resilience and digital innovation with focus on open source AI solutions


– **Moderator** – Panel moderator (role/title not specified)


– **Ioanna Ntinou** – Postdoctoral researcher in computer vision and machine learning at Queen Mary University of London, works with Raido project focusing on reliable and optimized AI


– **Mark Gachara** – Senior advisor, Global Public Policy Engagements at the Mozilla Foundation


Additional speakers:


None identified beyond the provided speaker list.


Full session report

# AI and Environmental Sustainability Panel Discussion Report


## Introduction


This panel discussion examined the intersection between artificial intelligence and environmental sustainability, addressing what the moderator described as a central paradox: while AI offers opportunities to tackle climate challenges, it simultaneously contributes to environmental problems through substantial energy and water consumption. The moderator emphasized the need to move beyond theoretical discussions toward practical solutions, setting a solution-oriented tone for the conversation.


The panel brought together perspectives from government policy, academia, civil society, and industry to explore how different stakeholders can collaborate on more sustainable AI technologies and governance frameworks.


## National Strategy and Policy Frameworks


### Italy’s Four-Pillar Approach


Mario Nobile, Director General of the Agency for Digital Italy (AGID), outlined Italy’s comprehensive AI governance strategy built on four pillars: education, scientific research, public administration, and enterprise applications. He emphasized that the contemporary debate has evolved, stating: “Now the debate is not humans versus machines. Now the debate is about who understands and uses managed AI versus who don’t.”


Nobile highlighted Italy’s substantial financial commitment, with 69 billion euros allocated for ecological transition and 13 billion euros for business digitalization from the National Recovery Plan. He noted the challenge of implementing guidelines across Italy’s 23,000 public administrations and described plans for tax credit frameworks to incentivize small and medium enterprises to adopt AI technologies.


A key element of Italy’s strategy involves transitioning from large, energy-intensive models to what Nobile called “vertical and agile foundation models” designed for specific sectors including manufacturing, health, transportation, and tourism. Italy’s Open Innovation Framework enables new procurement methods beyond conventional tenders for public administration, addressing regulatory challenges around data governance and cloud usage.


## Technical Solutions and Innovations


### TinyML: Small-Scale AI Applications


Marco Zennaro from the Abdus Salam International Centre for Theoretical Physics presented TinyML (Tiny Machine Learning) as an approach to sustainable AI. This technology enables machine learning on small, low-power devices costing under $10, consuming minimal energy, and operating without internet connectivity.


Zennaro described a global network of over 60 universities across 32 countries developing practical TinyML applications, including:


– Foot-and-mouth disease detection in cows in Zimbabwe


– Bee population monitoring in Kenya


– Anemia screening in remote villages in Peru


– Turtle behavior analysis for conservation in Argentina


He posed a fundamental question: “Do we always need these super wide models that can answer every question we have? Or is it better to focus on models that solve specific issues which are useful for SDGs or for humanity in general?”


The TinyML approach emphasizes regional collaboration and south-to-south knowledge sharing, with investment in local capacity building using open curricula developed with global partners.
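Fitting a model into a device with a few kilobytes of memory usually relies on post-training quantization of the weights, most commonly to 8-bit integers. A minimal sketch of that idea in numpy (the layer shape, the symmetric int8 scheme, and all numbers here are illustrative assumptions, not any specific TinyML deployment):

```python
import numpy as np

rng = np.random.default_rng(42)

# A toy trained layer: 32-bit float weights for a small dense layer.
w_fp32 = rng.standard_normal((16, 8)).astype(np.float32)

# Symmetric post-training quantization to int8: keep one float scale per tensor.
scale = np.abs(w_fp32).max() / 127.0
w_int8 = np.clip(np.round(w_fp32 / scale), -127, 127).astype(np.int8)

# On-device inference dequantizes on the fly (or stays in integer math).
x = rng.standard_normal(16).astype(np.float32)
y_fp32 = x @ w_fp32
y_int8 = (x @ w_int8.astype(np.float32)) * scale

print("memory:", w_fp32.nbytes, "->", w_int8.nbytes, "bytes")  # prints 512 -> 128
print("max output error:", float(np.abs(y_fp32 - y_int8).max()))
```

The stored weights shrink fourfold at the cost of a small, bounded output error, which task-specific models of the kind Zennaro describes can typically tolerate.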


### Model Optimization Research


Ioanna Ntinou from Queen Mary University of London presented research from the Raido project demonstrating how knowledge distillation techniques can achieve significant efficiency gains. Her work showed that model parameters could be reduced by 60% while maintaining accuracy in energy forecasting applications.


Ntinou raised concerns about current evaluation frameworks, noting: “If we are measuring everything by accuracy, and sometimes we neglect the cost that comes with accuracy, we might consume way more energy than what is actually needed.” She emphasized that sustainable AI development requires active intervention and new success metrics that balance performance with environmental impact.
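The distillation recipe behind these results can be illustrated with a deliberately tiny numpy stand-in: a “teacher” with many Fourier features fit to a toy daily-demand series, and a “student” with far fewer parameters trained on a blend of the teacher’s predictions and the raw measurements. The models, the `alpha` blend weight, and the data below are all hypothetical stand-ins; the project’s real pilot used large time-series networks.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy hourly "demand" series: daily seasonality plus noise.
hours = np.arange(0, 24 * 14)  # two weeks of hourly readings
demand = 5 + 2 * np.sin(2 * np.pi * hours / 24) + 0.1 * rng.standard_normal(hours.size)

def features(t, k):
    """Constant plus k sine/cosine harmonics of the 24-hour cycle."""
    cols = [np.ones_like(t, dtype=float)]
    for i in range(1, k + 1):
        cols.append(np.sin(2 * np.pi * i * t / 24))
        cols.append(np.cos(2 * np.pi * i * t / 24))
    return np.stack(cols, axis=1)

# "Teacher": an over-sized model (25 parameters) fit to the measurements.
X_big = features(hours, 12)
w_teacher = np.linalg.lstsq(X_big, demand, rcond=None)[0]
teacher_pred = X_big @ w_teacher

# "Student": 5 parameters, trained on a blend of the teacher's
# predictions (soft targets) and the real measurements.
alpha = 0.7  # weight on the teacher's predictions (hypothetical choice)
targets = alpha * teacher_pred + (1 - alpha) * demand
X_small = features(hours, 2)
w_student = np.linalg.lstsq(X_small, targets, rcond=None)[0]
student_pred = X_small @ w_student

def rmse(a, b):
    return float(np.sqrt(np.mean((a - b) ** 2)))

print("parameters:", w_teacher.size, "->", w_student.size)
print("teacher RMSE:", rmse(teacher_pred, demand), "student RMSE:", rmse(student_pred, demand))
```

The student ends up far smaller while tracking the demand signal nearly as well, which is the trade-off Ntinou describes: comparable accuracy at a fraction of the parameter count and energy cost.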


## Open Source Collaboration and Economic Perspectives


### The Economics of Shared Development


Adham Abouzied from Boston Consulting Group presented economic evidence for open source collaboration, citing research estimating that recreating the existing stock of open source intellectual property once would cost $4 billion, while having every user recreate or license it separately would cost $8 trillion. He argued that “open source solutions optimize energy costs by avoiding repetitive development.”


Abouzied emphasized that meaningful AI applications in sectors like energy require system-level cooperation and data sharing across value chains, noting that “AI breakthroughs came from training on wealth of internet data; for vertical impact, models need to see and train on data across value chains.”


## Civil Society and Environmental Justice


### Community-Centered Approaches


Mark Gachara from Mozilla Foundation brought environmental justice perspectives to the discussion, emphasizing that “the theater of where the most impact of climate is in the global south and it would be a farmer, it would be indigenous and local communities.”


Gachara shared a specific example from Kilifi, Kenya, where communities used AI tools for ecological mapping to advocate against a proposed nuclear reactor project. He highlighted how civil society can leverage AI for evidence generation while ensuring that communities most affected by climate change have agency in developing responses.


He referenced tools like Code Carbon that help developers measure their code’s energy consumption, demonstrating how transparency can drive behavioral change at the individual developer level.


## Transparency and Measurement Challenges


### The Need for Energy Reporting


A recurring theme was the urgent need for transparency in AI energy consumption reporting. Ntinou highlighted that the current lack of visibility into energy usage per prompt or model interaction makes it impossible to develop effective legislation or make informed decisions about AI deployment.


The panelists agreed that assessment of energy usage in widely used public models is essential before focusing on regulatory frameworks. Without standardized evaluation methods and transparent reporting, stakeholders cannot compare the environmental impact of different AI systems.
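The arithmetic behind per-prompt energy reporting is not the hard part; the missing inputs are. A back-of-envelope sketch (both the power draw and the latency below are illustrative assumptions, not measured values for any real model):

```python
# Back-of-envelope energy per prompt: average power draw x serving latency.
power_watts = 400.0  # hypothetical accelerator draw while serving (assumption)
latency_s = 2.0      # hypothetical time to answer one prompt (assumption)

joules = power_watts * latency_s  # 800 J
watt_hours = joules / 3600.0      # ~0.22 Wh
print(f"{joules:.0f} J per prompt = {watt_hours:.2f} Wh")
```

The point the panelists make is that operators, not outsiders, hold the real values of these two inputs, which is why standardized disclosure is a precondition for any comparison or regulation.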


An audience question from Jan Lublinski raised the possibility of carbon trading mechanisms for AI, asking whether transparency in energy consumption could enable market-based solutions for reducing AI’s environmental impact.


## UNESCO Report Preview


Leona Verdadero participated online to provide a preview of UNESCO’s forthcoming report on resource-efficient generative AI. While details were limited in this discussion, she indicated the report would provide evidence about optimized AI models and their potential for improved efficiency.


## Key Themes and Future Directions


### Emerging Consensus


The discussion revealed broad agreement across speakers on several key principles:


– The value of smaller, task-specific AI models over large general-purpose systems


– The critical importance of transparency in energy consumption measurement


– The benefits of open source and collaborative approaches


– The need for comprehensive policy frameworks that balance innovation with sustainability


### Implementation Challenges


Several challenges remain unresolved, including the fundamental tension between AI model accuracy and energy consumption when accuracy remains the primary success metric. Regulatory challenges around data governance and cross-border data transfer continue to limit AI implementation, particularly in emerging economies.


The lack of transparency in energy consumption reporting for widely used AI models represents a critical gap that must be addressed before effective policy interventions can be developed.


## Conclusion


The discussion demonstrated that achieving sustainable AI requires coordinated efforts across education, policy, procurement, and international cooperation. The panelists agreed that sustainable AI will not emerge by default and requires active incentivization through both regulatory frameworks and market mechanisms.


The conversation revealed convergence toward practical, implementable solutions emphasizing efficiency over scale, transparency in measurement, open collaboration, and equitable access. The path forward involves not just technical innovation but fundamental changes in how success is measured and resources are allocated in AI development.


As the discussion highlighted, the solutions exist across multiple scales—from tiny, efficient models addressing specific local challenges to policy frameworks enabling sustainable AI adoption at national levels. What remains is coordinated implementation at the scale and speed required to address both AI’s environmental impact and its potential to help solve climate challenges.


Session transcript

Moderator: Good morning, everyone. A pleasure to be here with you all, in the audience, online, but also with this fantastic group of panelists. I’m sure if you read any of these reports about the most important issues of our times, you will certainly find the issues that we are discussing here today. Some reports will say governance of artificial intelligence is one of the most important issues we need to deal with. Certainly, climate change, many of these reports will say it’s one of the most important policy issues we have in front of us, as well as it’s one of the most threatening risks for humanity. Energy resources and scarce resources, and water is probably another of those issues that you’ll see in different reports here and there. The challenge for this gentleman and lady here is how to combine all those things together. And of course, with you, I hope we are going to have a very interesting conversation about that. So, I don’t want to use a clichĂ© here, but voilĂ , artificial intelligence offers, maybe needless to say, interesting opportunities for addressing some of these environmental risks, and we can speak about that, how the models can help us, and us as the climate scientists, as the policy makers, as the civil society, to solve this very complex equation on how we solve the planetary crisis that we are in the middle of, but tragically, or ironically, the same technology that can help us to address the issue is contributing to the problem, because it consumes a lot of energy, and water, and so on. So, again, as I said, it’s a clichĂ©, but our job is how we can foster the opportunity and mitigate the risk. And as usual, in these very complex lives we have right now, it’s easy to say, but not necessarily easy to do. The good news is that these people in this panel and online, they do have some interesting solutions to propose to you, and some of them are already implementing it. 
So, since I’m optimistic by nature, when my team asked me to moderate and I saw what they prepared, I was very happy that it was not only dark and terrible, but it was also about how we can actually walk the talk, right? So, this session is a lot about this, to have a dialogue with this group of different actors here that are dedicated to think about this problem, and how we can… and I think we can suggest and underline in the one hour we have some of these issues. Of course, let’s look into the implications of AI technologies for these problems. We will try to showcase some tools and frameworks. UNESCO with the University College of London, we are going to launch very soon a very exciting issue brief called Smarter, Smaller, Stronger, Resource Efficient Generative AI in the Future of Digital Transformation that my dear colleague Leona, who is online, unfortunately she couldn’t be here today also to reduce UNESCO’s carbon footprint of people travelling all over the world, but she is the person in charge of this report and you can ask further questions and interact with her on that. And also again, as I said, try to highlight the different approaches that we can take to address these issues. So I’m sure we are going to have an exciting panel and since we need to be expeditive, I will stop here and go straight to my panellists and let me start with you, Mario. So Mario Nobile is the Director General of the Agents for Digital Italy, AGIDS, I guess I say like that. Correct. And Mario, let’s first start on what you are already doing at the strategic level, right? So how your agency is trying to cope with these not easy challenges. Buongiorno, over to you.


Mario Nobile: Buongiorno, thank you and good morning to all. Our Italian strategy rests on four pillars. Education, first of all. Scientific Research, Public Administration and Enterprises. And these efforts aim to bridge the divide, ensuring inclusive growth and empowering individuals to thrive in an AI-driven economy. For the first, scientific research, our goal is promoting research and development of advanced AI technologies. You before mentioned technology evolves really fast. So now we are dealing with agentic AI. We started with large language models, then large multimodal models. Now there is a new frontier about small models for vertical domains. So we met also with Confindustria, which is the enterprises organization in Italy. And we are trying applications about manufacturing, health, tourism, transportation. And this is important for energy-consuming models. For public administration, we are trying to improve the efficiency and effectiveness of public services. For companies, Italy is the seventh country in the world for exports. So we must find a way to get to the application layer and to find concrete solutions for our enterprises. And education, first of all. I always say that now the debate is not humans versus machines. Now the debate is about who understands and uses managed AI versus who don’t. Okay. And in Italy, the AI strategy is, we wrote it in 2024, we have also a strategic planning, the three year plan for IT in public administration. And we emphasize the importance of boosting digital transition using AI in an ethical and inclusive way. This is important for us. We have three minutes, so I go to the conclusion. And we are stressing our universities about techniques like incremental and federated learning to reduce model size and computational resource demands. This is the first goal. 
And so this approach minimizes energy consumption and we are creating the conditions to transition from brute force models, large and energy consuming, to vertical and agile foundation models with specific purposes, health, transportation, tourism, manufacturing. This is the point. Now, I was saying technology evolves faster than a strategy. So we have a strategy, but we are dealing with agentic AI, which is another frontier.


Moderator: Thank you very much, Mario. And I’m glad to hear the conclusion in terms of what you are working with your universities. Because I mean, I’m just the international bureaucrat here, right? I don’t understand anything about these things. But I did read the report that my team prepared and they were making recommendations towards what you were saying. And one of my questions to them is, is this Marco Zennaro, Edge AI expert at the Abdus Salam International Centre for Theoretical Physics. He also works for UNESCO. My children, they always tell me, why don’t you do these kind of interesting things? Because they don’t understand what I do, right? Because maybe not even I, but people like Marco are the ones walking the talk in UNESCO. So Marco, you have worked extensively on TinyML and energy efficient AI applications, also in African contexts. So can you tell us, and especially for a person like me that is no political scientist, we don’t understand anything. So how you can understand the benefits of what you are doing?


Marco Zennaro: Sure, sure. Definitely. Thank you very much. So let me introduce TinyML first. So TinyML is about running machine learning models on really tiny devices, on really small devices. And when I say small, I say really small. So devices that have, you know, few kilobytes of memory, that have really slow processors, but they have two main advantages. The first one is that they’re extremely low power. So we’re talking about, you know, green AI, these devices consume very little power. And second advantage is that they’re extremely low cost. So one of these chips is less than a dollar, and one full device is about $10. So, you know, that’s, of course, very positive. And they allow AI or machine learning to run on the devices without the need of internet connection. So we heard, you know, during IGF that a third of the world is not connected. And if we want to have, you know, data from places that are remote, where there’s no internet connection, and we want to use AI or machine learning, that is a really good solution. So you ask about application. And so in 2020, together with colleagues from Harvard University and Columbia University, we created a network of people working on TinyML with a special focus on the Global South. Now we have more than 60 universities in 32 different countries. So we have many researchers working on TinyML in different environments. And they worked on very diverse applications. So just to cite a few, there’s colleagues in Zimbabwe that worked on TinyML for foot-and-mouth disease in cows. So, you know, sticking this device in the mouth of the cow and detecting the disease. There’s colleagues in Kenya working on counting the number of bees that you have in a beehive. There’s colleagues in Peru that use TinyML to detect anemia through the eyes in remote villages. We have colleagues in Argentina that use TinyML on turtles to understand how they behave. So very diverse applications. Many of them have impact on SDGs. 
And again, using low cost and extremely low power devices.


Moderator: Thank you. This is fascinating. As you said, we are having in this IGF, which I think is a good thing, lots of discussions regarding, I mean, in this UN language, how we do these things, leaving no one behind. And these are very concrete examples, right? Because it’s about the cost, it’s about being low intensive on energy. So very glad to hear that. And again, to see that those things are possible. And this offers us a bit of hope, because as I said in the beginning, I’m always concerned that we are only looking to the side of the problems and the terrible risks, but without looking into what is already happening to address the issues, right? So let me move now to have a different setting to our online guests. We have one speaker that is speaking from the internet space, from the digital world. And he is Adham Abouzied, and he is Managing Director and Partner at the Boston Consulting Group. Welcome, Adham, to this conversation. And I know that the BCG, the Boston Consulting Group, is very much concerned about these issues as well. You are putting a lot of effort on that. And you yourself have worked at the intersection of AI, climate resilience and digital innovation. And with a specific focus on the open source AI solutions. So how you can tell us in your three minutes the connection of these issues with the main topic of this panel, that of course is the relationship with sustainability, environmental sustainability. Welcome and over to you.


Adham Abouzied: Thank you very much. Honored and very happy to join this very interesting panel. And I must say I enjoyed so much the interventions of the panelists before me. I think you asked a very, very interesting question. I will start my answer by basically the results of a study that has been recently made by Harvard University, which is around the value of the open source wealth that exists on the ground today. The study basically estimates that if we would recreate only one time the wealth of open source intellectual property that is available today on the Internet, it would cost us $4 billion. But it does not stop there, because if you assume or if you imagine that this material. was not open source, and that every player who would want to use it would either recreate it or pay a license, then basically you would increase this $4 billion of cost to $8 trillion just because of the repetition. So in a sense, having something that is open source optimizes significantly the amount of work that is required to get the value out of these AI algorithms or digital technologies in a larger sense. So instead of doing the thing one time, you will have communities that are actually contributing and building on top of the, I would say, intellectual creation of each other. First, I mean, to be able to get to broader impact at a much lower cost and a much lower energy cost as well. 
Now, from experience also, and building specifically on what Mario is saying, for you to be able to implement vertically focused AI models that generate value across a certain system or a certain value chain, you need to have some kind of system level cooperation, which means that today all of the, I would say, AI breakthroughs that we have been seeing basically through very large, generic foundational models, it’s there because they were trained on the data that is available out there on the internet, the wealth of text, the wealth of images, the wealth of videos, and basically these models are good at this generic, generative, tasks because of what these models have been able to see and train on. But now for you to be able to have a meaningful impact with models that have a vertical focus and can create value across a certain system, they have to basically see and train on data and build on top of the decisions that are made across a value chain. Let me give a concrete example that relates to this basically your starting speech. Let’s take the energy sector today. If you want to have or if you’re seeking to have basically an AI development that rather than creating a burden on the energy system, actually it’s becoming more optimised, then you also need to allow AI to create value within the energy sector, basically help optimise the decisions from the generation to the transmission to the actual usage. And if I take this energy system as an example, basically currently in very few countries in the world, you can see basically cooperation, data exchange, IP exchange across the different steps of the value chain. 
This would be essential and if it is allowed through policies, through protocols, like when we first started the internet with TCP IP, then the vertical models can create much more significant value that would allow, as a big example, the energy sector to optimise its income and will allow the AI algorithms not only to consume less energy by themselves, but also to help the energy sector itself. I would say, optimise its output and become greener.


Moderator: Thank you. Very interesting. So at the end of the day, we need to move from the lose-lose game to the win-win game, right? And you mentioned some keywords that I’m sure Ioanna will talk about, optimisation, cooperation, so that’s a very interesting segue. Let me just make one remark on the open source and the openness, on the open solutions, that is very interesting. We witnessed the concrete global example during the pandemic, right? When the scientists decided to open, the vaccines were produced in a record time in human history. So here, mutatis mutandis, we need to use the same logic to fix this huge problem. So, Ioanna, it’s Ntinou, correct? It’s Ntinou. OK, sorry about that. So Ioanna is a postdoctoral researcher in computer vision and machine learning at Queen Mary University of London. And you work with the Raido project, which focuses on reliable and optimised AI, again, the word. So, again, can you walk us through the specific use case from the project where you managed to show this connection with the efficiency and so on? Over to you.


Ioanna Ntinou: Yes, thank you. I’m happy to be here. So, as you said, Raido stands for reliable AI in data optimisation. And what we’re trying to do is to be a bit more conscious when we develop AI models on the energy consumption that the models are going to use. And by the end of the project, we will have developed a platform that we are going to test against real-life use cases. We call them pilots. And they span different domains: robotics, healthcare, smart farming, and critical infrastructure. Today, I will focus on one specific domain, which is the energy grid, and we’re working with two companies from Greece. One is an energy company, the National Energy Company, which is called PPC, and the other is like a research center called CERF. So, what we have is an optimized model for day-ahead forecasting of energy demand and supply, particularly in smart homes that have a microgrid. And what we were given was a big time series model that would predict the energy demand of small electronic devices that we have in our house, including, let’s say, a bulb. The problem with this model was that it was a bit big. It was very good in accuracy, but it was quite big, and it would need quite often retraining because, as you know, when we calculate the energy consumption of a device, this is dependent on the seasonality, this is dependent on the different habits that someone can have in a house. And what we did was simply a knowledge distillation. We took this model and we used it as a teacher to train a smaller model to produce more or less the same results. It was quite close in accuracy, but we reduced the number of parameters by 60%. So, we got a much smaller model, we call it the student now, that has more or less the same performance, and we got several benefits from this process. First, we have a smaller model that consumes much smaller energy. 
Secondly, this model is easier to deploy in small devices, as we discussed before in the panel, right, which is very critical because then you can democratize the… access to AI to a lot of houses, right? Because if you have a small model, you can deploy it much easier. And also, by maintaining the accuracy, we somehow help these companies have a more accurate forecast of the energy demand that they will be around. And in that way, they can consume or they produce energy having this into their mind. So we are happy with this and we hope that we will help also the rest of the pilots get good models and have good results.
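[Editor's note] The teacher-student distillation step described above can be sketched in a few lines of code. This is a minimal illustration only, not the Raido implementation: the temperature, the example logits, and the loss form (KL divergence between the softened teacher and student outputs) are all assumptions made for the sketch.

```python
import math

def softmax(logits, temperature=1.0):
    # Higher temperature softens the distribution, exposing the teacher's
    # "dark knowledge" about relative class similarities
    exps = [math.exp(l / temperature) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    # KL divergence between the softened teacher and student distributions;
    # the student is trained to drive this toward zero
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

teacher = [2.0, 1.0, 0.1]   # illustrative teacher outputs for one input
student = [1.8, 1.1, 0.2]   # a student that already imitates the teacher closely
loss = distillation_loss(teacher, student)
print(loss)                 # small positive value: distributions nearly match
```

In a real forecasting pipeline this loss would be minimized over the training data, usually combined with the ordinary task loss, so the smaller student inherits the big model's behavior at a fraction of the parameter count.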


Moderator: Thank you. Fascinating. And so it's also a bit about evidence-based next steps, either for companies or for policymaking. I remember a few years ago, before the AI times but already in the open data environment, the energy ministry of a particular country launched a hackathon with open data so that people could help improve the efficiency of energy consumption in the public sector. And transparency always sheds light, sometimes in very funny ways. After the hackathon, when people had run the models, they found out that the ministry that consumed the most energy in that country was the Ministry of Energy and Environment, which was a big embarrassment for them. But then they actually implemented some very concrete policies to resolve these issues. So that's also interesting: we need to connect these conversations with the conversation on transparency and accountability. So, Mark, last but not least. Mark Gachara is a senior advisor, Global Public Policy Engagements, at the Mozilla Foundation. And we are going to get back, I guess, to the conversation about open source, because Mozilla has a long experience with that. And this mission of Mozilla…


Mark Gachara: We had a number of grantees think about this particular area. We had, for example, an organization from France called Code Carbon. And within this theme of environmental justice, they were thinking about: when I write code, how much energy am I using? Because at the end of the day, fossil fuels are used to generate the energy. So a developer can actually optimize their code, and it's an open source project that's available, that people can use before they harm the environment. These are just examples of practical use cases that are measuring, because in management we have been told that if you can't measure it, you can't manage it. Probably you also can't manage the risk. So I think being able to run this kind of action research, which can eventually talk to policy, sheds light and makes environmental justice part of a core definition of how we roll out AI solutions.
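[Editor's note] The "if you can't measure it, you can't manage it" point can be made concrete with a back-of-envelope calculation in the spirit of what tools like Code Carbon automate: multiply the energy a piece of code draws by the carbon intensity of the local electricity grid. The power draw and grid-intensity figures below are illustrative assumptions, not measured values and not Code Carbon's own numbers.

```python
import time

def estimate_emissions_g(run_seconds, avg_power_watts, grid_g_co2_per_kwh):
    # Energy consumed by the job in kWh, then grams of CO2
    # at the grid's assumed carbon intensity
    kwh = avg_power_watts * run_seconds / 3_600_000
    return kwh * grid_g_co2_per_kwh

start = time.perf_counter()
sum(i * i for i in range(1_000_000))   # stand-in for the code being profiled
elapsed = time.perf_counter() - start

# 50 W average draw and 400 gCO2/kWh are illustrative assumptions
print(estimate_emissions_g(elapsed, 50, 400))
```

Real measurement tools refine both factors, sampling actual CPU/GPU power and looking up regional grid intensity, but the structure of the estimate is the same, which is what lets a developer compare two versions of their code.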


Moderator: Thank you. Thank you. Super interesting, and if I hear you correctly there are two important issues. There is energy-efficient AI by design, right from the moment you are writing the code, which is an interesting story also from the human rights perspective, to have it by design; but also the capacity of having decent risk assessments for these things. So here's what's going to happen: we are going to have a second round here, a very quick ping-pong, and then we are going to open to you, so please start thinking about your questions to these fantastic people here. Mario, you are also the regulator, and regulators can use sticks, but they can also use carrots. So I guess my question for you is about the carrots.


Mario Nobile: Thank you. I'll try to connect some dots. I have three answers for this, but I want to connect them, and I fully agree with the other panelists, with Adham. The first one is education, and here I connect with Adham's answer. We are writing, with public consultation, guidelines for AI adoption, AI procurement, and AI development. So think about public administration: in Italy, we have 23,000 public administrations. Everyone must adopt, everyone must do procurement, and some of them must develop. I'm thinking of ministries like Environment and Energy, of course, but also the National Institute for Welfare. And what Adham was saying before about the interaction between cloud and edge computing is also important. But we have other challenges too, and I'm thinking about the energy model, about what the challenges are now for a good use of artificial intelligence: data quality and breaking the silos. These two points are really important for us. So the first answer is education, collaboration, sharing; new questions about what I can do with my data and my data quality, and what I can do with the cloud and edge services I can develop. The second one, which the Agency for Digital Italy is working on, is the Open Innovation Framework. It's about new applications, not the classic tender about buying something, but a framework that enables public administration to carry out planned procedures and facilitates supply and demand in a new way. The third one is money. So the carrot is money. In Italy we are using a big amount of money from the National Recovery and Resilience Plan: we have 69 billion euros for the ecological transition and 13 billion euros for business digitalization. Now we are working on a new framework for tax credits for enterprises, for a good start in using AI in small and medium enterprises. Thank you.


Moderator: This is fascinating, and finishing with the money always gives people hope. But I wanted to underline the aspect of procurement, because in 99% of the UN member states, the public sector is still the biggest single buyer. So if we can have decent procurement policies, that's already a lot, right? So congrats on that. Marco, on your side of the story, with your experience, can you tell us what the key enablers are here? What are the drivers? If someone needs to start looking into this, the policymakers and the scientists, what would you…


Marco Zennaro: I'm going to be very concrete and practical. The first one is investing in local capacity building in embedded AI. Capacity building has been mentioned many, many times in the last few days, but I would say the new aspect from my side is to give priority to funding for curricula that are co-developed with global partners. So not reinventing the wheel, but using existing, open curricula such as the one we developed with the TinyML Academic Network, which is completely open and can be reused. The second one is to promote open, low-cost infrastructure for TinyML deployment. People need to have these devices in their hands, and very often that's not easy. So subsidizing access to these open source TinyML tools and low-power hardware would be extremely useful, because it lowers the entry barriers and stimulates local innovation around these SDG-related challenges. The third one is to integrate TinyML into national digital and innovation strategies. When people design a strategy, please don't forget that you also have this kind of alternative model of really tiny devices running AI; that's a component that should be included. The next one is to fund context-aware pilot projects in key development sectors. We heard from Mozilla about funding pilots and testing new solutions. Well, that's possible for TinyML: we have seen many, many interesting applications, so funding even more would be extremely useful. And finally, facilitating regional collaboration and knowledge sharing. We had a few activities tailored for specific regions, with the idea that people from the same region have the same issues, and that has been extremely successful.
So supporting this south-to-south collaboration, and focusing on specific regions so that they can use TinyML to solve their common issues.


Moderator: Super, very interesting. So co-working on many levels, as you said. It's also interesting because I've been participating in several discussions about DPI, digital public infrastructure, or also public interest infrastructures, and to be honest, I have not been hearing a lot about what you are saying. So I think it's an interesting way to connect the dots if this can be more and more presented as a potential solution. So Adham, let me get back to you now. I know that from where you are sitting at the Boston Consulting Group, you are also looking at governance models for this issue. So what are the key features there, obviously in three minutes? It's always a challenge, but what can you tell us on the governance side of the story?


Adham Abouzied: Yeah, very clear. We've been mainly looking at how to inspire the right governance for a systems-level change, to push forward and accelerate the adoption of open source, data sharing, and intellectual-wealth sharing across different systems. And I think the most important thing is to have the right policies and the right sharing protocols across every industry, very well designed and well enforced, as you said before, with the carrot and the stick at the same time. It is also very important to make sure that the different players are actually incentivized to adopt and, as Mario was also saying earlier, to set up the right regulatory reference for them to adopt at their own level, and then afterwards to share the outcomes, the insights, the data with others. We have faced so many difficulties in so many sectors while trying to apply AI applications, whether generative or other techniques, under current regulation. The cloud is one: having proper data governance and data classification, understanding what is sensitive, what should be on the cloud, what should not, and under which layers of security. It is even worse in several countries, specifically emerging countries, that do not have hyperscalers or an actual physical stock of data centers implemented locally, and that have regulation against their data traveling across borders. That wouldn't even allow the prompts, the queries, to go up and query foundational models or other models sitting in the cloud somewhere outside the country. So regulating this, what type of prompts should actually cross the border and which ones shouldn't, and for which ones you maybe need to go for local alternatives, more focused and smaller models, maybe less efficient,
and starting there until the regulation evolves, is something that is very, very important. But after all, what is really important is to have certain rules across the players in a value chain about what they can share and in which format, and what the rights and responsibilities are that go with it as it moves through the value chain, and for them to believe that sharing is actually a win-win situation. It is not the opposite of a free market, of competitiveness and keeping your intellectual property and conserving your competitiveness; it actually contributes to it. It gives you a wealth of information, a layer on top of which you can develop a competitive edge.


Moderator: Thank you. Very, very interesting. And again, I guess, Marco was with me yesterday in another panel about AI, and the issue of standardization and rules appeared a lot, right? So we will need to deal with that sooner rather than later. So, Ioanna, as you noticed, the second round is a lot about lessons learned and insights for the different stakeholders. In your case, what would you tell policymakers from the perspective of what you are doing in Raido? What are the key lessons to share with them?


Ioanna Ntinou: As you said, I have more of a technical background, right? So I think that one of the key lessons I have learned from Raido, or from working as an AI developer, is that sustainable AI is not going to emerge by default. It needs to be incentivized and actively supported. And the reason that, even when I train a model, I will opt for a bigger model and for more data, is the way we actually measure success, which is by accuracy. If we measure everything by accuracy, and sometimes neglect the cost that comes with accuracy, we might consume way more energy than what is actually needed. In some cases, a small improvement comes with a huge increase in energy, and we have to be careful with this trade-off. And I think that one of the first steps we need to take is simply to assess how much energy is used in widely used public models. I will give you a concrete example. There is now GPT-4, or there is DeepSeek, which might have 70 billion parameters, and they consume massive amounts of electricity during training and during inference, but we don't really know how much energy is used when we put a simple prompt to GPT. We are not aware of this. So I think we should start by having some transparency in the way we report energy. Simply developing evaluation standards at the prompt level would be a great first step, because it would build awareness and transparency. And after this, we can see what can be done with it. Without knowing how much energy is actually used, I think we cannot focus on legislation, right?
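[Editor's note] The prompt-level energy reporting Ioanna calls for can be illustrated with a rough estimate: a common rule of thumb is about 2 FLOPs per model parameter per generated token, divided by the hardware's sustained FLOPs per joule. The parameter count, token count, and efficiency figure below are illustrative assumptions, not measurements of any real model or accelerator.

```python
def inference_energy_wh(params, tokens, flops_per_joule):
    # ~2 FLOPs per parameter per generated token (rule-of-thumb estimate)
    flops = 2 * params * tokens
    joules = flops / flops_per_joule
    return joules / 3600   # convert joules to watt-hours

# Hypothetical 70-billion-parameter model answering with 500 tokens,
# on hardware assumed to sustain 1e12 useful FLOPs per joule
print(round(inference_energy_wh(70e9, 500, 1e12), 3))   # ~0.019 Wh per prompt
```

Real per-prompt figures vary widely with batching, hardware utilization, and data-center overheads, which is exactly why standardized, transparent reporting would matter: without it, even an order-of-magnitude estimate like this cannot be checked.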


Moderator: Super. Of course, it's very important, because transparency generates the evidence that is needed, right? And it's a good segue for you, Mark, because the question for you was precisely how civil society can be even stronger in demanding more transparency and accountability from these companies.


Mark Gachara: Yeah, so I will answer the question in two parts. I will start with the things that we at Mozilla see are missing. Right now there's a lot of money going into climate-smart agriculture or climate AI solutions, but it's to make them more efficient, more effective, as opposed to asking how we mitigate the harms that they are causing, and Ioanna has just mentioned a little bit about that. So it is important, from our end, to think about how the solutions we build can be preventive, how they can improve transparency, and how they can make the impact on the environment more visible in the work that you are doing. And this could come through policy work, through research, and also by strengthening community work, strengthening the community and civil society organizations that are working with communities on the ground. I'll give an example of somebody we worked with last year, and this is ongoing. In Kilifi, Kenya, we have the Center for Justice, Governance and Environmental Action. They are working on climate justice issues, and there are plans in Kenya to build a nuclear reactor somewhere in Kilifi, in the sea. One of the things this CSO did was an ecological mapping of an area in the Indian Ocean, and they used AI to do it. They have come up with a report, and they are saying: we think this is ill-advised, and they have given the reasons why. We should actually be focusing on using renewable energy sources, because Kenya is a net producer of renewable energy. This is what civil society can do, for example. They are pushing back. It's still a work in progress right now, but you can generate evidence to be able to do advocacy. Unfortunately, once the ship has left the dock, civil society is left trying to push back, which is unfortunate. But again, I come back to the three things: how do we use this kind of evidence to advise policy?
How can we do action research? And how do we put money into prevention, into making these issues more transparent, so that taxpayers and government can actually quantify the issue in front of them? Thank you.


Moderator: Thank you. Very interesting. And also the need to do that in an unfortunately shrinking space for civil society, and with less funding for this accountability and transparency work, including for journalism. So now is the time for you. Questions? Leona, also in the online space, do we have questions for the panel? You have two mics here. I know that this setting looks like we are very distant, but we welcome your thoughts and questions. And Leona, from online, I don't know if you are hearing us.


Leona Verdadero: Yes, hi. No questions yet.


Moderator: So while maybe you guys are getting less shy, can I ask you, Leona, to be on the screen and give us a teaser of one minute about the report that we are going to launch soon about these issues?


Leona Verdadero: Yes. Hi. Good morning, everyone. Can you hear me? Yes. Okay, great. Great to see all of the panelists, and thank you so much for joining. Just to echo UNESCO's work, really our work on being a laboratory of ideas and on trying to push the envelope on actually defining what we mean by all these sustainable, low-resource, energy-efficient solutions. So yes, we've been doing this research in partnership with University College London, where we're doing experiments looking at how we can optimize the inference phase of AI. Inference, as other colleagues have mentioned, is when we are interacting with these systems. And I really love this conversation so far, because most of you have echoed the need to be more efficient and climate-conscious, and to look at AI in a smaller, stronger, smarter way. So here we're looking at energy-efficient techniques such as optimizing the models, making them smaller with quantization, distillation, all of these technical things. And what we are really seeing is that these different experiments we've done actually make the models smaller, more efficient, and better performing. What that means, especially for stakeholders working in low-resource settings, where you have limited access to compute and infrastructure, is that these types of models become more accessible to you. So it's really also part of answering the question: what type of model, what type of AI do we need for the right type of job? We're also trying to demystify the thought that bigger is better. It's not necessarily better, right? That's what you're seeing now: all these large language models are power-hungry and water-thirsty. So here's where we're trying to merge two things together.
One is very technical research; the other is how we're able to translate that in a policy setting, what it means for policymakers thinking about developing, deploying, or procuring AI systems for their use cases. If we're able, by presenting evidence, to move the needle by promoting more eco-conscious AI choices and model usage, then I think that's one very concrete start for this bigger push to look very concretely at what it means to use smaller and smarter AI. And in the chat, we're going to put a sign-up link so all of you will be notified when we launch this report. Thank you.
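[Editor's note] One of the optimization techniques Leona mentions, quantization, can be quantified directly: storing weights as 8-bit integers instead of 32-bit floats cuts the raw weight memory by a factor of four. The 10-million-parameter model below is an arbitrary, illustrative figure.

```python
def model_size_mb(params, bytes_per_param):
    # Raw weight storage only, ignoring activations and runtime overhead
    return params * bytes_per_param / 1e6

params = 10_000_000                   # hypothetical 10M-parameter model
fp32_mb = model_size_mb(params, 4)    # 32-bit floats: 4 bytes per weight
int8_mb = model_size_mb(params, 1)    # 8-bit quantized weights: 1 byte each
print(fp32_mb, int8_mb, fp32_mb / int8_mb)   # 40.0 10.0 4.0
```

This is why quantized models fit on phones and low-resource hardware: the memory and bandwidth savings are fixed by arithmetic, while the accuracy cost of the lower precision is what the experiments Leona describes have to measure empirically.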


Moderator: Thank you, Leona, for the teaser, and I strongly recommend you see the report. By the way, it's also beautifully designed by one of our colleagues who is an expert in design. It tries to show, through cartoons and so on, in a user-friendly way, some of these issues that are sometimes complex to understand. Questions from here? Otherwise, I have Brent, Jan. Please take a mic.


Audience: Hello, my name is Jan Lublinski. I work for DW Academy, in media development under the roof of Deutsche Welle, but in my earlier life, 10 years ago, I was a science journalist, so I'm used to hearing new things. And I must say I'm thinking while I talk, because most of what you told us is new to me. So that's really exciting, and congratulations on the report and on putting together the panel the way you have done. I'm trying to remember other technological revolutions we've seen in the past and how they were made more transparent, like heating systems in houses. In Germany we have passes for every house where you can see how energy-efficient the house is, and it's obligatory in the EU, I think, and we can think of many other technologies that we made, step by step, more transparent, so that people could see what they mean and take conscious decisions. That's why I'm really glad, again, that you stress the transparency issue. But of course, the next thought I have is that we also trade carbon as a negative good, right? We force industry to become more efficient by giving them certificates for their carbon emissions. And I wonder, with everything you know so far about this topic: do you think we can take transparency on energy consumption so far that one day we can actually trade the right to consume energy with AI technologies, and in that way force industry to develop more small models and be really conscious of whether, as you say, we really need to take that last, very expensive step to get the extreme accuracy that we might not need? Is that not a way to advance in the direction that you all point to? Maybe our colleague from Boston Consulting Group has ideas on this, as he's looking at the overarching strategies. Thank you.


Moderator: Thank you. So we have five minutes and 39 seconds, so it's one minute per person to interact with Jan's question, and maybe already offer us one takeaway. And if you don't necessarily want to interact with Jan's question, I would ask you something related to what he said: what are the questions you think science journalists should be asking about this issue? So, Mario, over to you.


Mario Nobile: Well, in one minute it's very tricky, but I'll try. I think that the key words are related: sustainability calls for transparency, which calls for awareness, which calls for policies, including government policies. And I think the right question is not when, but now: we must think about sustainability, energy consumption, and AI's potential, including the potential for AI to displace jobs. Everything is related; we cannot think about energy consumption without thinking about job losses. So I think that journalists can ask about solutions for job losses, energy consumption, and awareness among people. It's very complicated. We would need two hours and not one minute.


Moderator: Thank you, that's interesting. And well, it's the job of the journalists to find the time to do it.


Marco Zennaro: Well, my kind of question would be: what kind of models do we need? In TinyML, of course, on these small devices, you can only have models that are very specific to an application: again, coffee leaves, or turtles. So my point is: do we always need these super-wide models that can answer every question we have? Or is it better to focus on models that solve specific issues which are useful for the SDGs, or for humanity in general?


Moderator: Small is beautiful, right? There was an article in The New Yorker a few years ago, fantastic, I recommend it; not about this, but about "small is beautiful" overall. Adham, your one minute.


Adham Abouzied: Yeah, I think it's very interesting. I would reiterate what I said: yes, smaller, focused models are something that would create significant value and would be much more optimized to get there. But they need access to data that is not necessarily out there today, and for this to happen, we need to have in place the policy and the governance that would make it happen, because there isn't yet a wealth of data out there that would allow this. And honestly, consuming the energy that is required for AI to deliver real value within the systems and value chains that help deliver on the SDGs and improve daily lives is a much more important use than consuming this energy for large models to create videos and images on the internet.


Moderator: Thank you. Ioana?


Ioanna Ntinou: I think my question, as a researcher, would be whether, if we focus so much on having smaller models, we neglect all the progress that has been made so far with large language models, all the revolution they have brought. But then I guess the answer to this is that task-specific small models still have great value, and I'm talking about actual value. Look at our mobile phones: the technology there is small models, because they are bounded by battery life and by the processor the phone has. Our phones are actually running small, task-specific models, meaning that there is still a lot to learn, beyond the energy part, in terms of science and knowledge and the things we can achieve as humans by pushing the boundaries of research. So I also think we should assign another type of value to this, which is the knowledge and the learning that come out of this process.


Moderator: Thank you. Mark?


Mark Gachara: Yeah, on my end, to respond a bit to the question that was asked: the theater where the most climate impact is felt is in the Global South, and it would be a farmer, it would be indigenous and local communities. So I come back and ask: how do we create funds that support grassroots organizers and indigenous communities to actually build some of these solutions together? It's already been said that the problems are really localized, so we can focus on local solutions, and we need to foster funding strategies that put money into research and into creating science that actually builds these climate solutions with these local communities. And procurement has already been mentioned, which was a good thing, public procurement. Thank you. Thank you so much.


Moderator: We come to the end of this fascinating discussion. I want to personally thank each one of you, because I have learned a lot, and I hope it was the same for the audience and those online. Thank you, Adham; thank you, Leona, for your online participation; and here, Mario and Marco and Ioanna and Mark; and all of you for listening to us attentively. Let's try to make this a greener world while still enjoying the benefits of this fantastic digital and AI revolution. Thank you so much. Enjoy the rest of the IGF. Thank you.



Mario Nobile

Speech speed

99 words per minute

Speech length

803 words

Speech time

482 seconds

Italy’s AI strategy rests on four pillars: education, scientific research, public administration, and enterprises to ensure inclusive growth

Explanation

Mario Nobile outlined Italy’s comprehensive approach to AI development that focuses on four key areas to bridge divides and empower individuals in an AI-driven economy. The strategy emphasizes inclusive growth and ensuring that AI benefits reach all sectors of society.


Evidence

The strategy includes promoting research and development of advanced AI technologies, improving efficiency of public services, supporting Italy as the seventh country in the world for exports, and emphasizing education where the debate is not humans versus machines but those who understand AI versus those who don’t


Major discussion point

AI Governance and Strategic Implementation


Topics

Development | Economic | Legal and regulatory


Need for guidelines on AI adoption, procurement, and development across 23,000 public administrations in Italy

Explanation

Mario Nobile emphasized the importance of creating comprehensive guidelines for AI implementation across Italy’s vast public administration network. This involves standardizing approaches to adopting, procuring, and developing AI solutions across all government levels.


Evidence

Italy has 23,000 public administrations that must adopt, procure, and some must develop AI solutions, including ministries like Environment and Energy, and the National Institute for Welfare


Major discussion point

AI Governance and Strategic Implementation


Topics

Legal and regulatory | Development


Agreed with

– Adham Abouzied
– Marco Zennaro

Agreed on

Need for comprehensive policy frameworks and governance structures


Transition from large energy-consuming models to vertical and agile foundation models for specific purposes like health, transportation, and manufacturing

Explanation

Mario Nobile advocated for moving away from brute force, large AI models toward smaller, more efficient models designed for specific industry applications. This approach aims to reduce energy consumption while maintaining effectiveness for targeted use cases.


Evidence

Working with Confindustria (enterprises organization in Italy) on applications for manufacturing, health, tourism, transportation; emphasizing universities work on techniques like incremental and federated learning to reduce model size and computational resource demands


Major discussion point

Energy-Efficient AI Technologies and Solutions


Topics

Development | Infrastructure | Economic


Agreed with

– Marco Zennaro
– Adham Abouzied
– Ioanna Ntinou
– Leona Verdadero

Agreed on

Need for smaller, more efficient AI models over large general-purpose models


Italy allocates 69 billion euros for ecological transition and 13 billion for business digitalization from National Recovery Plan

Explanation

Mario Nobile highlighted Italy’s significant financial commitment to both environmental sustainability and digital transformation through the National Recovery and Resilience Plan. This substantial funding demonstrates the government’s prioritization of green and digital transitions.


Evidence

69 billion euros for ecological transition and 13 billion euros for business digitalization, plus work on new framework for tax credit for enterprises for AI adoption in small and medium enterprises


Major discussion point

Funding and Policy Mechanisms


Topics

Economic | Development | Legal and regulatory


Open Innovation Framework enables new procurement approaches beyond traditional tenders for public administration

Explanation

Mario Nobile described Italy’s innovative approach to public procurement that moves beyond conventional tendering processes. This framework facilitates better matching of supply and demand for AI solutions in the public sector.


Evidence

The Agency for Digital Italy is working on Open Innovation Framework for new applications, not classic tender about buying something, but enabling public administration to carry planned procedures and facilitate supply and demand in a new way


Major discussion point

Funding and Policy Mechanisms


Topics

Legal and regulatory | Economic | Development


Need to address job displacement alongside energy consumption as interconnected challenges requiring comprehensive solutions

Explanation

Mario Nobile emphasized that sustainability, energy consumption, and job displacement from AI cannot be addressed in isolation. He argued for a holistic approach that considers the social and economic impacts alongside environmental concerns.


Evidence

Key words are related: sustainability, transparency, awareness, policies, and government policies; cannot think about energy consumption without job losses; need solutions for job losses, energy consumption, and awareness from people


Major discussion point

Future Research and Development Directions


Topics

Economic | Development | Sociocultural



Marco Zennaro

Speech speed

169 words per minute

Speech length

780 words

Speech time

276 seconds

TinyML enables machine learning on tiny devices with extremely low power consumption and cost under $10 per device

Explanation

Marco Zennaro explained that TinyML allows AI to run on very small devices with minimal memory and slow processors, but with significant advantages in power efficiency and affordability. These devices can operate without internet connection, making AI accessible in remote areas.


Evidence

Devices with few kilobytes of memory and slow processors, chips cost less than a dollar, full device about $10, extremely low power consumption, can run without internet connection


Major discussion point

Energy-Efficient AI Technologies and Solutions


Topics

Development | Infrastructure | Economic
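The memory arithmetic behind TinyML can be made concrete with a back-of-the-envelope check. The sketch below uses hypothetical layer sizes (not figures from the session) to show why an int8-quantized model fits on a microcontroller with a few tens of kilobytes of RAM while the same model in float32 would not.

```python
# Back-of-the-envelope check (hypothetical numbers): does an int8-quantized
# classifier fit in a typical TinyML microcontroller's memory?

def model_footprint_bytes(layer_params, bytes_per_weight=1):
    """Total weight storage for a list of per-layer parameter counts."""
    return sum(layer_params) * bytes_per_weight

# A small convolutional classifier: ~20k parameters across four layers.
layers = [1_600, 9_200, 7_400, 1_300]

int8_size = model_footprint_bytes(layers, bytes_per_weight=1)   # quantized
fp32_size = model_footprint_bytes(layers, bytes_per_weight=4)   # unquantized

print(f"int8: {int8_size / 1024:.1f} KB, fp32: {fp32_size / 1024:.1f} KB")
# The int8 version fits alongside working buffers on a board with 64 KB of
# RAM; the float32 version, four times larger, quickly would not.
```

The same arithmetic explains the sub-$10 price point: weights that fit in on-chip memory need no external RAM or internet connection.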


Integration of TinyML into national digital and innovation strategies is essential for comprehensive AI planning

Explanation

Marco Zennaro argued that policymakers should include TinyML as a component in their national strategies rather than overlooking this alternative approach to AI. This integration ensures that low-power, accessible AI solutions are considered in national planning.


Evidence

When designing a strategy, policymakers should not forget this alternative model: really tiny devices running AI should be included as a component


Major discussion point

AI Governance and Strategic Implementation


Topics

Legal and regulatory | Development | Infrastructure


Agreed with

– Mario Nobile
– Adham Abouzied

Agreed on

Need for comprehensive policy frameworks and governance structures


Investment in local capacity building using open curricula co-developed with global partners prevents reinventing the wheel

Explanation

Marco Zennaro emphasized the importance of building local expertise in embedded AI while leveraging existing open educational resources. This approach avoids duplication of effort and accelerates learning by using proven curricula developed collaboratively.


Evidence

Give priority to funding for curricula co-developed with global partners, not reinventing the wheel but using existing and open curricula such as the TinyML Academic Network which is completely open and can be reused


Major discussion point

Open Source and Collaborative Approaches


Topics

Development | Sociocultural | Infrastructure


Agreed with

– Adham Abouzied
– Mark Gachara

Agreed on

Necessity of open source and collaborative approaches for sustainable AI development


Regional collaboration and south-to-south knowledge sharing has proven extremely successful for TinyML applications

Explanation

Marco Zennaro highlighted the effectiveness of regional cooperation where countries with similar challenges work together on TinyML solutions. This approach recognizes that neighboring regions often face common issues that can be addressed through shared expertise.


Evidence

Activities tailored to specific regions, based on the idea that people from the same region face the same issues, have been extremely successful; supporting south-to-south collaboration focused on specific regions to use TinyML to solve common issues


Major discussion point

Open Source and Collaborative Approaches


Topics

Development | Sociocultural | Infrastructure


TinyML applications include disease detection in livestock, bee counting, anemia detection, and turtle behavior monitoring across 60+ universities in 32 countries

Explanation

Marco Zennaro provided concrete examples of TinyML applications that address real-world challenges across diverse sectors and geographic regions. These applications demonstrate the practical impact of low-power AI on sustainable development goals.


Evidence

Network created in 2020 with Harvard and Columbia Universities, now 60+ universities in 32 countries; applications include foot-and-mouth disease detection in cows in Zimbabwe, bee counting in Kenyan beehives, anemia detection through the eyes in Peruvian villages, and turtle behavior monitoring in Argentina


Major discussion point

Practical Applications and Use Cases


Topics

Development | Sustainable development | Sociocultural


Question whether super-wide models answering every question are needed versus specific models solving targeted issues

Explanation

Marco Zennaro challenged the assumption that large, general-purpose AI models are always necessary, suggesting that focused models designed for specific applications might be more appropriate. This approach aligns with sustainability goals and practical problem-solving needs.


Evidence

In TinyML, small devices can only have models which are very specific to an application like coffee leaves or turtles; questioning if we always need super wide models that can answer every question or focus on models that solve specific issues useful for SDGs or humanity


Major discussion point

Future Research and Development Directions


Topics

Development | Infrastructure | Sustainable development


Agreed with

– Mario Nobile
– Adham Abouzied
– Ioanna Ntinou
– Leona Verdadero

Agreed on

Need for smaller, more efficient AI models over large general-purpose models


Disagreed with

– Ioanna Ntinou
– Adham Abouzied

Disagreed on

Balance between large foundational models and small specialized models



Adham Abouzied

Speech speed

118 words per minute

Speech length

1219 words

Speech time

614 seconds

Open source solutions optimize energy costs by avoiding repetitive development, potentially saving trillions in licensing costs

Explanation

Adham Abouzied presented research showing that open source approaches significantly reduce both development costs and energy consumption by eliminating redundant work. The collaborative nature of open source creates exponential value compared to proprietary development.


Evidence

Harvard University study estimates recreating open source intellectual property would cost $4 billion, but if not open source and every player recreated or paid licenses, cost would increase to $8 trillion due to repetition


Major discussion point

Open Source and Collaborative Approaches


Topics

Economic | Development | Infrastructure


Agreed with

– Marco Zennaro
– Mark Gachara

Agreed on

Necessity of open source and collaborative approaches for sustainable AI development


System-level cooperation and data sharing across value chains is essential for meaningful vertical AI models

Explanation

Adham Abouzied argued that effective vertical AI models require collaboration and data sharing across entire industry value chains, not just individual companies. This systemic approach enables AI to create value across interconnected processes and decisions.


Evidence

AI breakthroughs came from training on wealth of internet data; for vertical impact, models need to see and train on data across value chains; energy sector example where cooperation and data exchange from generation to transmission to usage is needed but currently exists in few countries


Major discussion point

Open Source and Collaborative Approaches


Topics

Economic | Infrastructure | Legal and regulatory


Right policies and sharing protocols across industries are needed with proper regulatory frameworks and incentives

Explanation

Adham Abouzied emphasized the need for comprehensive governance structures that encourage data and intellectual property sharing while maintaining competitive advantages. This includes addressing regulatory barriers that prevent cross-border data flows and AI implementation.


Evidence

Need well-designed, well-enforced policies with both carrot and stick; difficulties with current regulation include cloud governance, data classification, and restrictions on cross-border data flows in emerging countries without local hyperscalers


Major discussion point

AI Governance and Strategic Implementation


Topics

Legal and regulatory | Economic | Development


Agreed with

– Mario Nobile
– Marco Zennaro

Agreed on

Need for comprehensive policy frameworks and governance structures


AI optimization in energy sector from generation to transmission to usage requires cooperation and data exchange across value chains

Explanation

Adham Abouzied provided a specific example of how AI can help optimize energy systems while reducing its own consumption, but this requires unprecedented cooperation across the entire energy value chain. This approach transforms AI from an energy burden to an energy optimization tool.


Evidence

Energy sector example where AI can help optimize decisions from generation to transmission to usage, but currently very few countries have cooperation and data exchange across different steps of the value chain; this would allow AI to consume less energy while helping the energy sector optimize and become greener


Major discussion point

Practical Applications and Use Cases


Topics

Infrastructure | Development | Sustainable development


Agreed with

– Mario Nobile
– Marco Zennaro
– Ioanna Ntinou
– Leona Verdadero

Agreed on

Need for smaller, more efficient AI models over large general-purpose models



Ioanna Ntinou

Speech speed

152 words per minute

Speech length

951 words

Speech time

373 seconds

Knowledge distillation can reduce model parameters by 60% while maintaining accuracy, as demonstrated in energy grid forecasting

Explanation

Ioanna Ntinou described a practical technique where a large, accurate model teaches a smaller model to achieve similar performance with significantly fewer parameters. This approach maintains effectiveness while dramatically reducing computational requirements and energy consumption.


Evidence

Raido project example with Greek energy companies PPC and CERF: used knowledge distillation to create a student model from a teacher model for energy demand forecasting in smart homes, reduced parameters by 60% while maintaining close accuracy


Major discussion point

Energy-Efficient AI Technologies and Solutions


Topics

Infrastructure | Development | Economic


Agreed with

– Mario Nobile
– Marco Zennaro
– Adham Abouzied
– Leona Verdadero

Agreed on

Need for smaller, more efficient AI models over large general-purpose models
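The knowledge-distillation technique described here can be sketched in a few lines. This is a generic illustration of the core training signal, not the Raido project's code: a "student" is rewarded for matching the temperature-softened output distribution of a "teacher". All logits and the temperature value are made-up examples.

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled softmax; a higher T softens the distribution."""
    e = np.exp((z - z.max(axis=-1, keepdims=True)) / T)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=4.0):
    """Cross-entropy between softened teacher and student outputs:
    the core term a student model is trained to minimize."""
    p_teacher = softmax(teacher_logits, T)
    p_student = softmax(student_logits, T)
    return -np.mean(np.sum(p_teacher * np.log(p_student + 1e-12), axis=-1))

teacher = np.array([[4.0, 1.0, 0.5]])       # confident teacher prediction
good_student = np.array([[3.8, 1.1, 0.4]])  # mimics the teacher
bad_student = np.array([[0.2, 3.9, 1.0]])   # disagrees with the teacher

# The student that imitates the teacher incurs the lower loss.
assert distillation_loss(good_student, teacher) < distillation_loss(bad_student, teacher)
```

Because the student only needs enough capacity to reproduce the teacher's behavior on the target task, parameter counts can drop sharply (60% in the Raido example) while accuracy stays close.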


Need for transparency in energy reporting and evaluation standards at prompt level to develop awareness of AI energy usage

Explanation

Ioanna Ntinou argued that without knowing the energy cost of individual AI interactions, it’s impossible to make informed decisions about AI usage or develop appropriate legislation. She emphasized the need for standardized measurement and reporting of energy consumption.


Evidence

Models like GPT-4 or DeepSeek with 70 billion parameters consume massive amounts of electricity during training and inference, but we do not know how much energy is used when submitting a simple prompt to GPT; evaluation standards at the prompt level are needed for awareness and transparency


Major discussion point

Transparency and Measurement in AI Energy Consumption


Topics

Legal and regulatory | Development | Infrastructure


Agreed with

– Mark Gachara
– Moderator

Agreed on

Critical importance of transparency and measurement in AI energy consumption
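The prompt-level reporting Ntinou calls for could be as simple as an energy-and-emissions estimator per response. The constants below are illustrative assumptions, not measured figures for any real model; the point is the shape of the calculation, not the numbers.

```python
# Hypothetical per-prompt energy estimator. Both constants are assumptions
# for illustration, not measured values for any deployed model.

JOULES_PER_TOKEN = 2.0      # assumed inference cost per generated token
GRID_G_CO2_PER_KWH = 400.0  # assumed grid carbon intensity

def prompt_energy_wh(tokens_generated, joules_per_token=JOULES_PER_TOKEN):
    """Energy in watt-hours for one response (1 Wh = 3600 J)."""
    return tokens_generated * joules_per_token / 3600.0

def prompt_co2_grams(tokens_generated):
    """Approximate emissions for one response at the assumed grid mix."""
    return prompt_energy_wh(tokens_generated) / 1000.0 * GRID_G_CO2_PER_KWH

wh = prompt_energy_wh(500)  # a 500-token answer
print(f"{wh:.3f} Wh, {prompt_co2_grams(500):.3f} g CO2")
```

A standardized version of this report, attached to every API response, is the kind of transparency that would give legislators the evidence base the panel says is currently missing.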


Assessment of energy usage in widely used public models is essential before focusing on legislation

Explanation

Ioanna Ntinou emphasized that effective policy-making requires concrete data about energy consumption patterns in commonly used AI systems. Without this foundational knowledge, regulatory efforts lack the evidence base needed for effective governance.


Evidence

Simply assess how much energy is used in widely used public models; without having knowledge of how much energy is actually used, cannot focus on legislation


Major discussion point

Transparency and Measurement in AI Energy Consumption


Topics

Legal and regulatory | Development | Infrastructure


Sustainable AI will not emerge by default and needs active support and incentivization beyond just accuracy metrics

Explanation

Ioanna Ntinou pointed out that current AI development practices prioritize accuracy over energy efficiency, leading to unnecessarily resource-intensive models. She argued for deliberate intervention to change these incentive structures and promote more sustainable development practices.


Evidence

When training a model, developers opt for bigger models and more data because success is measured by accuracy; sometimes small improvement comes with huge increase in energy, need to be careful with this trade-off


Major discussion point

Transparency and Measurement in AI Energy Consumption


Topics

Development | Legal and regulatory | Economic


Smaller models deployed in smart homes for energy demand forecasting help companies make more accurate production decisions

Explanation

Ioanna Ntinou described how efficient AI models can provide practical benefits beyond just energy savings, including improved business decision-making and easier deployment to end-users. This demonstrates the multiple advantages of the smaller model approach.


Evidence

Optimized model for day-ahead forecasting of energy demand and supply in smart homes with microgrids; smaller model easier to deploy in small devices, democratizes access to AI, helps companies have more accurate forecast for energy production planning


Major discussion point

Practical Applications and Use Cases


Topics

Infrastructure | Economic | Development


Focus on smaller, task-specific models while not neglecting progress made with large language models

Explanation

Ioanna Ntinou acknowledged the tension between developing efficient small models and continuing to advance the field through large model research. She argued that both approaches have value and that small models offer scientific learning opportunities beyond just energy savings.


Evidence

Mobile phones use small task-specific models due to battery and processor constraints; there is value in terms of science and knowledge from pushing boundaries of research with small models, not just energy benefits


Major discussion point

Future Research and Development Directions


Topics

Development | Infrastructure | Economic


Disagreed with

– Marco Zennaro
– Adham Abouzied

Disagreed on

Balance between large foundational models and small specialized models



Mark Gachara

Speech speed

145 words per minute

Speech length

686 words

Speech time

282 seconds

Developers can optimize code energy consumption through tools like Code Carbon to measure environmental impact

Explanation

Mark Gachara highlighted Mozilla’s support for practical tools that help developers understand and reduce the environmental impact of their coding practices. This approach addresses sustainability at the fundamental level of software development.


Evidence

Mozilla grantee Code Carbon, from France, created an open-source project built around the question ‘when I write code, how much energy am I using?’; because much of that energy still comes from fossil fuels, the tool lets developers optimize their code before it harms the environment


Major discussion point

Transparency and Measurement in AI Energy Consumption


Topics

Development | Infrastructure | Sustainable development


Agreed with

– Marco Zennaro
– Adham Abouzied

Agreed on

Necessity of open source and collaborative approaches for sustainable AI development


Civil society can use AI for ecological mapping and evidence generation to advocate against environmentally harmful projects

Explanation

Mark Gachara provided an example of how civil society organizations can leverage AI tools to generate scientific evidence for environmental advocacy. This demonstrates AI’s potential as a tool for environmental protection rather than just consumption.


Evidence

Center for Justice Governance and Environmental Action in Kilifi, Kenya used AI for ecological mapping of Indian Ocean area to oppose proposed nuclear reactor, generating report arguing for renewable energy focus since Kenya is net producer of renewable energy


Major discussion point

Practical Applications and Use Cases


Topics

Human rights | Development | Sustainable development


Support for grassroots organizers and indigenous communities to build localized climate solutions through targeted funding

Explanation

Mark Gachara emphasized that climate impacts are most severe in the Global South and among indigenous communities, so funding should support these groups in developing locally appropriate AI solutions. This approach recognizes that effective climate solutions must be community-driven and context-specific.


Evidence

The theater of greatest climate impact is the Global South, among farmers and indigenous communities; funds are needed to support grassroots organizers and indigenous communities in building solutions together; because problems are localized, local solutions must be built with these communities


Major discussion point

Funding and Policy Mechanisms


Topics

Development | Human rights | Sustainable development



Leona Verdadero

Speech speed

161 words per minute

Speech length

450 words

Speech time

167 seconds

UNESCO’s upcoming report “Smarter, Smaller, Stronger” demonstrates that optimized models can be smaller, more efficient, and better performing

Explanation

Leona Verdadero described UNESCO’s research partnership with University College London that challenges the assumption that bigger AI models are better. Their experiments show that optimization techniques can create models that are simultaneously smaller, more efficient, and better performing.


Evidence

Partnership with University College London running experiments on optimizing the AI inference space; energy-efficient techniques like quantization and distillation make models smaller, more efficient, and better performing, and make AI more accessible in low-resource settings with limited compute and infrastructure


Major discussion point

Future Research and Development Directions


Topics

Development | Infrastructure | Economic


Agreed with

– Mario Nobile
– Marco Zennaro
– Adham Abouzied
– Ioanna Ntinou

Agreed on

Need for smaller, more efficient AI models over large general-purpose models
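Quantization, the other optimization technique the UNESCO report highlights alongside distillation, can be sketched as a simple linear mapping of float32 weights to int8. This is a generic illustration with synthetic weights, not code from the report's experiments.

```python
import numpy as np

def quantize_int8(w):
    """Symmetric linear quantization of a float weight tensor to int8."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Map the int8 codes back to approximate float weights."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.1, size=(256, 128)).astype(np.float32)

q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)

# Storage shrinks 4x while reconstruction error stays within half a
# quantization step of the original weights.
print(f"{w.nbytes} B -> {q.nbytes} B, max err {np.abs(w - w_hat).max():.5f}")
```

The 4x storage reduction (and the cheaper integer arithmetic it enables at inference time) is what makes quantized models "smaller, more efficient, and better performing" in low-resource settings.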



Moderator

Speech speed

135 words per minute

Speech length

2419 words

Speech time

1069 seconds

AI governance, climate change, energy resources, and water scarcity are among the most important policy issues requiring integrated solutions

Explanation

The moderator highlighted that multiple global reports identify these interconnected challenges as critical issues of our time. The challenge lies in combining all these elements together to address the planetary crisis we are currently experiencing.


Evidence

Reports consistently identify governance of artificial intelligence, climate change, energy resources and water as the most important and threatening issues for humanity that appear across different policy reports


Major discussion point

AI Governance and Strategic Implementation


Topics

Legal and regulatory | Sustainable development | Development


AI presents both opportunities to address environmental risks and contributes to the problem through high energy and water consumption

Explanation

The moderator emphasized the paradoxical nature of AI technology – it can help solve climate and environmental challenges through modeling and analysis, but simultaneously contributes to these problems through its resource consumption. This creates a complex challenge of fostering opportunities while mitigating risks.


Evidence

AI models can help climate scientists, policymakers, and civil society solve complex environmental equations, but the same technology consumes significant energy and water resources


Major discussion point

Energy-Efficient AI Technologies and Solutions


Topics

Sustainable development | Infrastructure | Development


Public sector procurement policies can drive sustainable AI adoption since governments are the biggest single buyers in most countries

Explanation

The moderator noted that in 99% of UN member states, the public sector remains the largest single purchaser, making government procurement policies a powerful tool for promoting sustainable AI practices. Decent procurement policies alone can create significant impact in driving market demand for energy-efficient AI solutions.


Evidence

In 99% of UN member states, the public sector is still the biggest single buyer, making procurement policies a key lever for change


Major discussion point

Funding and Policy Mechanisms


Topics

Legal and regulatory | Economic | Development


Transparency in technology adoption generates evidence needed for effective policymaking, similar to energy efficiency standards in other sectors

Explanation

The moderator drew parallels between AI transparency and other technological revolutions, noting how transparency requirements in areas like building energy efficiency have enabled conscious decision-making. This transparency creates the evidence base necessary for informed policy development and public awareness.


Evidence

Example of energy efficiency passes for houses in Germany and EU that are obligatory, making energy consumption transparent so people can make conscious decisions; similar to carbon trading systems that force industry efficiency


Major discussion point

Transparency and Measurement in AI Energy Consumption


Topics

Legal and regulatory | Development | Infrastructure


Agreed with

– Ioanna Ntinou
– Mark Gachara

Agreed on

Critical importance of transparency and measurement in AI energy consumption



Audience

Speech speed

177 words per minute

Speech length

349 words

Speech time

118 seconds

Carbon trading mechanisms could potentially be applied to AI energy consumption to force industry toward more efficient models

Explanation

An audience member suggested that transparency in AI energy consumption could eventually lead to trading systems similar to carbon markets. This would create economic incentives for developing smaller, more efficient models by making energy consumption a tradeable commodity with associated costs.


Evidence

Reference to existing carbon trading systems that force industry to become more efficient through certificates on carbon emissions; questioning whether transparency on AI energy consumption could lead to trading rights to consume energy with AI technologies


Major discussion point

Funding and Policy Mechanisms


Topics

Economic | Legal and regulatory | Sustainable development


Science journalists should ask comprehensive questions about AI’s interconnected impacts on sustainability, job displacement, and societal awareness

Explanation

The audience member prompted discussion about what questions journalists should be asking about AI and sustainability issues. This highlights the need for media coverage that addresses the complex, interconnected nature of AI’s impacts rather than treating these issues in isolation.


Evidence

Question about what science journalists should be asking about AI sustainability issues, leading to responses about interconnected challenges of energy consumption, job losses, and public awareness


Major discussion point

Future Research and Development Directions


Topics

Sociocultural | Development | Economic


Agreements

Agreement points

Need for smaller, more efficient AI models over large general-purpose models

Speakers

– Mario Nobile
– Marco Zennaro
– Adham Abouzied
– Ioanna Ntinou
– Leona Verdadero

Arguments

Transition from large energy-consuming models to vertical and agile foundation models for specific purposes like health, transportation, and manufacturing


Question whether super-wide models answering every question are needed versus specific models solving targeted issues


AI optimization in energy sector from generation to transmission to usage requires cooperation and data exchange across value chains


Knowledge distillation can reduce model parameters by 60% while maintaining accuracy, as demonstrated in energy grid forecasting


UNESCO’s upcoming report “Smarter, Smaller, Stronger” demonstrates that optimized models can be smaller, more efficient, and better performing


Summary

All speakers agreed that the future of sustainable AI lies in developing smaller, task-specific models rather than continuing to scale up large general-purpose models. They emphasized that these smaller models can maintain effectiveness while dramatically reducing energy consumption.


Topics

Development | Infrastructure | Economic


Critical importance of transparency and measurement in AI energy consumption

Speakers

– Ioanna Ntinou
– Mark Gachara
– Moderator

Arguments

Need for transparency in energy reporting and evaluation standards at prompt level to develop awareness of AI energy usage


Developers can optimize code energy consumption through tools like Code Carbon to measure environmental impact


Transparency in technology adoption generates evidence needed for effective policymaking, similar to energy efficiency standards in other sectors


Summary

Speakers unanimously agreed that without proper measurement and transparency of AI energy consumption, it’s impossible to make informed decisions about AI usage or develop appropriate legislation. They emphasized the need for standardized measurement tools and reporting mechanisms.


Topics

Legal and regulatory | Development | Infrastructure


Necessity of open source and collaborative approaches for sustainable AI development

Speakers

– Marco Zennaro
– Adham Abouzied
– Mark Gachara

Arguments

Investment in local capacity building using open curricula co-developed with global partners prevents reinventing the wheel


Open source solutions optimize energy costs by avoiding repetitive development, potentially saving trillions in licensing costs


Developers can optimize code energy consumption through tools like Code Carbon to measure environmental impact


Summary

Speakers agreed that open source approaches are essential for sustainable AI development, as they prevent duplication of effort, reduce costs, and enable collaborative problem-solving while minimizing energy consumption through shared resources and knowledge.


Topics

Economic | Development | Infrastructure


Need for comprehensive policy frameworks and governance structures

Speakers

– Mario Nobile
– Adham Abouzied
– Marco Zennaro

Arguments

Need for guidelines on AI adoption, procurement, and development across 23,000 public administrations in Italy


Right policies and sharing protocols across industries are needed with proper regulatory frameworks and incentives


Integration of TinyML into national digital and innovation strategies is essential for comprehensive AI planning


Summary

All speakers emphasized the critical need for comprehensive governance frameworks that include proper policies, guidelines, and regulatory structures to support sustainable AI development and deployment across different sectors and scales.


Topics

Legal and regulatory | Economic | Development


Similar viewpoints

Both speakers emphasized the importance of supporting local and regional communities, particularly in the Global South, to develop context-appropriate AI solutions that address their specific challenges and needs.

Speakers

– Marco Zennaro
– Mark Gachara

Arguments

Regional collaboration and south-to-south knowledge sharing has proven extremely successful for TinyML applications


Support for grassroots organizers and indigenous communities to build localized climate solutions through targeted funding


Topics

Development | Human rights | Sustainable development


Both speakers advocated for innovative approaches to cooperation and procurement that move beyond traditional methods to enable better collaboration and data sharing across organizations and sectors.

Speakers

– Mario Nobile
– Adham Abouzied

Arguments

Open Innovation Framework enables new procurement approaches beyond traditional tenders for public administration


System-level cooperation and data sharing across value chains is essential for meaningful vertical AI models


Topics

Legal and regulatory | Economic | Development


Both speakers emphasized that sustainable AI development requires deliberate intervention and active promotion, challenging the current paradigm that prioritizes model size and accuracy over efficiency and sustainability.

Speakers

– Ioanna Ntinou
– Leona Verdadero

Arguments

Sustainable AI will not emerge by default and needs active support and incentivization beyond just accuracy metrics


UNESCO’s upcoming report “Smarter, Smaller, Stronger” demonstrates that optimized models can be smaller, more efficient, and better performing


Topics

Development | Legal and regulatory | Economic


Unexpected consensus

Integration of social and environmental concerns in AI development

Speakers

– Mario Nobile
– Mark Gachara
– Ioanna Ntinou

Arguments

Need to address job displacement alongside energy consumption as interconnected challenges requiring comprehensive solutions


Civil society can use AI for ecological mapping and evidence generation to advocate against environmentally harmful projects


Focus on smaller, task-specific models while not neglecting progress made with large language models


Explanation

Unexpectedly, speakers from different backgrounds (government regulator, civil society advocate, and technical researcher) all emphasized the need to consider AI’s social, environmental, and technical impacts as interconnected rather than separate issues. This holistic view was surprising given their different professional perspectives.


Topics

Economic | Development | Sociocultural


Practical implementation focus over theoretical discussions

Speakers

– Mario Nobile
– Marco Zennaro
– Ioanna Ntinou
– Mark Gachara

Arguments

Italy allocates 69 billion euros for ecological transition and 13 billion for business digitalization from National Recovery Plan


TinyML applications include disease detection in livestock, bee counting, anemia detection, and turtle behavior monitoring across 60+ universities in 32 countries


Smaller models deployed in smart homes for energy demand forecasting help companies make more accurate production decisions


Civil society can use AI for ecological mapping and evidence generation to advocate against environmentally harmful projects


Explanation

All speakers, regardless of their institutional background, focused heavily on concrete, practical applications and real-world implementations rather than theoretical discussions. This consensus on pragmatic approaches was unexpected in an academic/policy setting where abstract discussions often dominate.


Topics

Development | Infrastructure | Economic


Overall assessment

Summary

The speakers demonstrated remarkable consensus across multiple key areas: the need for smaller, more efficient AI models; the critical importance of transparency and measurement; the value of open source and collaborative approaches; and the necessity of comprehensive policy frameworks. They also shared unexpected agreement on integrating social and environmental concerns and focusing on practical implementation.


Consensus level

High level of consensus with strong implications for sustainable AI development. The agreement across speakers from different sectors (government, academia, civil society, private sector) suggests these principles have broad support and could form the foundation for coordinated action on sustainable AI. The consensus indicates a mature understanding of the challenges and a convergence toward practical, implementable solutions rather than competing approaches.


Differences

Different viewpoints

Balance between large foundational models and small specialized models

Speakers

– Ioanna Ntinou
– Marco Zennaro
– Adham Abouzied

Arguments

Focus on smaller, task-specific models while not neglecting progress made with large language models


Question whether super-wide models answering every question are needed versus specific models solving targeted issues


AI breakthroughs came from training on wealth of internet data; for vertical impact, models need to see and train on data across value chains


Summary

Ioanna expressed concern about neglecting progress from large language models while focusing on smaller ones; Marco questioned the necessity of super-wide models; and Adham emphasized the value of large foundational models for breakthrough innovations while supporting vertical specialization


Topics

Development | Infrastructure | Economic


Unexpected differences

Scope of open source collaboration versus national strategic control

Speakers

– Mario Nobile
– Adham Abouzied
– Marco Zennaro

Arguments

Need for guidelines on AI adoption, procurement, and development across 23,000 public administrations in Italy


Open source solutions optimize energy costs by avoiding repetitive development, potentially saving trillions in licensing costs


Investment in local capacity building using open curricula co-developed with global partners prevents reinventing the wheel


Explanation

While all speakers supported collaboration, there was an unexpected tension between Mario’s emphasis on national strategic control and standardized guidelines and Adham and Marco’s push for more open, collaborative approaches that transcend national boundaries


Topics

Legal and regulatory | Economic | Development


Overall assessment

Summary

The discussion showed remarkable consensus on core goals (energy efficiency, sustainability, accessibility) with disagreements primarily focused on implementation approaches and the balance between different technical strategies


Disagreement level

Low to moderate disagreement level. Most conflicts were constructive and focused on technical approaches rather than fundamental goals. The main tension was between preserving innovation from large models while promoting efficiency through smaller ones. This suggests a healthy debate that could lead to complementary rather than competing solutions, with implications for developing comprehensive AI sustainability policies that accommodate multiple technical approaches.


Partial agreements

Similar viewpoints

Both speakers emphasized the importance of supporting local and regional communities, particularly in the Global South, to develop context-appropriate AI solutions that address their specific challenges and needs.

Speakers

– Marco Zennaro
– Mark Gachara

Arguments

Regional collaboration and south-to-south knowledge sharing has proven extremely successful for TinyML applications


Support for grassroots organizers and indigenous communities to build localized climate solutions through targeted funding


Topics

Development | Human rights | Sustainable development


Both speakers advocated for innovative approaches to cooperation and procurement that move beyond traditional methods to enable better collaboration and data sharing across organizations and sectors.

Speakers

– Mario Nobile
– Adham Abouzied

Arguments

Open Innovation Framework enables new procurement approaches beyond traditional tenders for public administration


System-level cooperation and data sharing across value chains is essential for meaningful vertical AI models


Topics

Legal and regulatory | Economic | Development


Both speakers emphasized that sustainable AI development requires deliberate intervention and active promotion, challenging the current paradigm that prioritizes model size and accuracy over efficiency and sustainability.

Speakers

– Ioanna Ntinou
– Leona Verdadero

Arguments

Sustainable AI will not emerge by default and needs active support and incentivization beyond just accuracy metrics


UNESCO’s upcoming report “Smarter, Smaller, Stronger” demonstrates that optimized models can be smaller, more efficient, and better performing


Topics

Development | Legal and regulatory | Economic


Takeaways

Key takeaways

AI governance requires a multi-pillar approach combining education, research, public administration, and enterprise engagement to ensure inclusive growth


Energy-efficient AI solutions like TinyML and model optimization can significantly reduce power consumption while maintaining performance, as demonstrated by a 60% parameter reduction with maintained accuracy


Open source approaches can save trillions in development costs by avoiding repetitive creation of AI solutions across organizations


Transparency in AI energy consumption reporting is essential before effective legislation can be developed; there is currently little visibility into the energy used per prompt or model interaction


Smaller, task-specific AI models often provide better value than large general-purpose models for solving specific problems like healthcare, agriculture, and environmental monitoring


System-level cooperation and data sharing across value chains is necessary for vertical AI models to create meaningful impact in sectors like energy optimization


Civil society can leverage AI tools for evidence generation and advocacy, particularly for environmental justice issues in developing regions


Public procurement policies represent a powerful lever for promoting sustainable AI adoption given governments’ role as largest buyers


Resolutions and action items

UNESCO and University College London to launch ‘Smarter, Smaller, Stronger’ report on resource-efficient generative AI


Italy to implement guidelines for AI adoption, procurement, and development across 23,000 public administrations


Italy developing tax credit framework for small and medium enterprises to incentivize AI adoption


Promotion of the Open Innovation Framework to facilitate new procurement approaches beyond traditional tenders


Integration of TinyML into national digital and innovation strategies


Investment in local capacity building using open curricula co-developed with global partners


Funding for context-aware pilot projects in key development sectors


Development of evaluation standards for energy consumption at prompt level


Unresolved issues

How to balance the trade-off between AI model accuracy and energy consumption when accuracy remains the primary success metric


Regulatory challenges around data governance, cloud usage, and cross-border data transfer that limit AI implementation


Lack of transparency in energy consumption reporting for widely used public AI models like GPT-4


How to address job displacement concerns alongside energy consumption as interconnected challenges


Whether focusing on smaller models might neglect important progress made with large language models


How to ensure adequate funding reaches grassroots organizations and indigenous communities for localized climate solutions


Whether carbon trading mechanisms for AI energy consumption, similar to those in other industries, should be implemented


Suggested compromises

Transition gradually from large energy-consuming models to smaller vertical models while maintaining research progress on both fronts


Use large models as ‘teachers’ through knowledge distillation to create smaller ‘student’ models that maintain performance
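The teacher–student idea above can be made concrete. A minimal, illustrative sketch of the knowledge-distillation objective is shown below: the small "student" is trained to match the large "teacher's" temperature-softened output distribution rather than hard labels. The temperature value and example logits are hypothetical; real pipelines (such as the 60%-smaller models mentioned in the Raido work) combine this loss with a standard task loss.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Temperature-softened softmax; higher temperature -> softer targets."""
    z = np.asarray(logits, dtype=float) / temperature
    z -= z.max()  # numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence between softened teacher and student distributions,
    scaled by T^2 as in standard distillation formulations."""
    p = softmax(teacher_logits, temperature)  # soft targets from the teacher
    q = softmax(student_logits, temperature)  # small student's predictions
    return float(temperature ** 2 * np.sum(p * (np.log(p) - np.log(q))))

# A student that matches the teacher exactly incurs zero loss;
# a mismatched student is penalized (hypothetical logits for illustration):
teacher = [2.0, 0.5, -1.0]
print(distillation_loss(teacher, teacher))                 # ~0.0
print(distillation_loss(teacher, [0.0, 0.0, 0.0]) > 0.0)   # True
```

Minimizing this loss lets the student inherit the teacher's "dark knowledge" (the relative probabilities of wrong answers), which is why much smaller models can retain most of the larger model's performance.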


Implement hybrid approaches combining cloud and edge computing to optimize energy usage


Focus AI energy consumption on delivering real value within systems that support SDGs rather than general content generation


Develop task-specific models for mobile and resource-constrained environments while continuing research on larger models


Balance accuracy requirements with energy costs by questioning whether maximum accuracy is always necessary for specific use cases


Thought provoking comments

Now the debate is not humans versus machines. Now the debate is about those who understand and use AI versus those who don’t.

Speaker

Mario Nobile


Reason

This reframes the entire AI discourse from a fear-based narrative about AI replacing humans to a more nuanced understanding about digital literacy and AI competency. It shifts focus from existential concerns to practical skill development and education.


Impact

This comment established education as a central theme throughout the discussion. It influenced subsequent speakers to emphasize capacity building, local training, and the democratization of AI knowledge, moving the conversation from technical solutions to human empowerment.


The study basically estimates that if we would recreate only one time the wealth of open source intellectual property that is available today on the Internet, it would cost us $4 billion. But it does not stop there… you would increase this $4 billion of cost to $8 trillion just because of the repetition.

Speaker

Adham Abouzied


Reason

This provides concrete economic evidence for the value of open-source collaboration, transforming an abstract concept into tangible financial terms. It demonstrates how sharing and collaboration can create exponential value while reducing resource consumption.


Impact

This comment shifted the discussion from individual technical solutions to systemic collaboration. It influenced other panelists to emphasize knowledge sharing, regional cooperation, and the importance of breaking down silos between organizations and sectors.


Sustainable AI is not going to emerge by default. We need to incentivize and it needs to be actively supported… if we are measuring everything by accuracy, and sometimes we neglect the cost that comes with accuracy, we might consume way more energy than what is actually needed.

Speaker

Ioanna Ntinou


Reason

This challenges the fundamental assumption in AI development that bigger and more accurate is always better. It exposes the hidden costs of the current success metrics and calls for a paradigm shift in how we evaluate AI systems.


Impact

This comment introduced critical thinking about measurement frameworks and success criteria. It led to discussions about transparency in energy reporting and the need for new evaluation standards that balance performance with sustainability.


Do we always need these super wide models that can answer every question we have? Or is it better to focus on models that solve specific issues which are useful for, you know, SDGs or for humanity in general?

Speaker

Marco Zennaro


Reason

This question fundamentally challenges the current trajectory of AI development toward ever-larger general-purpose models. It connects technical decisions to broader humanitarian goals and sustainable development.


Impact

This question became a recurring theme that influenced multiple speakers to advocate for task-specific, smaller models. It helped establish the ‘small is beautiful’ philosophy that ran through the latter part of the discussion and connected technical choices to social impact.


The theater of where the most impact of climate is is in the global south and it would be a farmer, it would be indigenous and local communities… we need to foster funding strategies that could actually look into research and creating science that actually solves this climate solutions with these local communities.

Speaker

Mark Gachara


Reason

This comment brings crucial equity and justice perspectives to the technical discussion, highlighting that those most affected by climate change are often least represented in AI development. It challenges the panel to consider who benefits from and who bears the costs of AI solutions.


Impact

This intervention grounded the entire discussion in real-world impact and social justice. It influenced the conversation to consider not just technical efficiency but also accessibility, local relevance, and community empowerment in AI solutions.


Overall assessment

These key comments fundamentally shaped the discussion by challenging conventional assumptions about AI development and success metrics. They moved the conversation from a purely technical focus to a more holistic view that encompasses education, collaboration, sustainability, and social justice. The comments created a progression from individual technical solutions to systemic thinking about governance, measurement, and community impact. Most significantly, they established a counter-narrative to the ‘bigger is better’ approach in AI, advocating instead for targeted, efficient, and socially conscious AI development that prioritizes real-world problem-solving over technical prowess.


Follow-up questions

How can we transition from brute force models to vertical and agile foundation models with specific purposes across different sectors like health, transportation, tourism, and manufacturing?

Speaker

Mario Nobile


Explanation

This is crucial for reducing energy consumption while maintaining AI effectiveness in specific domains, representing a key strategic shift in AI development approach.


How can we effectively break data silos and improve data quality to enable better AI applications in public administration?

Speaker

Mario Nobile


Explanation

Data quality and interoperability are fundamental challenges that need to be addressed for successful AI implementation in government services.


What are the specific energy consumption metrics at the prompt level for widely used public models like GPT-4?

Speaker

Ioanna Ntinou


Explanation

Without transparency in energy reporting at the individual query level, it’s impossible to make informed decisions about sustainable AI usage and develop appropriate legislation.
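To illustrate why prompt-level reporting matters, here is a back-of-the-envelope sketch of the kind of estimate such a standard would enable. The per-token figure and PUE value below are hypothetical placeholders, not published numbers; real values depend on model size, hardware, batching, and data-center efficiency, which is exactly the information vendors do not currently disclose.

```python
# HYPOTHETICAL per-token energy figure for illustration only:
# real values are undisclosed and vary by model and infrastructure.
ASSUMED_WH_PER_TOKEN = 0.003  # 3 mWh per generated token (assumption)

def prompt_energy_wh(output_tokens: int, pue: float = 1.2) -> float:
    """Estimated energy (Wh) for one model response, scaled by the
    data center's power usage effectiveness (PUE, assumed 1.2)."""
    return output_tokens * ASSUMED_WH_PER_TOKEN * pue

# Under these assumptions, a 500-token answer costs about:
print(round(prompt_energy_wh(500), 2))  # 1.8 (Wh)
```

Even a rough shared formula like this would let users and regulators compare models; without disclosed per-token or per-prompt figures, every term in it remains a guess.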


Can we develop a carbon trading system specifically for AI energy consumption to incentivize more efficient models?

Speaker

Jan Lublinski (Audience)


Explanation

This would create market-based incentives for developing energy-efficient AI systems, similar to existing carbon trading mechanisms in other industries.


Do we always need super wide models that can answer every question, or is it better to focus on models that solve specific issues useful for SDGs?

Speaker

Marco Zennaro


Explanation

This fundamental question challenges the current trend toward large general-purpose models and could reshape AI development priorities toward more sustainable, task-specific solutions.


How can we establish proper data governance and classification standards for cross-border AI applications, especially in emerging countries?

Speaker

Adham Abouzied


Explanation

Current regulations often prevent effective AI implementation due to unclear data governance rules, particularly affecting countries without local hyperscaler infrastructure.


What evaluation standards should be developed to assess energy consumption at the prompt level for AI models?

Speaker

Ioanna Ntinou


Explanation

Standardized evaluation methods are needed to create transparency and enable comparison of energy efficiency across different AI systems.


How can funding strategies be developed to support grassroots organizers and indigenous communities in building localized AI climate solutions?

Speaker

Mark Gachara


Explanation

Since climate impacts are most severe in the Global South and among indigenous communities, research funding should prioritize supporting these communities in developing their own solutions.


What are the optimal policies and sharing protocols needed across industries to accelerate adoption of open source AI solutions?

Speaker

Adham Abouzied


Explanation

System-level cooperation requires well-designed governance frameworks to incentivize data and intellectual property sharing while maintaining competitive advantages.


How can we balance the trade-off between model accuracy and energy consumption in AI development?

Speaker

Ioanna Ntinou


Explanation

Current success metrics focus primarily on accuracy, often neglecting the disproportionate energy costs of marginal improvements, requiring new evaluation frameworks.


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.