Day 0 Event #251 Large Models and Small Players: Leveraging AI in Small States and Startups
23 Jun 2025 14:00h - 15:30h
Session at a glance
Summary
This discussion, presented by IGF 2025 host country Norway, focused on how small states and startups can leverage AI to compete with larger players in the global AI landscape. The session explored whether smaller actors are sidelined by AI’s resource demands or positioned for unique opportunities through agility, trust, and strategic collaborations.
Norwegian Minister Karianne Tung outlined Norway’s ambitious goal to become the world’s most digitalized country by 2030, highlighting investments in national AI infrastructure, including the new Olivia supercomputer and free Norwegian language models. She emphasized that small states can lead not despite their size, but because of their flexibility and value-based approaches. John M Lervik from Cognite demonstrated how startups can compete by focusing on specific domains where they possess more relevant data than tech giants, citing Cognite’s leadership in industrial data management.
Professor Ole-Christopher Granmo presented the Tsetlin Machine as an energy-efficient alternative to deep learning, using up to 10,000 times less electricity while maintaining accuracy and explainability. Dr. Chinasa Okolo emphasized opportunities for smaller nations to lead in ethical AI development through contextual innovation, data sovereignty, and peer-to-peer collaboration, particularly in global majority regions.
Industry representatives Jeff Bullwinkel from Microsoft and Kojo Boake from Meta discussed how large platforms support smaller players through open infrastructure and models. Esther Kunda from Rwanda shared insights from the AI Playbook for Small States, emphasizing capability building and trusted environments for innovation.
The panelists agreed that success requires focusing on creating value first, then implementing appropriate governance frameworks, while avoiding over-regulation that could stifle innovation. The discussion concluded that small players can become strategic shapers of AI’s future through smart partnerships, domain expertise, and leveraging unique national advantages like renewable energy and specialized knowledge.
Key points
## Major Discussion Points:
– **Small States and Startups Leveraging AI Innovation**: The discussion explored how smaller nations and companies can compete with tech giants by focusing on domain expertise, agility, and unique advantages rather than trying to match the scale of hyperscalers. Examples included Norway’s focus on industrial data, Rwanda’s AI policy leadership, and Estonia’s digital government initiatives.
– **Energy-Efficient and Alternative AI Technologies**: Significant attention was given to developing sustainable AI solutions, including Professor Granmo’s Tsetlin Machine as an energy-efficient alternative to deep learning, and Norway’s advantage in green energy for AI infrastructure. The discussion highlighted the environmental costs of current AI models and the need for more efficient approaches.
– **Open Source vs. Closed AI Models**: The debate centered on democratizing AI access through open-source models (like Meta’s Llama) versus proprietary systems, with speakers discussing how open-source approaches can level the playing field for smaller players and enable local customization and fine-tuning.
– **AI Governance and Regulation Frameworks**: Extensive discussion on balancing innovation with responsible AI development, including the EU AI Act implementation, regulatory sandboxes, and the need for context-appropriate governance frameworks that don’t stifle innovation while ensuring ethical AI deployment.
– **Data Sovereignty and Local Context**: The importance of countries maintaining control over their data and developing AI solutions that reflect local values, languages, and societal needs, rather than relying solely on models trained on Western data and perspectives.
## Overall Purpose:
The discussion aimed to explore how smaller nations, startups, and underrepresented regions can effectively participate in and shape the global AI landscape despite resource constraints. The session sought to identify strategies for leveraging unique advantages, fostering innovation ecosystems, and creating inclusive AI development that serves diverse global needs rather than being dominated by a few major tech companies.
## Overall Tone:
The discussion maintained an optimistic and collaborative tone throughout, with speakers emphasizing opportunities rather than limitations. There was a strong sense of partnership and shared purpose among panelists from different sectors (government, academia, industry). The tone was pragmatic yet aspirational, acknowledging challenges while focusing on actionable solutions. Speakers consistently reinforced themes of cooperation, innovation, and the potential for smaller players to make significant contributions to the AI ecosystem. The atmosphere remained constructive and forward-looking, with minimal tension despite representing different perspectives on AI development and governance.
Speakers
**Speakers from the provided list:**
– **Natalie Becker Aakervik** – Moderator for the session
– **Karianne Tung** – Norway’s Minister of Digitalization and Public Governance
– **John M Lervik** – Entrepreneur and strategist from Cognite, described as one of the leading voices in Norway’s startup ecosystem
– **Ole Christopher Granmo** – Professor at University of Agder and Director of CAIR (Centre for Artificial Intelligence Research), expert in Tsetlin Machine approach to AI
– **Chinasa T. Okolo** – Fellow at the Center for Technology Innovation at the Brookings Institution, recognized as one of the world’s most influential people in AI by Time, expert at the intersection of AI, equity and global governance
– **Esther Kunda** – Director General of Innovation and Emerging Technologies (from Rwanda)
– **Jeff Bullwinkel** – Deputy General Counsel for Microsoft EMEA
– **Kojo Boake** – Vice President of Public Policy for Africa, the Middle East, and Turkey at Meta
– **Daniel Dykes** – (Role/expertise not clearly specified in transcript)
– **Noel Hurley** – (Role/expertise not clearly specified in transcript)
– **Rishad A. Shafik** – (Role/expertise not clearly specified in transcript)
**Additional speakers:**
None identified beyond the provided speaker list.
Full session report
# Small States and Startups in the Global AI Landscape: IGF 2025 Discussion Report
## Introduction and Context
This IGF 2025 session, hosted by Norway, examined how small states and startups can leverage artificial intelligence to compete in the global AI landscape. Moderated by Natalie Becker Aakervik, the hybrid session brought together government ministers, industry leaders, academics, and policy experts to explore whether smaller actors face insurmountable challenges from AI’s resource demands or can find unique opportunities through strategic positioning.
The panel featured Norway’s Minister of Digitalization and Public Governance Karianne Tung, entrepreneur John M Lervik from Cognite, Professor Ole Christopher Granmo from the University of Agder, AI governance expert Chinasa T. Okolo from the Brookings Institution, Rwanda’s Director General of Innovation Esther Kunda, Microsoft’s Jeff Bullwinkel, and Meta’s Kojo Boake.
## National AI Strategies and Strategic Positioning
### Norway’s Comprehensive Digital Vision
Minister Karianne Tung outlined Norway’s ambitious strategy to become the world’s most digitalized country by 2030. This vision is supported by substantial investments including the new Olivia supercomputer and development of Norwegian language models. Tung emphasized that Norway’s approach focuses on creating value through flexibility and values-based approaches rather than competing purely on scale.
The Norwegian government has allocated 1.3 billion Norwegian kroner to AI research through six newly selected research centers beginning operations in summer 2025. Norway is also implementing the EU’s AI Act with a national supervisory authority and launching the AI Norway initiative, which includes regulatory sandboxes to foster innovation while ensuring responsible development.
Tung articulated a fundamental principle: “AI must not become a playground for the powerful, it must serve the public good. And small players are often well positioned to drive innovation with purpose.”
### Rwanda’s Innovation Laboratory Approach
Esther Kunda shared Rwanda’s comprehensive AI strategy, highlighting the country’s collaboration with Singapore on an AI Playbook for Small States and partnerships with Carnegie Mellon University for talent development. Rwanda has developed regulatory sandboxes and positioned itself as an innovation laboratory with agile regulatory frameworks.
Kunda emphasized Rwanda’s focus on three key areas: access to high-performance computing, quality data governance including data sharing policies, and skilled workforce development. She noted that Rwanda participates in Digital FOSS, the digital platform of the Forum of Small States (FOSS), a grouping of 108 small states established in 1992, demonstrating a long-standing commitment to collaborative digital development.
## Energy-Efficient AI Technologies
### The Tsetlin Machine Alternative
Professor Ole Christopher Granmo presented the Tsetlin Machine as a revolutionary energy-efficient alternative to current AI technologies. He provided stark statistics about AI energy consumption: “One query with ChatGPT… is the same amount of energy as it takes to light one light bulb for 20 minutes. Furthermore, every month, ChatGPT produces more than 260,000 kilograms of CO2… equal to the emission of 260 flights from New York to London.”
The Tsetlin Machine offers significant energy savings while maintaining explainability—a critical advantage over current “black box” AI systems. Granmo argued that “if we don’t understand the AI, the AI controls us,” emphasizing the importance of maintaining human agency over artificial intelligence systems.
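The report describes the Tsetlin Machine only at a high level. As a purely illustrative sketch (not Granmo’s actual implementation, and all names here are hypothetical), the machine’s basic building block, a two-action Tsetlin automaton that learns from reward and penalty feedback, can be expressed in a few lines:

```python
import random


class TsetlinAutomaton:
    """Simplified two-action Tsetlin automaton with 2*n states.

    States 1..n select action 0; states n+1..2n select action 1.
    Rewards push the state deeper into the current action's half
    (more confident); penalties push it toward the other action.
    """

    def __init__(self, n=3):
        self.n = n
        # Start at the boundary, undecided between the two actions
        self.state = random.choice([n, n + 1])

    def action(self):
        return 0 if self.state <= self.n else 1

    def reward(self):
        # Reinforce the current action
        if self.action() == 0:
            self.state = max(1, self.state - 1)
        else:
            self.state = min(2 * self.n, self.state + 1)

    def penalize(self):
        # Weaken the current action, drifting toward the alternative
        if self.action() == 0:
            self.state += 1
        else:
            self.state -= 1


# Toy environment: action 1 is always rewarded, action 0 penalized
ta = TsetlinAutomaton(n=3)
for _ in range(100):
    if ta.action() == 1:
        ta.reward()
    else:
        ta.penalize()
print(ta.action())  # converges to action 1
```

In a full Tsetlin Machine, teams of such automata decide which input literals to include in conjunctive clauses, which is why the learned model can be read as human-interpretable logic rules rather than as a black box of weights.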
### Norway’s Green Energy Advantage
John M Lervik highlighted Norway’s unique position combining 100% clean energy with cold climate, creating natural advantages for sustainable AI infrastructure. Combined with Norway’s expertise in industrial data, this positions the country to lead in sustainable AI development. A salmon farming demonstration video showcased practical applications of AI in Norway’s key industries.
## Strategic Approaches for Smaller Players
### Domain Expertise Over Scale
Lervik shared Cognite’s success story, demonstrating how startups can compete by focusing on areas where they possess more relevant data than large technology companies. He advocated for strategic focus: “Small players should focus on particular problems and ensure they’re sufficiently big that large companies also care about them to create competitive tension.”
This approach creates competitive dynamics that benefit smaller players while addressing substantial market needs, rather than attempting to compete with tech giants on general-purpose AI.
### Contextual Innovation and Local Solutions
Chinasa T. Okolo emphasized opportunities for smaller nations to lead through contextual innovation, data sovereignty, and peer-to-peer collaboration. She highlighted the importance of addressing AI bias in non-Western contexts, noting that current AI fairness literature focuses primarily on Western concepts while more research is needed on discrimination based on social identities relevant to global majority countries, such as caste or tribal affiliation.
Okolo argued that smaller nations can lead by focusing on contextualized AI approaches rather than trying to build general AI models that compete directly with tech giants.
## Industry Platform Strategies
### Open Source as Democratization Tool
Kojo Boake from Meta discussed how open source models like Llama enable smaller players to fine-tune AI for local purposes while reducing compute costs and increasing transparency. Meta’s approach has enabled applications reaching 3 million students through educational tools and providing agricultural SMS services for farmers.
Boake emphasized that open source models democratize AI access by allowing smaller players to customize solutions for their specific contexts without requiring massive resources to train models from scratch. He also advocated for avoiding “cookie-cutter regulatory approaches,” supporting frameworks suited to local contexts.
### Microsoft’s Sovereign AI Commitment
Jeff Bullwinkel outlined Microsoft’s three-part European digital commitment: expanding sovereign cloud services, enhancing cybersecurity programs, and supporting digital skills development. This approach aims to give smaller nations more control over their digital infrastructure while maintaining responsible AI principles including privacy, security, and ethical frameworks as foundational elements.
## AI Governance and Regulatory Innovation
### Avoiding One-Size-Fits-All Approaches
Multiple speakers agreed that smaller nations should develop governance frameworks suited to their contexts rather than simply copying larger jurisdictions’ regulations. Okolo argued: “Just as we shouldn’t rely on these big tech companies to be the standard of AI development, we also should not rely on these bigger regional blocks or countries to also be the model for AI governance.”
### Balancing Innovation and Responsibility
A nuanced discussion emerged around the appropriate balance between enabling innovation and ensuring responsible AI development. While there was consensus on the importance of both objectives, speakers emphasized different priorities in achieving this balance.
Lervik suggested focusing on value creation first: “We are starting with the cart in front of the horse in many ways. We started to talk about ethical use and privacy… We need to start with understanding how do we create value from AI?”
Other speakers emphasized building responsible AI principles from the beginning, with Bullwinkel stressing Microsoft’s commitment to maintaining data privacy and security as foundational elements.
## Multi-Stakeholder Collaboration
### Essential Partnerships
Strong consensus emerged on the importance of multi-stakeholder collaboration involving companies of all sizes, governments, academia, and civil society. Boake emphasized that “multi-stakeholder collaboration involving big players, medium companies, small regional operators, and academics is essential for effective AI governance.”
This collaborative approach recognizes that effective AI governance requires diverse perspectives and expertise that no single actor possesses.
### Data Sovereignty and Independence
The discussion highlighted data sovereignty as critical for smaller countries seeking to maintain independence from big tech dominance. Okolo emphasized that “data sovereignty, contextual innovation, and peer-to-peer collaboration can help smaller countries control digital resources and increase independence.”
## Key Areas of Consensus
### Strategic Advantages Over Scale
All speakers agreed that small states and companies can compete effectively in AI by leveraging unique advantages like agility, specialized focus, and strategic positioning rather than trying to match the scale of large players.
### Sustainability Imperative
Multiple speakers emphasized the urgent need for energy-efficient AI solutions, with growing awareness of sustainability challenges across the AI community.
### Collaborative Approaches
All speakers emphasized the importance of collaborative approaches to AI development, whether through public-private partnerships, multi-stakeholder governance, or international cooperation.
## Practical Commitments and Next Steps
The session generated concrete commitments: Norway committed to implementing the EU’s AI Act with national supervisory authority and continuing its AI Norway initiative. Meta invited collaboration on using Llama models for national problem-solving and encouraged participation in their Impact Accelerator Program. Microsoft announced its European digital commitments including sovereign cloud services. Rwanda committed to continuing partnerships with academia and other countries for AI talent development.
## Conclusion
The discussion revealed optimism about smaller players’ potential to shape AI’s future through smart partnerships, domain expertise, and leveraging unique national advantages. Rather than being sidelined by AI’s resource demands, smaller players can find opportunities through agility, trust, and values-driven approaches that serve public good.
The session successfully reframed the conversation from defensive survival strategies to empowering leadership approaches in AI development through sustainability, explainability, local context, and collaborative governance. However, significant challenges remain in translating strategic insights into practical implementation, particularly around infrastructure development and maintaining the balance between innovation and responsibility that all speakers recognized as essential.
The collaborative spirit and mature understanding of AI challenges across different stakeholder groups suggests potential for more coordinated and effective AI governance and development strategies globally.
Session transcript
Natalie Becker Aakervik: Hello, everybody. Welcome back. We hope you had a lovely lunch and got to meet and connect with and explore some conversations with people you’ve met. I know the speakers have also been in the networking or rather the lunch session, so if you would have wanted to chat with them, we hope that you got the opportunity to do so. Welcome back. I hope that you’re energized and ready for the next session. Now, good afternoon also to our guests watching globally online, welcoming you back as well to this session presented by IGF 2025 host country, Norway. You heard earlier on that Norway was the second country in the world to get connected to the Internet. That’s an important fact. So, we’re looking at large models and small players leveraging AI in small states and startups. I’m Natalie Becker Aakervik and I’ll be your moderator for this session. Now, over the past few years, we have witnessed something truly extraordinary. AI has moved from the research lab to the boardroom, to the factory floor, to the hospital and increasingly to the center of political and economic power. But here’s the paradox. As AI becomes more accessible in some ways, it’s also becoming harder to compete. The biggest models demand enormous data, compute, investment and resources which are often concentrated in the hands of a few major players. So what does this mean for the rest of us? Well, for small states or for startups and for those not operating at hyperscale, are we sidelined or are we in fact standing at a unique point of opportunity? That is the question. Because here’s what we do know, for example: innovation doesn’t always come from size. It comes from agility, it comes from trust, it comes from deep knowledge and from smart, sometimes surprising collaborations. And today we’re going to explore how small actors can play a big role in shaping the future of AI. 
We’ll talk about regulation that enables, about startups that outmaneuver giants, about AI systems that work where bandwidth and budgets are limited, but creativity is not. And most of all, we’ll talk about partnerships. Very important word. Collaboration has come up very strongly today. Partnerships have come up very strongly today, so we should take note and take that as an actionable takeaway, one of the many. And also partnerships of the kind that really make innovation inclusive and global and sustainable. In other words, how can we move from being small players to being strategic shapers of the digital world? First, we’ll hear from Karianne Tung, Norway’s Minister of Digitalization and Public Governance. Minister Tung will share her vision for how small states like Norway can shape AI policy in a way that not only protects values like fairness and transparency, but also positions countries like hers as competitive innovation hubs in the global AI landscape. So a warm round of applause, please. Minister Tung, the floor is yours. Thank you.
Karianne Tung: Good afternoon, everyone. It is a pleasure being here and to start this very interesting discussion on leveraging artificial intelligence to increase business competitiveness and also to create better public services. I think we all can agree that AI will transform industries and markets as well as individual lives and our whole society. Because managed and prioritized correctly, it can be the tool we need to solve many of the complex challenges that we are up against today. And at the same time, quite understandably, I must say many people feel uncertain and concerned. It’s evident that the AI revolution raises many dilemmas and questions and concerns that we need to address. And as digitalization knows no borders, we need to work together to find the best solutions. Artificial intelligence is no longer just a technological issue, it is a matter of geopolitics. AI must not become a playground for the powerful, it must serve the public good. And small players are often well positioned to drive innovation with purpose. Many groundbreaking and impactful AI innovations come from small labs, agile startups and public agencies. Though some received support or investment from big tech, their creativity and flexibility are essential forces behind AI’s rapid progress. As nations race to harness the power of AI, the development of international standards is emerging as a key strategic tool. By shaping the rules and norms that govern AI, we are not only ensuring safety and trust, but also asserting our values in a rapidly evolving global landscape. For many, the rise of large AI models feels like a race between giants. And indeed, the largest models today are backed by the largest companies drawing on massive data, infrastructure and funding. Being a representative for a small country, I know both the challenges but also the advantages that come with size. We do not have limitless resources. 
But in fact, many small states are already global leaders in digitalization, cybersecurity and tech regulation. These are not accidental achievements. These stem from long-term national strategies that prioritize innovation, citizen trust and smart governance. I would now like to take the opportunity to share with you some perspectives about how Norway is taking significant steps to harness AI in a responsible and innovative way. Our main goal towards 2030, as set out in our national digitalization strategy, is for Norway to become the most and best digitalized country in the world. It is ambitious, but I believe it’s not impossible. We want the business sector to have favorable framework conditions for developing and using AI. And we want all our public sector to utilize AI for greater efficiency and to create better services for our citizens already by 2025, but also for 2030 and the future. To support these ambitious goals, we are now building a national infrastructure for artificial intelligence that can be used for research, for business development and for a more modern public sector, thus placing Norway at the forefront of ethical and safe AI use. We have allocated funding to the National Library, in cooperation with the state company Sigma2, to train and to make available, free of charge, Norwegian and SĂ¡mi language models. These are based on our Norwegian data and our societal values. We are developing our national infrastructure for high-performance computing, and this will support both public sector and private entities in their effort to develop AI applications and utilize AI within different sectors of the economy, but also society. And just last week, we switched on our newest supercomputer. It is called Olivia. It will have 17 times greater computational power than the infrastructure we used until now. 
And of course, we are also working on the implementation of the EU’s Artificial Intelligence Act, with a goal to make it applicable in Norway at the same time as in the rest of the EU. The proposal for the necessary legislation will be sent out for public consultation before this summer. To comply with the requirements of the AI Act, we are also establishing a national supervisory authority and launching what we call AI Norway. AI Norway will be placed in our digitalization agency, and this will be an arena for collaboration, sharing of experience, and also experimenting with AI solutions within different sectors. AI Norway will also, among other things, manage our regulatory sandbox, where Norwegian public sector organizations and companies, especially the SMEs, can experiment with and develop and train AI systems within safe legal frameworks. Also, a couple of weeks ago we allocated 1.3 billion Norwegian kroner to AI-related research. Six newly selected research centres will focus on various societal and technical aspects of developing and applying AI in different fields. The centres will start their operation this summer. The Norwegian School of Economics has also recently published a report on the Norwegian AI tool landscape. The rankings in this report offer a unique perspective on the Norwegian AI company landscape, showcasing both established players and emerging companies. Over 350 Norwegian AI tools and companies are described in this report. 30% of these have been founded in 2022 or later, and 49 of the companies have 10 or fewer employees. So as you see, Norway has a vibrant AI environment and many startups that contribute to this environment with their ideas and their knowledge. We just need to create and sustain favourable conditions for these companies to thrive. 
Support for early-stage ventures, including access to data, talent and sandboxes, is critical in that respect, as is demand from the public sector to utilise AI in developing better services and solving tasks more efficiently. But we also need to focus on international cooperation, knowledge sharing and strategic partnerships. Our common goal should be a balanced and inclusive technological landscape that benefits everyone. To conclude, small players can be leaders in this technological shift, not despite their size, but because of their flexibility and innovation capacity, as well as a value-based approach to AI. So let’s not miss out on this opportunity. Let’s work together and build a future that is open and fair for the many, not for the few. Thank you for your attention.
Natalie Becker Aakervik: Thank you, Minister Tung. Thank you so much. How small states can promote innovation and regulation in AI. Thank you for those insights. Now, we’re joined by an entrepreneur and strategist and one of the leading voices in Norway’s startup ecosystem, and you’ll recognize him right away. John will show us that size doesn’t have to limit ambition, especially when startups focus on domain strength, agility, and trust. And with concrete examples, he’s going to explain how small players can collaborate with large platforms and sometimes even out-innovate them. That’s an idea. And I’m talking about John Markus Lervik, who I’m going to introduce in a second and invite onto stage, and he will be followed by Professor Ole Christopher Granmo. And Professor Granmo will give us a quick tour of the Tsetlin Machine, which is a lightweight, high-accuracy model ideal for smaller actors and edge applications. So it’s a reminder that you don’t need to be a superpower to do powerful AI. You need smart, interpretable design. So now, without further ado, a warm round of applause, please, for Mr. John Markus Lervik from Cognite. Please. Thank you. Thank you. I have a few slides, I think, so if you could put them up, it would be appreciated.
John M Lervik: Yes, excellent. So Cognite was founded at the beginning of 2017, and at that time, we saw a fundamental need to improve the world’s industries. You know, we have a growing world population, we have a climate crisis, and also lately we have seen the geopolitics, which basically creates a demand for us to produce more but using less. So produce more goods, more energy, but with less emission. This is really the problem that Cognite set out to solve. How can we make our industries more efficient and more sustainable and safer? Eight years later we are Norway’s first unicorn, but not only that, we’re also a company that delivers data and AI technologies across the world, across industries. As you can see, both in the energy sector, the life sciences and pharma sector and many other areas. In many ways we have created a new product category for industrial data management, and we are the leader in that particular market. So then what do we do when it comes to AI? Yes, we use AI in our technology to be more efficient, to make our software more intelligent and also to be able to access data, industrial data in this case, in new and better ways. In the same way that you use ChatGPT in your personal lives, Cognite basically provides software and AI to access and use data to optimize how you operate industrial facilities. But we’re not happy with that. We’re not happy just to be a global leader in that area. How can we take it to the next level? And of course we all know about the giants in California, OpenAI for example, they built ChatGPT. There are also many others that have built large language models, which are basically large foundational AI models that use huge training sets of text and grammar to build these LLMs that we all use every day. Going forward, you also have companies like Meta or Facebook. They’re also building their own foundation models, both for text, but also for images. 
They’re investing now $15 billion into Scale AI to basically create context for images, so you can create large foundational models for images. Of course, there’s no way Cognite, or I would say also Norway, can compete at that scale. It requires more investment, resources and talent than we can muster. But if you go into a sector like industry, we see that Cognite, small Cognite if you will, has a lot more industrial data than the large players. We have three orders of magnitude more than NVIDIA and all of these other cloud providers. So there’s an opportunity for us to create foundational models for industrial data, because we have industrial data with context. And then you can start to create another category of AI models beyond the, let’s say, consumer models for text, images, videos, et cetera, and do the same for industrial data. And with that, you can then also start to optimize industrial assets to make them more efficient and more sustainable in new and better ways, without using the conventional machine learning approaches and writing advanced software applications. And we also know, of course, if you look at the graph to the right, it shows how quickly the cost efficiency of different technologies improves. The blue one is electricity; it took a number of decades. The second one was the internet. And the third one is generative AI, where the price curve goes down very quickly. So if you have access to unique data, the price curve for using those unique data to build new models is very attractive and can enable us to create something very unique. But again, it’s very hard even for the largest companies in the world to compete with that. So what does this mean for Norway, or for another small country for that sake? I think, you know, one key learning: you need to stay close to the problem. 
Of course in our case the problem is industry, asset-intensive industries, which you can argue make up 30-40% of the world’s GDP, but it is still a particular problem, and we have particular competence in Norway around industries, process industries, et cetera. Again, as I mentioned, we also have access to data: not more text than OpenAI or more images than Facebook, but access to much more industrial data than any of them, which we can then use with our competence and in our context. Number one and two, I believe, Norway and Cognite in particular have pretty good control over. Then of course we need access to compute. We need GPUs, we need the ability to really train these models and continue to retrain them, so that’s also one key area. I heard the minister talking about buying into some large computers, which is great, but we need a lot more. And of course, to run these computers we also need energy, and that’s another area where I would argue Norway is in a very unique position: we have essentially 100% access to 100% clean, green energy, and this is something we have to nurture. In particular up north, the energy is also very cheap, and it’s cold, so you don’t need as much cooling. So we have an opportunity in Norway, by using our unique strengths and fair advantages, to build technologies that can make us world champions even in some of the bigger and more important areas in the world. Thank you.
Ole Christopher Granmo: The Tsetlin Machine approach to artificial intelligence. My mission is to perform groundbreaking artificial intelligence research that transforms society.
Noel Hurley: The business world invests in the Tsetlin Machine approach to AI, the alternative to deep learning. The big challenge that we see in AI is that it’s computationally incredibly complex.
Daniel Dykes: That’s really what we’re looking at, right? We’re looking at something that is cheaper, it’s cheaper to train, it’s cheaper to run, up to 10,000 times less electricity used per inference per decision.
Rishad A. Shafik: The Tsetlin Machine as an algorithm has intrinsic properties based on logic, which makes it really interesting in terms of developing new types of AI algorithms and applications that are by nature energy efficient. It is accurate, it is explainable, and it uses very little energy.
Ole Christopher Granmo: Now is the time to join the new AI paradigm creating breakthroughs and powerful applications. We are watching a revolution going on in real time. It’s a revolution driven by machine learning: powerful algorithms that can learn to perform tasks from data in health, in legal, in the public sector, everywhere. And the technology has become so powerful that you can solve almost any task with high accuracy. It’s very tempting to use this for all purposes. Also, very recently, we have seen large language models. I have been a skeptic for a very long time; I hadn’t found the real use for them. But a few weeks ago, when I used DeepSeek to dissect my Tsetlin Machine, I knew the game had changed. It was scarily good. So if you can live with the hallucinations, it’s truly a powerful tool, I would almost say super intelligent in some cases. However, when you scale up, it breaks down completely. So it cannot solve the most complex tasks, but it is still very powerful technology. So we are now in an extremely exciting place in human history. But I have some concerns, which I want to talk about today, and I call these concerns betrayals. I’m going to talk about three betrayals. The first one, betrayal one, is energy, because one query with ChatGPT is extremely energy hungry. It uses the same amount of energy as it takes to light one light bulb for 20 minutes. Furthermore, every month, ChatGPT produces CO2 emissions equal to those of 260 flights from New York to London. So this is immense. It’s a huge environmental problem, and it raises concerns because we are running out of energy and it’s not good for the planet. So that’s the first betrayal, because we are endangering our future. The second betrayal is transparency, because for the first time in human history we are taking into use a technology that we do not fully understand. Who would fly a plane that the engineers didn’t understand?
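Granmo’s light-bulb comparison can be sanity-checked with back-of-envelope arithmetic. The figures below are assumptions chosen for illustration (published per-query estimates vary widely, roughly 0.3 to 3 watt-hours), not measurements from the talk.

```python
# Back-of-envelope check of the light-bulb comparison above.
# Both figures are illustrative assumptions, not measured values.
QUERY_WH = 3.0     # assumed energy per ChatGPT query, in watt-hours
BULB_WATTS = 9.0   # assumed power draw of an LED light bulb

minutes_of_light = QUERY_WH / BULB_WATTS * 60
print(f"One query could light the bulb for about {minutes_of_light:.0f} minutes")
```

Under these assumptions the arithmetic lands on roughly 20 minutes, matching the speaker’s figure; with a lower per-query estimate the equivalent lighting time shrinks proportionally.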
The deep learning models, the models that are driving ChatGPT and other large language models, are so complex that we cannot understand what’s going on inside them. And we know that they are unpredictable and that they are full of biases, discrimination and so on. And still they are taken into use. For instance, in the US, algorithms are used to decide the length of sentences, and the judges don’t understand them. And we know that studies show that these models are discriminating: for instance, black people are automatically flagged as high-risk without any context. Furthermore, another example from India: they used AI to decide who is going to get welfare, and thousands of legitimate recipients were removed by the AI because of faulty or weak algorithms. So it is extremely powerful technology, but we have to be careful, because we are endangering the freedom and the rights of people by using it. And the last betrayal, betrayal three, is power, because suddenly it’s the big tech companies that are becoming extremely powerful: they have produced this technology, they own it, and we have to use it. So we are in the pockets of big tech, in my opinion. And that affects everyone, because kids have to learn to adapt to the algorithms, to get likes and to be accepted. Governments adapt to the technology they use, for instance for automated policy-making, calling it objective because it uses AI, which we know is biased and which we know has all these weaknesses. So this is very gloomy, but I also have the solution. Because in Norway we have a new kind of artificial intelligence based on a completely new principle. It goes back to a hidden gem in the history of science, from 1961: a very elegant, extremely efficient model of learning invented by the Soviet mathematician Mikhail Tsetlin. It was kind of hidden and lost, but I saw immediately that this was what I was looking for when I invented the Tsetlin Machine. And it had some very interesting properties.
So I took that learning mechanism and combined it with propositional logic from philosophy, because logic is understandable, and that became the Tsetlin Machine. It’s an efficient and new way to do machine learning, and right now we are outperforming deep learning in sepsis alerting, in understanding lung disease, in understanding heart disease and in several other domains, and this is just the start. Deep learning got this decade; now watch this space. Thank you.
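The combination Granmo describes, learning automata plus propositional logic, produces models whose decisions are readable as logical clauses. As a hedged illustration of just the inference side (the learning step, which is the actual contribution of the Tsetlin Machine, is omitted here), the sketch below hand-picks clauses for the XOR function; a real Tsetlin Machine would learn such clauses from data.

```python
# Minimal sketch of Tsetlin Machine *inference*. A clause is a conjunction
# over literals (an input bit or its negation). Positive clauses vote for
# the class, negative clauses vote against it; the sign of the vote sum
# gives the prediction. Clauses are hand-picked here for illustration.

def clause(x, include, include_negated):
    """AND over the selected literals of binary input x."""
    return all(x[i] for i in include) and all(not x[i] for i in include_negated)


def predict(x, positive_clauses, negative_clauses):
    votes = sum(clause(x, inc, neg) for inc, neg in positive_clauses) \
          - sum(clause(x, inc, neg) for inc, neg in negative_clauses)
    return 1 if votes >= 0 else 0


# XOR of two bits, expressed with two positive and two negative clauses.
positive = [([0], [1]), ([1], [0])]      # x0 AND NOT x1;  x1 AND NOT x0
negative = [([0, 1], []), ([], [0, 1])]  # x0 AND x1;  NOT x0 AND NOT x1
for a in (0, 1):
    for b in (0, 1):
        print((a, b), "->", predict([a, b], positive, negative))
```

Because each clause is a plain logical conjunction, the model’s reasoning can be read off directly, which is the explainability property the speakers emphasize; evaluating such clauses is also far cheaper than the multiply-accumulate arithmetic of deep networks.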
Natalie Becker Aakervik: Thank you so much. Right, thank you also to Jan-Marcus Lervig for the presentation, and also to Professor Ole-Christopher Granmo, professor at the University of Agder and director of CAIR. So now I would like to introduce our next keynote speaker, who is also going to be delivering a presentation here. She’s a fellow at the Center for Technology Innovation at the Brookings Institution and one of the world’s most recognized voices at the intersection of AI, equity and global governance. Chinasa has been recognized as one of the world’s most influential people in AI by Time. Please join me in giving a warm Oslo welcome to Dr. Chinasa T. Okolo. The stage is yours.
Chinasa T. Okolo: All right, so really happy to be here today, and thank you again for the opportunity to speak. I’ll present briefly on how smaller countries, particularly those in the global majority, also known as Africa, Asia, Latin America, the Caribbean and Oceania, can really advance how they pursue AI and make the most of it for their countries and communities. So first, we know that the global AI divide shows disproportionate impacts in these regions, whether it be from labor, climate, social, or economic risks. There’s been much work published on this by the UN and other entities. I was very fortunate to write for the International AI Safety Report that was steered by Professor Yoshua Bengio, who’s the most cited computer scientist alive. We also know that 50% of AI research is produced by the US and China. This map is from Digital Science. And also, the Stanford AI Index indicates that 80% of all VC funding for AI companies is allocated to just these two countries. Again, looking at this map and many others, we see that this also excludes many countries and regions, like Latin America, Asia, Africa, and beyond. We also know that despite these disparities in infrastructure, education capacity, and talent concentration, this marginalization is actually breeding innovation. We see that small and emerging nations aren’t relegating themselves to the sidelines of the global AI ecosystem.
They’re redefining and developing new models for how AI should work for them and their respective needs. While Silicon Valley debates topics like AI alignment and catastrophic risk, smaller nations like Estonia, Rwanda, and Singapore are reshaping AI development, research, and governance on their own terms. For example, Estonia has built an AI-powered digital government, one of the most prominent in the world. It prioritizes citizens, reduces bureaucracy, and advances public sector engagement. Next, we see that Rwanda has led development of the first AI policy and strategy on the African continent. And they’ve done a really great job in increasing their international engagement and cooperation through efforts like the Global AI Summit on Africa, hosted in April of this year in Kigali; I had the fortune of attending, and it was a really great event. And finally, we see that Singapore has made lots of great efforts to steer regional cooperation throughout ASEAN and beyond, and has also steered some really interesting scientific breakthroughs, particularly when it comes to building LLMs and to critical evaluation approaches like red teaming and benchmarking. And these are only a few examples. So to end this presentation, I’ll present three pillars that can help enable global transformation, particularly for smaller countries and those in these marginalized regions. However, this list is not exhaustive and can be applied to larger countries, companies, and institutions as well. So first, data sovereignty can be essential in helping small nations, organizations, et cetera, control their digital resources and increase their independence from large tech corporations.
Estonia has done a really great job in adopting this and integrating it into their digital government, particularly in redefining how they make contracts with large tech companies and in encouraging regional and local talent to help provide the services their government needs. Next, contextual innovation is really important and something that is promoted in approaches like human-centered computing and interaction more broadly. We know that AI designed with local context can leverage efficient methods, for example benchmarking evaluations and even small models, which is something that lots of organizations and even companies are pivoting to, because they notice that these models are actually more efficient and more accurate in many contexts. And again, it’s really important that we understand that these models and efforts should be integrated with indigenous values and knowledge. And finally, peer-to-peer collaboration is essential for ensuring that we can develop regional networks that bypass traditional power hierarchies and combine resources to optimize AI development. These resources can include computing infrastructure and even educational infrastructure, through distributed networks and research centers, so countries and organizations can collaborate and ensure, again, that they’re creating AI that meets their needs. So thank you so much for listening, and I’m looking forward to the panel discussion later.
Natalie Becker Aakervik: Thank you for a great presentation, Chinasa, wonderful insights there as well. And we’re really looking forward to diving deeply into the panel discussion with the insights that our speakers have given us today. But now we have a speaker who is going to be focusing on the AI playbook for small states: what are its main conclusions, how can Rwanda be at the forefront of shaping the future of AI, and how can data sovereignty, innovation and collaboration unlock opportunity? She is the Director General of Innovation and Emerging Technologies, and she is Esther Kunda. So Esther is coming through. There we go, Esther. Thank you so much. Please, a warm round of applause; we know it’s digital, but please welcome Esther Kunda. Thank you very much. I cannot see the screen with the presentation, but we can definitely start.
Esther Kunda: Thank you for having me, and let me first thank Chinasa for also talking about Rwanda in her presentation. To quickly start: when we did this AI playbook for small states, we collaborated with Singapore on this particular playbook, and the idea is that the small states in this playbook have different key takeaways around AI, but also different challenges that we are tackling. If you can go to the next slide, please. So, the AI playbook and the Digital FOSS, which is a forum of small states; on the next slide, thank you. The Forum of Small States was established in 1992 by Singapore, and it’s a platform of 108 small states that discuss common interests; the digital part of it was introduced in October 2022 to ensure that we also continue to collaborate on that. So when we started the playbook in 2023, one of the key aims was to ensure that it serves as a compilation of best practices and experiences from FOSS members on implementing AI strategies and addressing the challenges that we face. As small nations, I think one thing we would all appreciate is that we don’t all face the same challenges as everyone else, and what we were looking at is to really understand how small states can navigate the opportunities and risks that AI poses, but also to provide actionable guidance based on global governance and on best practices from other member states. We were also trying to see if we can, amongst ourselves, create this learning and peer collaboration to address challenges, especially on data, compute resources and funding, as well as the fact that one of the biggest constraints for small states is small domestic markets, and in some instances also being landlocked countries. If you can go to the next, please.
In the key recommendations that we came up with in the playbook itself, what we were looking at was the capability for countries to build foundational AI capabilities for themselves. This covers human resource development: how do we upskill our workforce, especially the existing workforce in the public sector, and how do we make sure that we have the right skills in the existing workforce? This is one of the key areas, because when we talk about AI taking jobs from different demographics, this is what everyone talks about. Then we also look at infrastructure development, access to high-performance computing and the quality of the data that is available to states, and how we innovate around that. And lastly, sustainability concerns. A couple of speakers have also talked about energy, and this is one of the key concerns for which we looked at best practices. The second area we looked at is promoting AI development and use, and here we looked at different areas and best practices in countries where communities are driving AI co-creation, transparent and fair use, and inclusivity. Thirdly, we also looked at how we foster trusted environments, and I believe the Honorable Minister who started talked about Norway creating its own sandbox; I think this is one of the key areas that are very important. So, interoperability, open research, knowledge exchange, and continuously promoting these platforms for all of us. And lastly, and this is also why we’re here, global partnerships and cooperation on AI standards, AI systems, and the explainability and transparency of the AI systems that we do have.
So how are we doing that in Rwanda? If we can go to the next slide: in Rwanda, we’ve spent the last two years with a strategy that really looks at laying the groundwork for what we want to do as a country. First of all, we put in place a strategy and a policy that positions Rwanda as an innovation lab and keeps it one. Second, we’re doing assessment and work around infrastructure and ecosystem readiness. We’ve been working very hard to ensure that we have data that is available. Connectivity and the availability of affordable data is also something that we are working on. And recently, government actually passed a data sharing policy that will enable us to easily share data, but also to avail this data to the private sector in one way or another, so that AI models can be trained on data that is Rwandan and that serves Rwanda. Thirdly, as a country, we continue to position ourselves as a proof of concept: a country that wants to allow innovators to test in an environment that is agile, because what we understand is that the technology is evolving very fast. So, as policy makers, we have to work with how fast it is evolving and ensure that, as we put regulation in place, we are aligned with where it is going. And lastly, of course, we are ensuring that we have the talent and the skills that are required. That’s why we continue to partner with academia, like Carnegie Mellon University Africa, the African Leadership University, and our own universities, to really create the talent that we need and to ensure that in the next few years you can find AI talent within Rwanda.
Lastly, if I go to the last slide, we’re also working to ensure that data is available, as I was mentioning, investing in our innovation ecosystem and startups, and continuing to make partnerships with other countries and institutions to ensure that AI is viable and useful to every citizen in Rwanda. Thank you very much.
Natalie Becker Aakervik: Thank you so much, Esther, for those great insights and for your presentation. And now we’re going to introduce our next two speakers. We’re very pleased to welcome Jeff Bullwinkel. He is the Deputy General Counsel for Microsoft EMEA. Jeff will offer insights into how large platforms like Microsoft are working with small markets and governments to build innovation ecosystems, and how regulation and responsibility can go hand in hand. After Jeff has delivered his remarks, we will hear from Kojo Boakye, Vice President of Public Policy for Africa, the Middle East, and Turkey at Meta. Kojo brings a valuable perspective on how small players and global platforms can co-create inclusive tech futures, especially in regions where connectivity, access and local innovation all intersect. But please first join me in giving a warm welcome and a round of applause to Microsoft. Jeff, the stage is yours.
Jeff Bullwinkel: Well, thank you very much, Natalie. It’s great to be here in Oslo. Welcome to everybody here in the room, and good morning, good afternoon, or good evening to anybody who is following online. And thanks to all of you for the opportunity to offer a couple of perspectives at what I think really is a momentous point in time, a very important moment in the history of technology. It is the era of AI, as has been talked about already today. But as we reflect upon what’s happening in this era of AI, it’s worth also reflecting on the history of technology over the course of time. Think about the moment at which the movable-type printing press was perfected in the mid-15th century, leading to innovation over the course of time that has changed the course of humanity in so many positive ways. These innovations over the course of time: the steam engine, electricity, of course, the telephone, the combustion engine. Naturally, you get into the era of the PC, the internet, mobile telephony, the smartphone, and of course, ultimately, what really is cloud, and cloud in the era of AI. These have been the building blocks that have defined what is today a modern civilization. But the focus at the moment, not surprisingly, is on this era of artificial intelligence. Of course, as we think about that, we must recognize that AI is really nothing new. We’ve talked about it for at least 75 years, since Alan Turing devised the famous Turing test back in the 50s. But it really is the moment in time over the past few years, perhaps two and a half years, when, as Natalie said at the opening, AI has entered the boardroom. In the era of generative AI, the conversation really has changed. That’s why, as you can see, the adoption curve here is changing in this very dramatic way. You see things here today that really are
at this point, taken entirely for granted: the internet, the mobile phone, Facebook as a platform (Meta is here today as well). These technologies took many, many years to reach 100 million users. Not so with ChatGPT, which really is the one you see here at the end; it’s practically a straight line, only about three months to reach 100 million users when it was first launched into the world about two and a half years ago. That’s perhaps not surprising, because it is, after all, a GPT: not as in ChatGPT, but a general purpose technology. That is a technology that has the ability to reshape, to reinvent, to improve in so many ways every aspect of the economy. Unlike a single purpose technology, which is very good at one particular thing, like a sewing machine, GPTs like generative AI have the ability, again, to reshape every field of human endeavor in dramatic and exciting ways. We’re also finding, as we think about this moment we’re in, that there is an additional technology stack being created, a stack that has three fundamental layers to it. One is the infrastructure layer. You need land; you need power, as has been talked about today; you need advanced chips and GPUs; and of course you need data center infrastructure, including what Microsoft and other companies like Meta are building across Europe, across the global north, and across the global south as well. That is the infrastructure layer. Then you have the model layer, the foundation model layer, which includes data, the new lifeblood, as they say, the new oil of today’s economy; the models themselves, whether large language models or smaller language models; and ultimately tooling as well.
Then beyond that, above that, you have, of course, the application layer: the various things that people can do with technology that animate so many aspects of life in really very exciting ways, and ultimately, of course, end users. Now, when you think about this, you realize there is opportunity for growth, for innovation, for progress in so many ways, up and down every layer of this stack. And I think it is very helpful to think about what Minister Tung said at the beginning in her remarks, because she captured it so well in terms of the ability for a small country, a medium-sized country, a large country, for an individual entrepreneur, for a small company, for a large company, for a non-profit, for a hospital, for a school, all to benefit in remarkable ways from this technology, which is really exciting to think about. And, of course, because we are here in Norway, I’ll just have up on the slide here various things that reflect the ways in which companies involved in logistics, in financial services, in healthcare, in IT and professional services are all doing very exciting things here in Norway with these new technologies. So that really is remarkable for us to think about in terms of, again, every different aspect of human endeavor. At the same time, though, it’s also worth reflecting on the fact that trust is key. We are, after all, living in an era of geopolitical volatility. Trust has become an issue, and trust in technology, perhaps, has become an issue as well. And that does mean that companies like Microsoft have to make sure they recognize the responsibilities that come with the role that we occupy. And this is a global audience here in Oslo and online, to be sure.
But equally, because we are here in Europe, I thought I’d spend a moment talking about how we’ve thought about our responsibilities in the European context, through the announcement quite recently, about a month and a half ago, of a new set of European digital commitments that have the five different elements you see on the slide. The first really is a recognition of the fact that we have the opportunity and the responsibility to support a cloud and AI ecosystem that is broad and diverse. That definitely includes the infrastructure that Microsoft itself is building as a company across the Global North and the Global South. But equally, it involves our work in supporting local European providers, and local technology companies in the other markets in which we operate around the world. We want a broad and diverse AI ecosystem on a cloud infrastructure. The second element of our digital commitments is focused on the need for us to be able to provide what I’ll describe as digital resilience, even in an era of geopolitical volatility. This commitment has three different elements to it for us here in Europe, where these concerns have become particularly pronounced over the past little while. The first is our commitment that, as a company, we will in fact oversee and manage our AI data center infrastructure through boards of directors that are comprised exclusively of European nationals. That’s number one. A second element of this commitment to resilience is committing to our customers, to our partners, and to government stakeholders our preparedness to push back against any order from any government to either cease or suspend cloud services. This has actually become a fairly common point in conversations that we have: Microsoft, what would you do in the event you were ordered to cease or suspend cloud services?
Through this commitment, we essentially commit, and we will do so contractually with national European governments, to resist and fight back against any such order, including through litigation if that proves necessary. The third element of this commitment, however, is focused on our need to do more than that. A customer might come to us and say: Microsoft, thank you for committing to resist an order and to litigate it, but what if you lose? What then? And so what we have said here, essentially, is that we will have a mechanism by which we can provide business continuity in the very unlikely event of that happening. Here we’ve talked about our plan to create a repository of software code sitting in Switzerland, overseen by third-party providers, that will be able to provide continuity in the event, again, of a very, very unlikely scenario such as the one people are now talking about. A third commitment we have really builds on what has been years of focusing on the need to protect the privacy, the security, and the sovereignty of data in Europe and, really, data around the world. In Europe, we have already taken significant steps to make sure that our customers’ data is being processed and stored within the European Union and other countries as well, including here in Norway. So that’s been a longstanding investment we’ve made over the course of time. Beyond that, though, we’re doing additional things as well to make sure that we are building sovereign controls into our own cloud services, to address what really are very natural, understandable concerns and questions people have in this moment of geopolitical volatility.
And in fact, some may have seen that our CEO, Satya Nadella, was in Amsterdam just last week on Monday, and he gave a speech at that time in which he announced a new set of sovereignty-related controls that you can read about online in a blog written by Judson Althoff, focusing on our commitment to provide a sovereign public cloud, a sovereign private cloud, and also, in some cases, a national partner cloud. So that really is our third focus here, in terms of making sure we are always focused on the need for sovereignty. Now, cybersecurity, of course, is also top of mind. It is for us, and has been for some time, and it is also, I’m sure, for everyone here in the room and online, recognizing the increasingly pernicious, malicious threats and attacks in cyberspace, often from nation-state actors. We see this frequently as a company. At Microsoft we have the ability to aggregate data, by the way using AI, looking at 77 trillion signals every single day to detect how threat vectors are evolving over the course of time and how we can defend against attacks before they become problems for the communities that we serve. Following the initial announcement of the commitments that we made back in April, we then announced a new European security program focused on making sure we’re doing even more to share threat intelligence and work with governments and other stakeholders in a way that will reduce the threat environment online. And finally, I would say that we’re also very focused on the need to make sure that we are committed to openness. Here, we have a commitment to do even more to support open source software development in the context of this era of AI. We announced about a year and a half ago at the Mobile World Congress in Barcelona a set of AI access principles that really can be summarized in three words. One, again, is access.
Here, the conversation is very much about making sure that everyone can have access to the infrastructure needed to benefit from AI in the way that everyone needs to benefit from AI. So access is number one. Fairness is number two: making sure that once we are giving people access to use our infrastructure, we’re treating them fairly and doing so in the context of interoperable, open standards as well. And finally, there’s an element of responsibility: making sure, again, that we as a company are rising to the challenge of responsibility that comes with the role that we occupy, including in relation to developing our own set of principles around responsible and ethical AI, but ultimately making sure we’re adherent to and compliant with the laws that governments enact around the world. So I’ll pause there and look forward to the conversation in the panel, and with that, invite Kojo to follow me. Thank you.
Kojo Boake: Thank you. Hi, everyone. As some people have mentioned, my name is Kojo Boakye, and I’m the Vice President of Public Policy for Africa, the Middle East, and Turkey at Meta. It’s extremely hard to follow these speakers. I thought Dr. Chinasa Okolo’s presentation was fantastic, so I’m going to try my very, very best; be gentle with me. I have to admit also that I was thrown by the question posed to all the panelists. What do they mean by small states, I thought, in part because I’m mindful that the region I look after, Africa, the Middle East, and Turkey, is full of what some people might deem small states, but they punch well above their weight. The United Arab Emirates had the first AI minister, my friend Minister Al Olama, in 2017, and I’m told people laughed when they said they had appointed an AI minister. People have seen what KSA is doing, and from my unique vantage point, I’ve seen all the work that’s being done in places like Nigeria, Ethiopia, and South Africa on AI. So I have a small bias here: I’ll probably be speaking more to small companies and startups than I will to small states. How do we as a company, Meta, think about this era of AI? Our view is that we need to level the playing field, that no one company or government can own the future and the promise of AI, and we attempt to do that by open sourcing our models. Since 2023, we’ve launched the Llama models, now on Llama 4, which have been downloaded one billion times, more than that now, and we believe that the differences between closed models and open models shouldn’t be seen as binary; we know some are more open than others. But the unique differences, most notably things like transparency and the fact that you have access to the weights and can fine-tune as you wish, create advantages that are good for the world, for Meta, yes, but also good for many of the small startups and small states that wish to use them and are using them.
The advantage of lower compute costs, and the advantage of being able to fine-tune as you wish to meet your local, national, or commercial purpose, is amazing. So is the fact that you can actually see under the hood of how these models are created and, as we think about the risks of AI, that we can learn from other people's attempts to get around their safeguards and misuse them, and also share their learnings in respect of cybersecurity. I said I would have a bias towards some of the small players that are using open-source AI, and certainly Llama, to meet national goals and increasingly continental ones. In education and in public health we've partnered with the African Union Development Agency to create Akili AI, in part because we were told as a company across the Africa region that the small to medium businesses that characterize the region didn't understand how they might scale or grow or work in other countries, how they might take advantage of the new African Continental Free Trade Agreement and work in Ghana and Nigeria, or Kenya and Eswatini. Enabling this to happen through an app where they can access information has proved critically important already, and we hope to grow this with more governments. I'm also mindful that FoondaMate, created as an educational app that now reaches more than 3 million students, enabling many to go from primary school or lower school to junior school, upper school, and on to university, has proved incredibly successful using Llama, and the fact that we open-sourced it has been a key driver in that incredible development. Digital Green, again, an SMS service backed by Llama AI, is helping farmers across Kenya and other parts of East Africa increase yields and improve outcomes.
And Jacaranda Health, cited by many as a stunning example of open-source use alongside traditional technologies, is helping mothers across Kenya in Swahili, and now across Ghana with the Ghanaian national health service, and I'm sure other countries as it grows, to achieve much, much safer outcomes in terms of maternal health. A quick video from FoondaMate. I always felt it was better if someone else speaks to some of the advantages rather than I, if it plays. I'm hoping tech might be able to help. Is there someone from tech who can help with my amazing video? Sorry, you're going to have to miss that one. I'm telling you, it's a blockbuster. That's FoondaMate explaining how they've used open-source AI, not only to meet the needs of students, as I mentioned, students in junior school, students who want to go to university, but also to scale. And this idea of scaling is super, super important, as you can appreciate. I'll take a bit more time to quickly say that much of this is done through our investments in a holistic way. Obviously, the billions we spend on infrastructure and developing models are critically important. You'll hear that from many of the big players. But we also stimulate through the Llama Impact Grants, launched in 2023, which received thousands of applications and have enabled startups from around the world to get onto Llama, use Llama, and grow their businesses. And there are things that my team in the Africa, Middle East, and Turkey region have developed: if you're quick enough, you can apply for the Llama Impact Accelerator Program, which will see us provide mentorship and skills development for small organizations that wish to use Llama and open-source AI to grow their businesses and accelerate their efforts to meet some of the national and local challenges that they face. Of course, this is my 19th IGF. I'm surprised.
None of you said, "You don't look old enough to have done 19 IGFs." I didn't hear that. But this is my 19th IGF. I know a lot is being decided this year about the IGF. And I'm mindful that it's not just me: a number of staff from Meta are here to engage, to collaborate, and to build partnerships. I'm also mindful that this era of AI, the promise that AI holds, as well as negating the risks that many of us are concerned about, will only come about if we collaborate and if we build multi-stakeholder partnerships. I'm here till Thursday, as are many of the team from Meta. We very much look forward to engaging with you, and I look forward to the panel. Thanks ever so much. Appreciate your time.
Natalie Becker Aakervik: We hope you enjoyed these really exciting presentations from our esteemed speakers, who've once again traveled from far and wide to be here with us today and to give their presentations and their insights. Now we are again going to take a deeper dive into what they've touched upon during their exciting presentations, and we're going to invite them back on stage for a panel discussion. So lots of new insights to work with here, setting the foundations to explore it more deeply. I would like to invite back on stage Chinasa Okolo, John Lervik, Jeff Bullwinkel, Ole Christopher Granmo, and Kojo Boake. Hello, I have to ask you, what on earth is going on here? So you see the role that technology plays in our everyday lives. Norway, of course, is known for its salmon; you would know that from all parts of the world, and it is recognized for that globally. And of course, technology and innovation play a large part in making that a sustainable industry. So we hope that you enjoyed that video as well, where technology and AI are helping to save the Atlantic salmon. Now, as we have our esteemed speakers and presenters here on stage, we're going to dive right into the panel discussion. And I would like to start with a question to you, Chinasa. What opportunities do you see for smaller nations and underrepresented regions to really lead in ethical and inclusive AI development?
Chinasa T. Okolo: Great question, and thank you. So many opportunities. For me, something that I mentioned in the presentation is thinking about smaller models. Because of the benefits that they hold, particularly for domains or regions where there are data deserts, I think they can help close the gap a little and address some of the issues we see in current approaches to AI development. In general, there are many opportunities to focus on these contextualized approaches rather than trying to build general AI models, which I don't see as most beneficial for many contexts. And finally, in terms of leveraging smaller models, there's also taking advantage of approaches like model quantization, edge computing, et cetera, which can provide many more opportunities, not only for rural areas or regions in these global majority communities, but also in the US, where I'm based, where we do have rural communities and more marginalized contexts. I think these approaches, pioneered by smaller countries, can actually be beneficial across the global north, quote unquote, and global south more equally as well.
Natalie Becker Aakervik: Chinasa, thank you so much for your answer to that question. John, over to you. From a startup perspective, how can small tech companies compete or collaborate with hyperscalers to create unique value? We have one here, of course.
John M Lervik: I think the obvious part of it is, of course, that you have to focus on something particular and be really good at it. That's, let's say, the easier part. And of course, as I talked about, we in Cognite have focused on asset-intensive industries. But the other part of it, I think, is a little more particular, maybe. You also need to focus on a problem that is sufficiently big that the hyperscaler cares, because in most cases they will just be focused on their own things. So the problem we focus on needs to be sufficiently big that Microsoft, or whether it's Amazon, Google, or others, or Meta, let's say, also cares about it. Then you get some good competitive tension, which is, I think, exactly what we have with Microsoft. It's a fantastic partnership, but also a little bit of tension now and then, where they see that we do things that they would like to do, and vice versa. I think that's the recipe for success.
Natalie Becker Aakervik: Thank you so much for that answer, John. And then, Jeff, how can large companies, tech companies like Microsoft, as John nudged you earlier on, support innovation ecosystems in small states while ensuring fair competition and responsible AI development?
Jeff Bullwinkel: Well, I'd pick up on the point that John made so well, which is that larger companies, despite that question of scale, ultimately are platform companies. Microsoft is now, and really always has been, first and foremost a platform technology company. And so we have done a lot of work, of course, at the infrastructure layer, with data center capacity we've built here across Europe, indeed in Africa, across the Americas and Asia as well; on top of the infrastructure layer you have the model layer and then the application layer. And we're just very excited about the amount of innovation you're seeing up and down that stack. So certainly as a company, one thing that we are clearly focused on trying to do, as I mentioned earlier, is to make sure that there is that broad access that we can provide, and that we operate in a way that allows for openness and interoperability across systems as well. So you have these small, exciting companies that are building on our stack and achieving great success, whether in small states or large states. And indeed, you see this across Africa, which has been talked about a bit today. I've had the privilege of spending some time in Africa over the past year, in Kenya, Tanzania, Rwanda, and Egypt, and, just two weeks ago actually, in Nigeria. And the amount of excitement you see across these countries, and the innovation happening in them with highly localized applications or models, is pretty exciting to see.
Natalie Becker Aakervik: Thank you so much for your response to that, Jeff. And then, Ole Christopher: massive AI models demand massive resources. So how can small states like Norway leverage energy-efficient AI, like the Tsetlin machine, to compete without relying on big tech's infrastructure?
Ole Christopher Granmo: So my vision is to build completely sovereign technology, and that involves building things from scratch. We have two very exciting projects, with the Supreme Court of Norway and the Parliament, where we're going to build a full Tsetlin machine stack, and it will solve the black-box problems in these critical areas of society. So making flagship projects that can inspire others and show that it's possible, that's my main strategy.
Natalie Becker Aakervik: Thank you so much. Now, coming to you, Kojo: what opportunities do you see for smaller nations and underrepresented regions to lead in ethical and inclusive AI development?
Kojo Boake: I think I spoke to some of it, but I wanted to give a quick shout-out, because I think you asked a great question about small companies and what they seek to do with big players. The answer from my learned friend to the right was to create things that big players are interested in, and therefore spark competition. I want to give a shout-out to those small players that aren't interested in that. They're actually interested in making viable businesses or resolving local and contextual issues that may never interest Meta, Microsoft, ChatGPT or whoever else at this point in time, but are extremely interesting to their locality or their nation, as a business or as a solution provider. It may look very different for Norwegian players than it might for a player from Djibouti or Mauritania or Ghana at times as well. In terms of what we can do to stimulate, to answer your question, I hope I did a reasonable job of outlining how Meta as a company, and others who believe in the use of open source, are enabling small players and states by providing this openly; Jeff has spoken about the application layer and people building on that piece. We're investing $65 billion this year in infrastructure. As I hope my slide made a good point of showing, it means that the cost of compute, which is obviously the debilitating barrier that many face, isn't there. We're making our weights available so that people can create solutions using our models. We're enabling people to fine-tune. But at the same time we're also making telling investments: the Llama Impact Grants, launched by the company in 2023, saw thousands of applications, including the two from our region that I mentioned.
The team is continuing to invest through programmatic efforts to work with small companies and, increasingly in the future, governments. So if you're in the room and you're interested in using Llama to solve your national problems, come and see us; we can make telling investments to do that as well. So I hope our approach, this idea that we can level the playing field by making massive investments on behalf of the company to provide open-source AI, is really what's key there.
Natalie Becker Aakervik: Thank you so much for those insights, Kojo, and for the clarity. In terms of Meta, we'll come back to that question. I wanted to ask you, Ole Christopher, again: Norway has renewables and your energy-sipping AI, right? How do we turn this combo into a global blueprint for equitable AI growth, would you say?
Ole Christopher Granmo: Yes, great question, and hardware is a key component here. All the hardware today, from NVIDIA and others, is rigged for deep learning, for matrix multiplications, but there is pioneering work going on at Newcastle University: they build Tsetlin machine hardware for the edge, with extremely promising measurements. To really build up green technology, we have to create an alternative to the NVIDIA technology, for instance, from the bottom up. If we can manage to do that, it would be a big breakthrough in the energy area, yes.
Natalie Becker Aakervik: Thank you so much for answering that question. Chinasa, I want to pose a question to you as well. How do you see these countries and regions potentially avoiding the challenges experienced by larger countries and companies in scaling AI development?
Chinasa T. Okolo: Yeah, great question. I think it's a bit tough to say, because we do see disproportionate impacts occur in these smaller countries that are really just trying to get a foot in the AI race, a term I don't like to use, but more broadly. So I would say it's really about focusing on these contextualized models, and then also understanding how AI can benefit different sectors within their respective countries or regions. AI doesn't need to be, and should not be, adopted for every single little thing. In many cases, basic general algorithms can work much better than AI-optimized ones, or just straight AI models in general. And then also, I would say, it's understanding the different downstream impacts, whether that relates to labor, where we've seen disproportionate impacts in Latin America, East Africa, particularly in Kenya, and throughout Southeast Asia, and trying to shift away from these extractive models to more community-centered models that center and value indigenous frameworks that understand community and value building, et cetera.
Natalie Becker Aakervik: Thank you so much for that. Do you want to add anything in terms of, let’s say, ethical concerns differing between smaller nations and larger ones as it pertains to AI research and development?
Chinasa T. Okolo: Yes. Something I mention a lot is that when we consider the computer science fairness literature, and I'm an academic by training, we see that a lot of these issues are focused on Western concepts. When we consider things like race, which isn't relevant in many African countries aside from South Africa, and also throughout global majority countries, this provides a limited understanding of how AI models can exacerbate bias in these respective settings. There's a lot of interesting work emerging on caste, particularly within the South Asian context, which I think can provide interesting insights into how we can ensure that these models don't discriminate on this respective social identity aspect, along with other things around gender, tribal affiliation, and their intersections, which is really important because these societies are so diverse. And so this is why I'm really in favor of these countries, as they consider investing in AI development, also bolstering their respective academic ecosystems to support the socio-technical research that can really understand all these dimensions of AI development.
Natalie Becker Aakervik: Thank you, Chinasa. And now, talking about policy and frameworks, Jeff, over to you. How should smaller nations and underrepresented regions adapt their governance frameworks to meet their local contexts?
Jeff Bullwinkel: Well, it's interesting to see how the global conversation about AI regulation is developing. I would say, for starters, that as a company we certainly recognized, before the AI Act in Europe was even part of the conversation, our own responsibility to make sure that we're developing and deploying solutions that adhere to a set of clear principles that equate to responsible AI. Things like fairness, of course, transparency, accountability, safety, security, and reliability are paramount for us in what we do and how we do it. Equally, we're just one company in one sector, and it's ultimately up to governments to tell us what the rules are. And there has been a lot of discussion globally that seems to be leading towards something of a consensus in this area. The G7 countries, a couple of years ago in the so-called Hiroshima process during the Japanese presidency, developed some really good ideas in this respect, which were then built on during the Italian presidency and now Canada's as well. That's helping to drive a bit of a global conversation. The OECD, and the UN itself more broadly, have been involved in a way that is, I think, very helpful in getting us closer to a globally cohesive approach to AI regulation, one based upon a risk framework that will create the right guardrails but ultimately be pragmatic and allow for AI adoption. And this is something, again, that I've seen in meeting with policymakers across countries in Africa, for instance: a strong interest in what's happening in Europe. Is that the right model or not? Do we want a model that's going to create the right rules, the right safety frameworks, but also not hinder adoption?
And one comment earlier was made, I think by Chinasa, in relation to what's happening in Singapore, where the government has in fact taken a fairly light-touch approach relative to some countries in Europe, which might indeed become what you see happening elsewhere in the world too, so that people don't hinder the diffusion of AI, which is so critical.
Natalie Becker Aakervik: Thank you so much for that input, Jeff. And now, John, over to you. I may combine two questions, and you can speak to the parts of them that you would like to. What opportunities do you see for smaller nations and underrepresented regions to lead in ethical and inclusive AI development? And then also: what constraints should these countries be aware of as they aim to increase their participation in the global AI ecosystem? Do you want to share a reflection?
John M Lervik: Good questions. As a follow-up to what you just said, I would say, to generalize a little bit, that here in Europe we are putting the cart in front of the horse in many ways. We started by talking about ethical use and privacy and things like that, and it's a fact that our friend to the left would never have been there if they had started with privacy. You know, Facebook is not a privacy company; they basically create value. So I think about it as doing things right versus doing the right thing. We need to start with understanding how we create value from AI. This is also what Microsoft did when they invented Azure and all these things, right? How do we create value, not how do you support privacy, if you will. So I think this is super important. Also, to comment on Singapore: we need to focus on the value, and then of course we need the guardrails, but not the opposite, because then we will never get to the value. Secondly, referring to the comment from my friend here from Meta, which I agree with, there is of course tons of value to be had in small countries like Norway by standing on the shoulders of the giants, the two of you. We as a nation need to leverage that and improve the efficiency of the Norwegian government, of companies, all those things. But my last point is that we also need to aspire beyond that. We cannot just be a country that leverages other people's IP. Of course we are lucky in Norway: we have a lot of energy, both green and brown, if you will; we export more or less all the brown energy, all the gas, to Europe. But we also need to take those unfair advantages, industry and energy, and convert them into our own valuable IP in the future, which we can also export. And not just sit on the shoulders of Microsoft and Meta, which we are very happy to do, but we want to do more.
Natalie Becker Aakervik: Absolutely. Thank you so much.
Kojo Boake: Just to make that super clear, the only thing I was trying to bring was balance. I don't want to fly BA82 or 74 back to Ghana or Nigeria and be lambasted for saying we only want to solve local problems. There are obviously companies that want to go much broader and compete with us, and we welcome that competition. But I did want to flag, because it was missing from this particular panel, that there are small companies as well. Can I just quickly add a couple of things?
Natalie Becker Aakervik: Please, because you were next. I wanted to ask you, you had a choice of a question: how is Meta prioritizing local capacity building, research, and development when building open-source models for the global AI ecosystem? However, I see that you want to respond.
Kojo Boake: I think if I answered that question I'd be at risk of repeating myself. So let me instead respond to some of the points made by Jeff and my friend to the right about how we ensure we don't have a cookie-cutter approach. I think Jeff was extremely diplomatic about some of the problems, and my friend was about some of the problems Europe has faced through what Mr. Draghi calls overregulation, whether that be in respect of GDPR or AI and the threat of AI. We saw very recently that huge players, and I suspect small ones, saw so much uncertainty in that form of regulation that they held off launching some of the products that would be so valuable. So, for example, Meta delayed the launch of Meta AI on WhatsApp and Facebook and so on until it had more clarity. When I travel around and speak to regulators and heads of state and ministers, whether that be in the Middle East, Africa, Turkey, or Azerbaijan, they're very mindful that they don't want a cookie-cutter approach. So that's really, really important. The other piece, and I think this is what the IGF lends itself to and why I'm always so eager to come here, is that to really tackle those ethical problems and challenges, and to create the value that we think AI can have, or believe it can have, we need to have multi-stakeholder conversations like this. Impactful ones: you have the kind that don't go anywhere, but the super impactful ones need to involve the big players, the CSOs, the medium-sized companies that want to be big players, the small companies that just want to operate in their region and solve their issues, and, Dr. Okolo, the academics and everybody else that needs to get involved as well. I just want to stress that point.
Natalie Becker Aakervik: Thank you so much. And with five minutes left, I'm going to give you each 30 to 60 seconds for last thoughts, parting words to leave the audience with, if you would like to. Where shall I start? Any takers? Okay: opportunities, also frameworks. I see that we've covered a lot of ground here, actually. Everybody's been really good with time. Okay, 30 seconds.
John M Lervik: My perspective is that AI is changing everything and we need to lean in, whether it's global companies like Microsoft, or nations like Norway, or industrial companies. There's no time to lose. You know, this is happening. Thank you.
Natalie Becker Aakervik: Thank you. Jeff.
Jeff Bullwinkel: I might build on John's comment by saying that I'm currently reading a book by Jeffrey Ding, a professor in Washington, D.C., called Technology and the Rise of Great Powers. His premise in the book, fundamentally, is that it may not be so much about where a particular technology originated, where it was invented for the very first time, but rather the degree to which a country is successful in adopting it, integrating it across every aspect of society, and leading to widespread diffusion. That's what you hear people wanting to do and talking about across the world, whether in the global north or the global south. It's up to us as companies to provide for that, up to governments to create clarity in relation to the regulatory environment, and, we hope, a level of pragmatism for sure. And it's also up to everyone to work together to make sure people have the right level of skills so they can actually embrace these technologies in the way that they want to.
Natalie Becker Aakervik: Thank you, Jeff. Ole Christopher, anything you'd like to add to the conversation?
Ole Christopher Granmo: I will add a critical point, and that is the essence: today we don't fully understand AI. If we don't understand AI, AI controls us. We have to turn it around. We have to fully understand AI so that it becomes a tool for us, so that we are in control.
Natalie Becker Aakervik: That is essential. Yes. Thank you for that. Chinasa.
Chinasa T. Okolo: I didn't get to speak much on governance, which is my focus as a fellow at Brookings. But there are also many opportunities not just to innovate in the actual development of AI, but in really understanding how it can and should be governed, particularly for smaller nations. Just as we shouldn't rely on these big tech companies to be the standard for AI development, we also should not rely on the bigger regional blocs or countries to be the model for AI governance. And so I think there are many opportunities to innovate in that sector as well.
Natalie Becker Aakervik: Thank you so much. Kojo, do you want the last word?
Kojo Boake: Not much to add. It's one of those panels where everybody is almost in complete agreement about the promise that AI holds and the fact that we need to create policy and commercial frameworks and opportunities that enable us to seize that promise, even, as I understand it, those who think those promises may come from a different technology. We're all in agreement on that piece. And I think what that means for me is ultimately what we just discussed: we don't want to get in the way of seizing those opportunities. As my own bias, I've been doing policy and regulation for 22 years now, going on 23. Again, no one says I don't look old enough, so it shows I do. We just don't want that to get in the way. And I think that's what's most important at this point in time. That's why I'm so thankful to have a forum like this, the Internet Governance Forum, and to be sitting amongst such learned people. And I hope to find solutions to whatever concerns, fears, and challenges may get in the way of us seizing that promise.
Natalie Becker Aakervik: Kojo, thank you so much. Thank you, Ole Christopher. Thank you, Jeff. Thank you, John. Thank you, Chinasa. We really appreciate your input. A big round of applause for our wonderful panel, ladies and gentlemen. Thank you so much for this great conversation. Before you leave the stage, we're going to ask you to please stand here for a group photo, and I'll make some announcements as to what is happening in the rest of the day. I know a photographer is in the house. Okay, there we go, we have a number of photographers. Right. And then, ladies and gentlemen, we invite you back to our conference hall for the rest of the week for sessions presented by IGF host country Norway: meet us right back here for very engaging conversations, as you have seen. You're also invited to explore a rich and diverse program of sessions covering a wide spectrum of crucial topics, from AI to sustainability. Don't forget to visit the open village just outside the hall, and for everything else, the panels, the workshops, the networking opportunities, please check out the IGF 2025 app for the latest updates. On behalf of the organizing team and our hosts here in Norway, we wish you a rewarding, inspiring, and thought-provoking week of dialogue, insight, and collaboration, continuing to build digital governance together. Thank you so much. Thank you.
Karianne Tung
Speech speed
123 words per minute
Speech length
1036 words
Speech time
504 seconds
Small states can become global leaders in digitalization and tech regulation through long-term national strategies that prioritize innovation, citizen trust and smart governance
Explanation
Minister Tung argues that despite limited resources, many small states are already global leaders in digitalization, cybersecurity and tech regulations. These achievements stem from deliberate long-term national strategies rather than being accidental.
Evidence
Norway’s goal to become the most digitalized country in the world by 2030, allocation of 1.3 billion Norwegian kroners to AI research, establishment of six research centers, and over 350 Norwegian AI tools and companies described in a recent report
Major discussion point
Small States and Startups Leveraging AI Opportunities
Topics
Development | Legal and regulatory
Agreed with
– John M Lervik
– Chinasa T. Okolo
– Esther Kunda
– Natalie Becker Aakervik
Agreed on
Small states and players can leverage unique advantages and agility to compete in AI despite resource constraints
AI must serve the public good rather than become a playground for the powerful, with small players often well-positioned to drive innovation with purpose
Explanation
Minister Tung emphasizes that AI should not be dominated by powerful entities but should serve broader public interests. She argues that small players, including labs, startups, and public agencies, are particularly well-positioned to drive meaningful innovation.
Evidence
Many groundbreaking and impactful AI innovations come from small labs, agile startups and public agencies, though some receive support from big tech
Major discussion point
Trust, Transparency and Responsible AI Development
Topics
Human rights | Legal and regulatory
Agreed with
– Jeff Bullwinkel
– Kojo Boake
– Chinasa T. Okolo
– Natalie Becker Aakervik
Agreed on
Collaboration and partnerships are essential for AI development and governance
John M Lervik
Speech speed
167 words per minute
Speech length
1458 words
Speech time
523 seconds
Small players should focus on particular problems that are substantial enough for large companies to also care about them, creating competitive tension
Explanation
Lervik argues that startups need to focus on specific, substantial problems that are large enough to attract attention from major tech companies. This creates beneficial competitive tension and partnership opportunities.
Evidence
Cognite’s focus on asset-intensive industries and their partnership with Microsoft, which creates both collaboration and competitive tension
Major discussion point
Small States and Startups Leveraging AI Opportunities
Topics
Economic | Development
Agreed with
– Karianne Tung
– Chinasa T. Okolo
– Esther Kunda
– Natalie Becker Aakervik
Agreed on
Small states and players can leverage unique advantages and agility to compete in AI despite resource constraints
Small companies can leverage unique data access in specific domains like industrial data to compete with giants who have more general consumer data
Explanation
Lervik explains that while small companies cannot compete with the scale of large tech companies in general data, they can excel by having superior access to specialized data in particular sectors. This allows them to create foundational models for specific industries.
Evidence
Cognite has three orders of magnitude more industrial data than NVIDIA and other cloud providers, enabling them to create foundational models for industrial data
Major discussion point
Small States and Startups Leveraging AI Opportunities
Topics
Economic | Infrastructure
Norway’s combination of 100% clean energy and cold climate creates unique advantages for energy-efficient AI development
Explanation
Lervik argues that Norway has distinctive advantages for AI development through its access to 100% clean, cheap energy and a cold climate that reduces cooling needs. These natural advantages should be leveraged to build world-class AI technologies.
Evidence
Norway has essentially 100% access to clean energy, particularly cheap energy up north, and cold temperatures that reduce cooling requirements for data centers
Major discussion point
Energy-Efficient and Alternative AI Technologies
Topics
Infrastructure | Development
Agreed with
– Daniel Dykes
– Ole Christopher Granmo
– Noel Hurley
– Rishad A. Shafik
Agreed on
Energy efficiency in AI is a critical concern requiring alternative approaches
AI regulation should focus on creating value first rather than starting with privacy and ethical constraints, as value creation enables proper governance
Explanation
Lervik argues that Europe has approached AI regulation backwards by prioritizing privacy and ethics before establishing value creation. He suggests focusing first on how to create value from AI, then adding appropriate guardrails.
Evidence
Facebook/Meta would never have succeeded if they started with privacy concerns, as they fundamentally create value first. Microsoft similarly focused on creating value with Azure before adding privacy protections
Major discussion point
AI Governance and Regulatory Frameworks
Topics
Legal and regulatory | Economic
Agreed with
– Jeff Bullwinkel
– Kojo Boake
– Chinasa T. Okolo
Agreed on
AI governance should be pragmatic and avoid hindering innovation while ensuring responsible development
Disagreed with
– Jeff Bullwinkel
Disagreed on
Approach to AI regulation – value creation first vs. ethics first
Daniel Dykes
Speech speed
149 words per minute
Speech length
33 words
Speech time
13 seconds
The Tsetlin Machine offers an energy-efficient alternative to deep learning, using up to 10,000 times less electricity per inference while maintaining accuracy and explainability
Explanation
Dykes presents the Tsetlin Machine as a revolutionary alternative to current AI approaches that is significantly more energy efficient. This technology offers comparable accuracy while being much cheaper to train and operate.
Evidence
The Tsetlin Machine uses up to 10,000 times less electricity per inference per decision compared to traditional deep learning approaches
Major discussion point
Energy-Efficient and Alternative AI Technologies
Topics
Infrastructure | Development
Agreed with
– John M Lervik
– Ole Christopher Granmo
– Noel Hurley
– Rishad A. Shafik
Agreed on
Energy efficiency in AI is a critical concern requiring alternative approaches
Ole Christopher Granmo
Speech speed
108 words per minute
Speech length
1027 words
Speech time
565 seconds
Current AI technology like ChatGPT is extremely energy hungry, with one query consuming the same energy as lighting a bulb for 20 minutes
Explanation
Granmo highlights the massive energy consumption of current AI systems as a major environmental concern. He presents specific data showing the enormous carbon footprint of popular AI services.
Evidence
One ChatGPT query uses the same energy as lighting a light bulb for 20 minutes, and ChatGPT produces more than 260,000 tons of CO2 monthly, equivalent to 260 flights from New York to London
Major discussion point
Energy-Efficient and Alternative AI Technologies
Topics
Development | Infrastructure
Agreed with
– John M Lervik
– Daniel Dykes
– Noel Hurley
– Rishad A. Shafik
Agreed on
Energy efficiency in AI is a critical concern requiring alternative approaches
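Granmo’s light-bulb comparison can be turned into a rough per-query energy figure. The sketch below assumes a 10 W LED bulb, a wattage not given in the talk, so the numbers are illustrative back-of-envelope estimates only:

```python
# Back-of-envelope for the "light bulb for 20 minutes" comparison.
# Assumption (not from the talk): a 10 W LED bulb.
bulb_watts = 10
minutes = 20

wh_per_query = bulb_watts * minutes / 60            # watt-hours per query
kwh_per_million = wh_per_query * 1_000_000 / 1000   # kWh per million queries

print(round(wh_per_query, 2))    # ~3.33 Wh per query
print(round(kwh_per_million))    # ~3333 kWh per million queries
```

Under this assumption a million queries would consume on the order of a few megawatt-hours, which is why per-inference efficiency, the Tsetlin Machine’s selling point in this session, matters at scale.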
Alternative hardware designed specifically for Tsetlin machines could provide a breakthrough in green AI technology
Explanation
Granmo argues that current hardware from companies like NVIDIA is optimized for deep learning, but new hardware designed specifically for Tsetlin machines could create a fundamental breakthrough in energy-efficient AI.
Evidence
University of Newcastle is building Tsetlin machine hardware that shows extremely promising measurements for edge computing applications
Major discussion point
Energy-Efficient and Alternative AI Technologies
Topics
Infrastructure | Development
Current AI systems are black boxes that we don’t fully understand, creating risks when deployed in critical areas like criminal justice and healthcare
Explanation
Granmo warns about the dangers of deploying AI systems that are too complex to understand fully. He argues this creates serious risks of bias and discrimination in critical applications.
Evidence
US algorithms used to decide sentence lengths discriminate against Black people who are automatically flagged as high-risk without context; AI systems in India removed thousands of legitimate welfare recipients due to faulty algorithms
Major discussion point
Trust, Transparency and Responsible AI Development
Topics
Human rights | Legal and regulatory
Understanding and controlling AI technology is essential – if we don’t understand AI, then AI controls us rather than serving as our tool
Explanation
Granmo emphasizes the fundamental importance of maintaining human control over AI systems through understanding. He argues that incomprehensible AI systems reverse the proper relationship between humans and technology.
Major discussion point
Trust, Transparency and Responsible AI Development
Topics
Human rights | Legal and regulatory
Disagreed with
– Kojo Boake
Disagreed on
Technology transparency and control philosophy
Chinasa T. Okolo
Speech speed
152 words per minute
Speech length
1516 words
Speech time
598 seconds
Smaller nations can lead by focusing on contextualized AI approaches rather than trying to build general AI models
Explanation
Okolo argues that smaller countries should avoid trying to compete in building general AI models and instead focus on developing AI solutions tailored to their specific contexts and needs. This approach can be more beneficial and achievable.
Evidence
Smaller models and approaches like model quantization and edge computing can benefit rural areas and marginalized contexts in both global majority and global north communities
Major discussion point
Small States and Startups Leveraging AI Opportunities
Topics
Development | Infrastructure
Agreed with
– Karianne Tung
– John M Lervik
– Esther Kunda
– Natalie Becker Aakervik
Agreed on
Small states and players can leverage unique advantages and agility to compete in AI despite resource constraints
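One of the efficiency techniques Okolo names, model quantization, can be illustrated in a few lines. The sketch below is a generic symmetric 8-bit scheme written for illustration, not any specific toolkit’s implementation: weights are stored as small integers plus a single scale factor, roughly quartering memory versus 32-bit floats, which is part of what makes edge deployment in low-resource settings feasible.

```python
import random

# Minimal symmetric int8 quantization sketch: each weight becomes an
# integer in [-127, 127] plus one shared float scale factor.
def quantize_int8(weights):
    m = max(abs(w) for w in weights) or 1.0
    scale = m / 127.0
    return [round(w / scale) for w in weights], scale

def dequantize(q, scale):
    return [v * scale for v in q]

random.seed(0)
weights = [random.gauss(0, 1) for _ in range(1000)]

q, scale = quantize_int8(weights)
restored = dequantize(q, scale)

# Rounding to the nearest integer step bounds the error by half a step.
max_err = max(abs(a - b) for a, b in zip(weights, restored))
print(max_err <= 0.5 * scale)  # True
```

The trade-off is a small, bounded reconstruction error in exchange for a model that fits in a fraction of the memory and runs on cheaper integer hardware.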
The global AI divide shows disproportionate impacts on regions like Africa, Asia, and Latin America, even as marginalization in these regions breeds innovation
Explanation
Okolo describes how certain regions face disproportionate negative impacts from AI development while being excluded from its benefits. However, she notes that this marginalization is actually spurring innovative approaches to AI development.
Evidence
50% of AI research comes from US and China, 80% of VC funding goes to these two countries, yet marginalized regions are developing new models for AI that work for their needs
Major discussion point
Global AI Equity and Inclusive Development
Topics
Development | Human rights
Small and emerging nations are redefining AI development on their own terms rather than relegating themselves to the sidelines
Explanation
Okolo argues that despite infrastructure and resource disparities, smaller nations are not accepting a passive role in AI development. Instead, they are actively creating new approaches that work for their specific contexts and needs.
Evidence
Estonia built an AI-powered digital government, Rwanda developed the first AI policy on the African continent, Singapore is leading regional cooperation and scientific breakthroughs in LLMs
Major discussion point
Global AI Equity and Inclusive Development
Topics
Development | Legal and regulatory
Data sovereignty, contextual innovation, and peer-to-peer collaboration can help smaller countries control digital resources and increase independence
Explanation
Okolo presents three key pillars that can enable smaller countries to transform their AI capabilities while maintaining control over their digital resources and reducing dependence on large tech corporations.
Evidence
Estonia has integrated data sovereignty into digital government and redefined contracts with large tech companies; contextual innovation leverages efficient methods and indigenous values; peer-to-peer collaboration creates regional networks that bypass traditional power hierarchies
Major discussion point
Global AI Equity and Inclusive Development
Topics
Legal and regulatory | Development
Agreed with
– Karianne Tung
– Jeff Bullwinkel
– Kojo Boake
– Natalie Becker Aakervik
Agreed on
Collaboration and partnerships are essential for AI development and governance
Countries should innovate in AI governance models rather than solely relying on bigger regional blocks or countries as standards
Explanation
Okolo argues that just as smaller countries shouldn’t rely solely on big tech companies for AI development standards, they also shouldn’t simply copy governance models from larger countries or regional blocks. Innovation in governance is equally important.
Major discussion point
AI Governance and Regulatory Frameworks
Topics
Legal and regulatory | Development
Agreed with
– John M Lervik
– Jeff Bullwinkel
– Kojo Boake
Agreed on
AI governance should be pragmatic and avoid hindering innovation while ensuring responsible development
Esther Kunda
Speech speed
144 words per minute
Speech length
1006 words
Speech time
416 seconds
Small states should position themselves as innovation labs and testing environments with agile regulatory frameworks
Explanation
Kunda argues that small states can leverage their agility advantage by positioning themselves as testing grounds for AI innovation. This requires regulatory frameworks that can evolve quickly alongside rapidly advancing technology.
Evidence
Rwanda has positioned itself as an innovation lab and created agile regulatory frameworks; government passed a data sharing policy to enable AI model training on Rwandan data
Major discussion point
Small States and Startups Leveraging AI Opportunities
Topics
Legal and regulatory | Development
Agreed with
– Karianne Tung
– John M Lervik
– Chinasa T. Okolo
– Natalie Becker Aakervik
Agreed on
Small states and players can leverage unique advantages and agility to compete in AI despite resource constraints
Rwanda has developed a comprehensive AI strategy focusing on data sharing policies, regulatory sandboxes, and partnerships with academia
Explanation
Kunda outlines Rwanda’s systematic approach to AI development, which includes policy frameworks, infrastructure development, and talent development through academic partnerships. This represents a holistic national strategy for AI adoption.
Evidence
Rwanda has an AI strategy and policy, data sharing policy, partnerships with Carnegie Mellon University and Africa Leadership University, and assessment of infrastructure and ecosystem readiness
Major discussion point
National AI Strategies and Infrastructure Development
Topics
Development | Legal and regulatory
Countries need access to high-performance computing, quality data, and skilled workforce development to build foundational AI capabilities
Explanation
Kunda identifies the key building blocks that countries need to establish before they can effectively leverage AI. She emphasizes that foundational capabilities must be developed systematically across multiple areas.
Evidence
Rwanda is working on connectivity, affordable data access, data sharing policies, partnerships with universities for talent development, and infrastructure readiness assessments
Major discussion point
National AI Strategies and Infrastructure Development
Topics
Infrastructure | Development
Jeff Bullwinkel
Speech speed
175 words per minute
Speech length
2861 words
Speech time
977 seconds
Large platforms should provide broad access, fair treatment, and interoperable open standards while maintaining responsibility
Explanation
Bullwinkel outlines Microsoft’s approach to supporting smaller players through their AI access principles. He emphasizes that large platforms have a responsibility to enable broad participation in AI development while maintaining ethical standards.
Evidence
Microsoft’s AI access principles focus on three areas: access to infrastructure, fairness in treatment with interoperable open standards, and responsibility in developing ethical AI principles and legal compliance
Major discussion point
Open Source AI and Platform Collaboration
Topics
Infrastructure | Legal and regulatory
Agreed with
– Karianne Tung
– Kojo Boake
– Chinasa T. Okolo
– Natalie Becker Aakervik
Agreed on
Collaboration and partnerships are essential for AI development and governance
Large tech companies must build sovereign controls, resist government orders to suspend services, and maintain data privacy and security
Explanation
Bullwinkel describes Microsoft’s commitments to digital resilience, including governance structures, resistance to government interference, and data sovereignty measures. These commitments address concerns about geopolitical volatility and trust in technology.
Evidence
Microsoft commits to European-only boards of directors for AI infrastructure, contractual commitments to resist orders to cease services, business continuity mechanisms through Swiss code repositories, and sovereign cloud options
Major discussion point
Trust, Transparency and Responsible AI Development
Topics
Legal and regulatory | Cybersecurity
Disagreed with
– John M Lervik
Disagreed on
Approach to AI regulation – value creation first vs. ethics first
Success in AI adoption may depend more on widespread diffusion and integration across society rather than where the technology originated
Explanation
Bullwinkel references research suggesting that the key to benefiting from AI may not be inventing the technology first, but rather successfully adopting and integrating it throughout society. This perspective offers hope for countries that are not AI originators.
Evidence
Reference to Jeffrey Ding’s book ‘Technology and the Rise of Great Powers’ which argues that successful adoption and diffusion matters more than original invention
Major discussion point
Global AI Equity and Inclusive Development
Topics
Development | Economic
Agreed with
– John M Lervik
– Kojo Boake
– Chinasa T. Okolo
Agreed on
AI governance should be pragmatic and avoid hindering innovation while ensuring responsible development
Kojo Boake
Speech speed
164 words per minute
Speech length
2185 words
Speech time
796 seconds
Open source models like Llama enable smaller players to fine-tune AI for local purposes while reducing compute costs and increasing transparency
Explanation
Boake argues that Meta’s open source approach levels the playing field by providing access to AI models that can be customized for local needs. This approach offers advantages in cost, transparency, and flexibility that are particularly beneficial for smaller players.
Evidence
Llama models have been downloaded one billion times; advantages include lower compute costs, ability to fine-tune for local purposes, transparency through access to model weights, and shared learning on cybersecurity
Major discussion point
Open Source AI and Platform Collaboration
Topics
Infrastructure | Development
Disagreed with
– Ole Christopher Granmo
Disagreed on
Technology transparency and control philosophy
Meta’s open source approach has enabled applications like educational tools reaching 3 million students and agricultural SMS services for farmers
Explanation
Boake provides concrete examples of how open source AI models are being used to create impactful applications in education, agriculture, and healthcare across Africa. These examples demonstrate the practical benefits of open source AI for development.
Evidence
Fundimate educational app reaches 3 million students; Digital Green SMS service helps farmers in Kenya increase yields; Jacaranda Health helps mothers in Kenya and Ghana with maternal health in local languages; Akili AI partnership with African Union Development Agency
Major discussion point
Open Source AI and Platform Collaboration
Topics
Development | Sociocultural
Smaller nations should avoid cookie-cutter regulatory approaches and develop frameworks suited to their local contexts rather than copying larger regions
Explanation
Boake warns against simply copying regulatory frameworks from larger jurisdictions like Europe, noting that overregulation can delay valuable AI deployments. He advocates for context-appropriate regulation that doesn’t hinder innovation.
Evidence
Meta delayed launch of Meta AI on WhatsApp and Facebook in Europe due to regulatory uncertainty; regulators and heads of state in Middle East, Africa, and Turkey are mindful of avoiding cookie-cutter approaches
Major discussion point
AI Governance and Regulatory Frameworks
Topics
Legal and regulatory | Development
Agreed with
– John M Lervik
– Jeff Bullwinkel
– Chinasa T. Okolo
Agreed on
AI governance should be pragmatic and avoid hindering innovation while ensuring responsible development
Multi-stakeholder collaboration involving big players, medium companies, small regional operators, and academics is essential for effective AI governance
Explanation
Boake emphasizes that addressing AI’s ethical challenges and realizing its value requires inclusive collaboration across all types of stakeholders. He argues that the Internet Governance Forum provides an ideal platform for such multi-stakeholder engagement.
Evidence
Need for impactful conversations involving big players, CSOs, medium-sized companies, small regional companies, academics, and other stakeholders; IGF provides platform for such collaboration
Major discussion point
AI Governance and Regulatory Frameworks
Topics
Legal and regulatory | Development
Agreed with
– Karianne Tung
– Jeff Bullwinkel
– Chinasa T. Okolo
– Natalie Becker Aakervik
Agreed on
Collaboration and partnerships are essential for AI development and governance
Natalie Becker Aakervik
Speech speed
148 words per minute
Speech length
2247 words
Speech time
908 seconds
Innovation doesn’t always come from size but from agility, trust, deep knowledge and smart collaborations
Explanation
Aakervik argues that while the biggest AI models require enormous resources concentrated in few major players, innovation can still emerge from smaller actors through their unique advantages. She emphasizes that small actors can leverage agility, trust-building capabilities, specialized knowledge, and strategic partnerships to compete effectively in the AI landscape.
Major discussion point
Small States and Startups Leveraging AI Opportunities
Topics
Development | Economic
Agreed with
– Karianne Tung
– John M Lervik
– Chinasa T. Okolo
– Esther Kunda
Agreed on
Small states and players can leverage unique advantages and agility to compete in AI despite resource constraints
Small actors can move from being small players to strategic shapers of the digital world through partnerships and collaboration
Explanation
Aakervik emphasizes that partnerships and collaboration have emerged as key themes and actionable takeaways from discussions. She argues that through the right kind of partnerships, small actors can transform from passive participants to active shapers of digital innovation that is inclusive, global, and sustainable.
Evidence
Partnerships and collaboration have come up very strongly as actionable takeaways throughout the day’s discussions
Major discussion point
Small States and Startups Leveraging AI Opportunities
Topics
Development | Economic
Agreed with
– Karianne Tung
– Jeff Bullwinkel
– Kojo Boake
– Chinasa T. Okolo
Agreed on
Collaboration and partnerships are essential for AI development and governance
Technology and AI play crucial roles in making traditional industries like salmon farming sustainable
Explanation
Aakervik highlights how Norway’s globally recognized salmon industry benefits from technology and AI integration to maintain sustainability. This demonstrates how AI can be applied to traditional sectors to solve environmental and operational challenges.
Evidence
Video showing how technology and AI is helping to save the Atlantic salmon, with Norway being globally known for its salmon industry
Major discussion point
AI Applications in Traditional Industries
Topics
Development | Infrastructure
Noel Hurley
Speech speed
110 words per minute
Speech length
29 words
Speech time
15 seconds
The Tsetlin Machine approach offers a computationally simpler alternative to deep learning that is cheaper to train and operate
Explanation
Hurley presents the Tsetlin Machine as a revolutionary alternative to current AI approaches that addresses the major challenge of computational complexity in AI. This technology offers significant cost advantages in both training and operational phases while maintaining effectiveness.
Evidence
The Tsetlin Machine is cheaper to train, cheaper to run, and uses up to 10,000 times less electricity per inference per decision compared to traditional approaches
Major discussion point
Energy-Efficient and Alternative AI Technologies
Topics
Infrastructure | Development
Agreed with
– John M Lervik
– Daniel Dykes
– Ole Christopher Granmo
– Rishad A. Shafik
Agreed on
Energy efficiency in AI is a critical concern requiring alternative approaches
Rishad A. Shafik
Speech speed
132 words per minute
Speech length
45 words
Speech time
20 seconds
The Tsetlin Machine algorithm has intrinsic logic-based properties that make it naturally energy efficient, accurate, and explainable
Explanation
Shafik argues that the Tsetlin Machine’s foundation in logic gives it inherent advantages over other AI approaches. These properties make it particularly suitable for developing new types of AI algorithms and applications that prioritize energy efficiency without sacrificing performance or interpretability.
Evidence
The algorithm is based on logic which makes it energy efficient, accurate, and explainable by nature
Major discussion point
Energy-Efficient and Alternative AI Technologies
Topics
Infrastructure | Development
Agreed with
– John M Lervik
– Daniel Dykes
– Ole Christopher Granmo
– Noel Hurley
Agreed on
Energy efficiency in AI is a critical concern requiring alternative approaches
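The logic-based character Shafik describes can be made concrete with a small sketch. The following is a hand-crafted, inference-only illustration, not any speaker’s actual implementation and with the learning step omitted: a Tsetlin Machine classifies by evaluating clauses, which are human-readable conjunctions of input literals, and summing their votes. This requires only Boolean operations, not the multiply-accumulate arithmetic that dominates deep learning’s energy budget.

```python
# Inference-only Tsetlin Machine sketch with hand-crafted clauses (no
# learning). Each clause is a conjunction of literals; positive clauses
# vote +1 and negative clauses vote -1; the output is the vote sign.

def clause(literals, x):
    """Evaluate a conjunction of literals.
    literals: list of (index, negated) pairs; x: tuple of 0/1 inputs."""
    return all((not x[i]) if neg else x[i] for i, neg in literals)

def tm_predict(pos_clauses, neg_clauses, x):
    votes = sum(clause(c, x) for c in pos_clauses) \
          - sum(clause(c, x) for c in neg_clauses)
    return 1 if votes > 0 else 0

# Hand-crafted clauses recognizing XOR of two bits.
pos = [[(0, False), (1, True)],   # x0 AND NOT x1
       [(0, True),  (1, False)]]  # NOT x0 AND x1
neg = [[(0, False), (1, False)],  # x0 AND x1
       [(0, True),  (1, True)]]   # NOT x0 AND NOT x1

for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, tm_predict(pos, neg, x))
```

Because every clause is a readable rule, any prediction can be explained by listing the clauses that fired, which is the explainability property several speakers contrasted with black-box deep learning.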
Agreements
Agreement points
Small states and players can leverage unique advantages and agility to compete in AI despite resource constraints
Speakers
– Karianne Tung
– John M Lervik
– Chinasa T. Okolo
– Esther Kunda
– Natalie Becker Aakervik
Arguments
Small states can become global leaders in digitalization and tech regulation through long-term national strategies that prioritize innovation, citizen trust and smart governance
Small players should focus on particular problems that are substantial enough for large companies to also care about them, creating competitive tension
Smaller nations can lead by focusing on contextualized AI approaches rather than trying to build general AI models
Small states should position themselves as innovation labs and testing environments with agile regulatory frameworks
Innovation doesn’t always come from size but from agility, trust, deep knowledge and smart collaborations
Summary
All speakers agreed that small states and companies can compete effectively in AI by leveraging their unique advantages like agility, specialized focus, and strategic positioning rather than trying to match the scale of large players
Topics
Development | Economic
Energy efficiency in AI is a critical concern requiring alternative approaches
Speakers
– John M Lervik
– Daniel Dykes
– Ole Christopher Granmo
– Noel Hurley
– Rishad A. Shafik
Arguments
Norway’s combination of 100% clean energy and cold climate creates unique advantages for energy-efficient AI development
The Tsetlin Machine offers an energy-efficient alternative to deep learning, using up to 10,000 times less electricity per inference while maintaining accuracy and explainability
Current AI technology like ChatGPT is extremely energy hungry, with one query consuming the same energy as lighting a bulb for 20 minutes
The Tsetlin Machine approach offers a computationally simpler alternative to deep learning that is cheaper to train and operate
The Tsetlin Machine algorithm has intrinsic logic-based properties that make it naturally energy efficient, accurate, and explainable
Summary
Multiple speakers emphasized the urgent need for energy-efficient AI solutions, with several promoting the Tsetlin Machine as a viable alternative to current energy-intensive approaches
Topics
Infrastructure | Development
Collaboration and partnerships are essential for AI development and governance
Speakers
– Karianne Tung
– Jeff Bullwinkel
– Kojo Boake
– Chinasa T. Okolo
– Natalie Becker Aakervik
Arguments
AI must serve the public good rather than become a playground for the powerful, with small players often well-positioned to drive innovation with purpose
Large platforms should provide broad access, fair treatment, and interoperable open standards while maintaining responsibility
Multi-stakeholder collaboration involving big players, medium companies, small regional operators, and academics is essential for effective AI governance
Data sovereignty, contextual innovation, and peer-to-peer collaboration can help smaller countries control digital resources and increase independence
Small actors can move from being small players to strategic shapers of the digital world through partnerships and collaboration
Summary
All speakers emphasized the importance of collaborative approaches to AI development, whether through public-private partnerships, multi-stakeholder governance, or international cooperation
Topics
Development | Legal and regulatory
AI governance should be pragmatic and avoid hindering innovation while ensuring responsible development
Speakers
– John M Lervik
– Jeff Bullwinkel
– Kojo Boake
– Chinasa T. Okolo
Arguments
AI regulation should focus on creating value first rather than starting with privacy and ethical constraints, as value creation enables proper governance
Success in AI adoption may depend more on widespread diffusion and integration across society rather than where the technology originated
Smaller nations should avoid cookie-cutter regulatory approaches and develop frameworks suited to their local contexts rather than copying larger regions
Countries should innovate in AI governance models rather than solely relying on bigger regional blocks or countries as standards
Summary
Speakers agreed that AI governance should prioritize enabling innovation and value creation while being tailored to local contexts rather than copying one-size-fits-all approaches
Topics
Legal and regulatory | Development
Similar viewpoints
Both speakers expressed concerns about the risks and inequities of current AI systems, emphasizing the need for more transparent and inclusive approaches to AI development
Speakers
– Ole Christopher Granmo
– Chinasa T. Okolo
Arguments
Current AI systems are black boxes that we don’t fully understand, creating risks when deployed in critical areas like criminal justice and healthcare
The global AI divide shows disproportionate impacts on regions like Africa, Asia, and Latin America, even as marginalization in these regions breeds innovation
Topics
Human rights | Legal and regulatory
Both speakers advocated for specialized, domain-specific approaches to AI rather than trying to compete in general-purpose AI development
Speakers
– John M Lervik
– Chinasa T. Okolo
Arguments
Small companies can leverage unique data access in specific domains like industrial data to compete with giants who have more general consumer data
Smaller nations can lead by focusing on contextualized AI approaches rather than trying to build general AI models
Topics
Development | Economic
Both representatives from major tech companies emphasized their commitment to enabling smaller players through open access, fair treatment, and transparent approaches
Speakers
– Jeff Bullwinkel
– Kojo Boake
Arguments
Large platforms should provide broad access, fair treatment, and interoperable open standards while maintaining responsibility
Open source models like Llama enable smaller players to fine-tune AI for local purposes while reducing compute costs and increasing transparency
Topics
Infrastructure | Development
Unexpected consensus
Tech industry representatives advocating for regulatory restraint
Speakers
– John M Lervik
– Jeff Bullwinkel
– Kojo Boake
Arguments
AI regulation should focus on creating value first rather than starting with privacy and ethical constraints, as value creation enables proper governance
Success in AI adoption may depend more on widespread diffusion and integration across society rather than where the technology originated
Smaller nations should avoid cookie-cutter regulatory approaches and develop frameworks suited to their local contexts rather than copying larger regions
Explanation
Unexpectedly, both large tech company representatives and startup leaders agreed on the need for more flexible, innovation-friendly regulatory approaches, suggesting industry-wide concern about overregulation hindering AI development
Topics
Legal and regulatory | Economic
Academic and industry alignment on alternative AI technologies
Speakers
– Ole Christopher Granmo
– Daniel Dykes
– Noel Hurley
– Rishad A. Shafik
– John M Lervik
Arguments
Current AI technology like ChatGPT is extremely energy hungry, with one query consuming the same energy as lighting a bulb for 20 minutes
The Tsetlin Machine offers an energy-efficient alternative to deep learning, using up to 10,000 times less electricity per inference while maintaining accuracy and explainability
Norway’s combination of 100% clean energy and cold climate creates unique advantages for energy-efficient AI development
Explanation
There was unexpected consensus between academic researchers promoting alternative AI technologies and industry practitioners on the urgent need for energy-efficient AI solutions, suggesting broader recognition of sustainability challenges
Topics
Infrastructure | Development
Overall assessment
Summary
The discussion revealed strong consensus on several key themes: the potential for small states and companies to compete effectively in AI through strategic focus and partnerships; the critical importance of energy-efficient AI development; the need for collaborative, multi-stakeholder approaches to AI governance; and the importance of pragmatic regulation that enables innovation while ensuring responsible development
Consensus level
High level of consensus with remarkable alignment between different stakeholder groups (government, industry, academia, civil society) on fundamental principles. This suggests a mature understanding of AI challenges and opportunities across the community, with implications for more coordinated and effective AI governance and development strategies globally
Differences
Different viewpoints
Approach to AI regulation – value creation first vs. ethics first
Speakers
– John M Lervik
– Jeff Bullwinkel
Arguments
AI regulation should focus on creating value first rather than starting with privacy and ethical constraints, as value creation enables proper governance
Large tech companies must build sovereign controls, resist government orders to suspend services, and maintain data privacy and security
Summary
Lervik argues Europe has approached AI regulation backwards by prioritizing privacy and ethics before establishing value creation, suggesting value should come first then guardrails. Bullwinkel emphasizes Microsoft’s commitment to responsible AI principles from the start, including privacy, security, and ethical frameworks as foundational elements.
Topics
Legal and regulatory | Economic
Technology transparency and control philosophy
Speakers
– Ole Christopher Granmo
– Kojo Boake
Arguments
Understanding and controlling AI technology is essential – if we don’t understand AI, then AI controls us rather than serving as our tool
Open source models like Llama enable smaller players to fine-tune AI for local purposes while reducing compute costs and increasing transparency
Summary
Granmo advocates for complete understanding and control of AI systems, warning against black box technologies. Boake promotes open source as sufficient transparency, arguing that access to model weights and fine-tuning capabilities provide adequate transparency without requiring complete understanding of internal mechanisms.
Topics
Human rights | Legal and regulatory | Infrastructure
Unexpected differences
Role of competition with large tech companies
Speakers
– John M Lervik
– Kojo Boake
Arguments
Small players should focus on particular problems and ensure they’re sufficiently big that large companies also care about them to create competitive tension
Multi-stakeholder collaboration involving big players, medium companies, small regional operators, and academics is essential for effective AI governance
Explanation
Unexpectedly, Lervik advocates for creating competitive tension with large tech companies as a strategy for small players, while Boake emphasizes collaboration and partnership. This disagreement is surprising given that both represent the startup/platform ecosystem and might be expected to have similar views on industry dynamics.
Topics
Economic | Development
Sufficiency of current AI transparency approaches
Speakers
– Ole Christopher Granmo
– Jeff Bullwinkel
Arguments
Current AI systems are black boxes that we don’t fully understand, creating risks when deployed in critical areas like criminal justice and healthcare
Large platforms should provide broad access, fair treatment, and interoperable open standards while maintaining responsibility
Explanation
Granmo fundamentally rejects current AI approaches as insufficiently transparent and dangerous, while Bullwinkel suggests that responsible AI principles and governance frameworks are adequate. This disagreement is unexpected given both speakers’ technical backgrounds and shared concern for AI safety.
Topics
Human rights | Legal and regulatory
Overall assessment
Summary
The main areas of disagreement center on regulatory philosophy (value-first vs. ethics-first), the level of transparency and control required for AI systems, and whether small players should compete with or collaborate with large tech companies. Most speakers agreed on the potential for small states and companies to succeed in AI through specialized approaches.
Disagreement level
The level of disagreement was moderate but philosophically significant. While speakers largely agreed on goals (enabling small players in AI, ensuring responsible development), they had fundamental differences on approaches and priorities. These disagreements reflect deeper tensions in the AI ecosystem between different models of development, governance, and industry structure that could significantly impact how AI develops globally.
Partial agreements
Similar viewpoints
Both speakers expressed concerns about the risks and inequities of current AI systems, emphasizing the need for more transparent and inclusive approaches to AI development
Speakers
– Ole Christopher Granmo
– Chinasa T. Okolo
Arguments
Current AI systems are black boxes that we don’t fully understand, creating risks when deployed in critical areas like criminal justice and healthcare
The global AI divide shows disproportionate impacts on regions like Africa, Asia, and Latin America despite these areas breeding innovation through marginalization
Topics
Human rights | Legal and regulatory
Both speakers advocated for specialized, domain-specific approaches to AI rather than trying to compete in general-purpose AI development
Speakers
– John M Lervik
– Chinasa T. Okolo
Arguments
Small companies can leverage unique data access in specific domains like industrial data to compete with giants who have more general consumer data
Smaller nations can lead by focusing on contextualized AI approaches rather than trying to build general AI models
Topics
Development | Economic
Both representatives from major tech companies emphasized their commitment to enabling smaller players through open access, fair treatment, and transparent approaches
Speakers
– Jeff Bullwinkel
– Kojo Boake
Arguments
Large platforms should provide broad access, fair treatment, and interoperable open standards while maintaining responsibility
Open source models like Llama enable smaller players to fine-tune AI for local purposes while reducing compute costs and increasing transparency
Topics
Infrastructure | Development
Takeaways
Key takeaways
Small states and startups can compete in AI by focusing on specific domains where they have unique data advantages and deep expertise, rather than trying to match the scale of tech giants
Energy-efficient AI alternatives like the Tsetlin Machine offer opportunities for smaller players to develop sovereign AI capabilities using significantly less computational resources
Success in AI adoption depends more on widespread integration and diffusion across society than on where the technology was originally invented
Open source AI models enable smaller players to fine-tune solutions for local contexts while reducing costs and increasing transparency
AI governance frameworks should prioritize value creation first, then add appropriate guardrails, rather than starting with restrictive regulations that may hinder adoption
Multi-stakeholder collaboration involving companies of all sizes, governments, academia, and civil society is essential for inclusive AI development
Data sovereignty, contextual innovation, and peer-to-peer collaboration can help smaller countries maintain independence from big tech dominance
Small nations can lead in AI governance innovation by developing frameworks suited to their local contexts rather than copying larger regional approaches
Trust and transparency in AI systems are critical – current black box models create risks when deployed in sensitive areas like justice and healthcare
Resolutions and action items
Norway committed to implementing the EU's AI Act with a national supervisory authority and the AI Norway initiative, including regulatory sandboxes
Norway allocated 1.3 billion Norwegian kroner to AI research through six newly selected research centers starting operations in the summer
Meta invited governments and organizations to collaborate on using Llama models for national problem-solving
Microsoft announced European digital commitments including sovereign cloud services and cybersecurity programs
Rwanda committed to continuing partnerships with academia and other countries to develop AI talent and innovation ecosystem
Participants encouraged to apply for Meta’s Llama Impact Accelerator Program for mentorship and skills development
Unresolved issues
How to balance AI regulation that ensures safety without hindering innovation and adoption, particularly for smaller countries
The challenge of developing truly explainable AI systems that can be understood and controlled rather than operating as black boxes
How to address the massive energy consumption of current AI systems and scale energy-efficient alternatives
The question of whether smaller countries should focus primarily on leveraging existing AI platforms or invest in developing sovereign AI capabilities
How to ensure equitable global AI development when 80% of VC funding goes to just the US and China
The tension between open source AI benefits and potential security/misuse risks
How to develop AI governance frameworks that are contextually appropriate rather than one-size-fits-all approaches
Suggested compromises
Small countries should both leverage existing AI platforms from tech giants AND develop their own sovereign capabilities in areas of competitive advantage
AI regulation should focus on risk-based frameworks that create appropriate guardrails while allowing pragmatic adoption and innovation
Large tech companies should provide open access and interoperability while maintaining responsibility for safety and security
AI development should combine the scale advantages of large companies with the agility and contextual knowledge of smaller players through strategic partnerships
Countries should collaborate through peer-to-peer networks and regional cooperation while maintaining data sovereignty and local control
Thought provoking comments
AI must not become a playground for the powerful, it must serve the public good. And small players are often well positioned to drive innovation with purpose.
Speaker
Karianne Tung (Norway’s Minister of Digitalization)
Reason
This comment reframes the entire AI discussion from a technical competition to a values-based imperative. It challenges the assumption that bigger is necessarily better and positions small actors as potentially more aligned with public interest rather than just market dominance.
Impact
This set the moral and strategic foundation for the entire session, establishing that the conversation wasn’t just about competing with tech giants, but about creating AI that serves broader societal needs. It influenced subsequent speakers to focus on purpose-driven innovation rather than just scale.
Today we don’t fully understand the AI. If we don’t understand the AI, the AI controls us. We have to turn it around. We have to fully understand the AI so they become a tool for us so that we are in control.
Speaker
Ole Christopher Granmo
Reason
This comment cuts to the heart of a fundamental paradox in AI development – we’re deploying technology we don’t fully comprehend. It challenges the entire premise of black-box AI systems and introduces the concept of explainable AI as not just desirable but essential for human agency.
Impact
This comment introduced a critical tension into the discussion about transparency versus performance. It shifted the conversation from ‘how can we compete’ to ‘how can we maintain control,’ adding a philosophical dimension that influenced other speakers to address the transparency and governance aspects of AI development.
We are starting with the cart in front of the horse in many ways. We started to talk about ethical use and privacy and stuff like that… We need to start with understanding how do we create value from AI?
Speaker
John M Lervik
Reason
This comment challenges the European approach to AI regulation and suggests a fundamental reordering of priorities. It’s provocative because it suggests that focusing on ethics first might actually hinder innovation and value creation.
Impact
This sparked a nuanced discussion about the balance between regulation and innovation. It led other speakers, particularly Jeff Bullwinkel and Kojo Boake, to address the ‘overregulation’ concern and discuss how to create frameworks that enable rather than constrain AI development.
Just as we shouldn’t rely on these big tech companies to be the standard of AI development, we also should not rely on these bigger regional blocks or countries to also be the model for AI governance as well.
Speaker
Chinasa T. Okolo
Reason
This comment extends the sovereignty argument beyond technology to governance itself, suggesting that smaller nations shouldn’t just copy existing regulatory frameworks but should innovate in governance approaches tailored to their contexts and values.
Impact
This deepened the discussion beyond technical capabilities to include governance innovation as a competitive advantage. It reinforced the theme that small players can lead rather than just follow, and influenced the conversation toward more nuanced approaches to AI policy.
I want to give a shout out to those small players that aren’t interested in [competing with big tech]. They’re actually interested in resolving, making viable businesses or resolving local issues and contextual issues that may never interest Meta, Microsoft, ChatGPT… but are extremely interesting to their locality or their nation.
Speaker
Kojo Boake
Reason
This comment challenges the assumption that all innovation should aim to compete with or attract big tech. It validates local, contextual solutions as valuable in their own right, not just as stepping stones to global scale.
Impact
This comment broadened the definition of success in AI development and validated different paths to innovation. It helped shift the conversation from a binary view of ‘compete or collaborate with big tech’ to recognizing multiple valid approaches to AI development.
One query with ChatGPT… is the same amount of energy as it takes to light one light bulb for 20 minutes. Furthermore, every month, ChatGPT produces more than 260,000 kilograms of CO2… equal to the emission of 260 flights from New York to London.
Speaker
Ole Christopher Granmo
Reason
This comment provides concrete, relatable metrics that make the abstract concept of AI’s environmental impact tangible and shocking. It reframes AI development as an environmental justice issue, not just a technological one.
Impact
This introduced environmental sustainability as a critical factor in AI development strategy, influencing other speakers to address energy efficiency and green computing as competitive advantages for smaller nations with renewable energy resources.
Overall assessment
These key comments fundamentally shaped the discussion by challenging conventional assumptions about AI development and competition. Rather than accepting that small players must simply adapt to big tech’s paradigms, the speakers collectively built a case for alternative approaches based on values, sustainability, explainability, and local context. The comments created a progression from identifying the problem (AI as playground for the powerful) to proposing solutions (energy-efficient models, contextual innovation, governance innovation) to validating different definitions of success (local solutions vs. global competition). This transformed what could have been a defensive conversation about ‘how to survive’ into an empowering discussion about ‘how to lead’ in AI development.
Follow-up questions
How can Norway and other small countries build sufficient compute infrastructure and access to GPUs needed for training AI models?
Speaker
John M Lervik
Explanation
Lervik identified compute access as a critical need beyond having unique data and competence, noting that while Norway has advantages in industrial data and clean energy, more computing infrastructure is needed to compete globally in AI development.
How can smaller nations develop AI governance frameworks that avoid over-regulation while still ensuring ethical AI development?
Speaker
Kojo Boake and Jeff Bullwinkel
Explanation
Both speakers highlighted the challenge of creating regulatory frameworks that don’t hinder AI adoption and innovation, with Boake specifically mentioning how European over-regulation has caused delays in product launches.
What specific mechanisms can enable effective peer-to-peer collaboration between smaller countries in AI development?
Speaker
Chinasa T. Okolo
Explanation
Okolo mentioned peer-to-peer collaboration as essential for bypassing traditional power hierarchies, but the specific implementation mechanisms for such collaboration networks need further exploration.
How can the Tsetlin Machine hardware development at Newcastle University be scaled to create a viable alternative to NVIDIA's deep learning-optimized hardware?
Speaker
Ole Christopher Granmo
Explanation
Granmo identified the need to build alternative hardware from the ground up to support energy-efficient AI, but the path to scaling this pioneering work into a commercial alternative requires further research.
How can smaller countries effectively balance leveraging existing big tech platforms while developing their own sovereign AI capabilities?
Speaker
John M Lervik
Explanation
Lervik emphasized that Norway cannot just ‘sit on the shoulders’ of Microsoft and Meta but needs to develop its own IP and value creation, raising questions about the optimal strategy for this balance.
What are the specific socio-technical research needs for understanding AI bias in non-Western contexts, particularly around caste, tribal affiliation, and other local social identities?
Speaker
Chinasa T. Okolo
Explanation
Okolo highlighted that current AI fairness literature focuses on Western concepts like race, but more research is needed on how AI models can discriminate based on social identities relevant to global majority countries.
How can the regulatory sandbox model be optimized to support SMEs and startups in AI development across different national contexts?
Speaker
Karianne Tung and Esther Kunda
Explanation
Both speakers mentioned regulatory sandboxes as important tools, but questions remain about best practices for implementation and how to make them most effective for small players.
What are the practical steps for implementing data sovereignty while maintaining international collaboration in AI development?
Speaker
Multiple speakers including Jeff Bullwinkel and Esther Kunda
Explanation
While data sovereignty was identified as crucial, the specific mechanisms for achieving it while still enabling beneficial international partnerships and data sharing need further exploration.
Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.