High Level Session 3: AI & the Future of Work

25 Jun 2025 09:30h - 11:00h


Session at a glance

Summary

This discussion from the 20th Annual Internet Governance Forum focused on AI and the future of work, examining how artificial intelligence is reshaping employment, education, and society. The session brought together government officials, tech industry representatives, and policy experts to explore both opportunities and challenges presented by AI adoption across different sectors.


Several speakers highlighted AI’s transformative potential in healthcare, agriculture, and public services. Examples included AI-powered tuberculosis detection in Lesotho’s rural areas, agricultural chatbots helping farmers in local languages, and AI tools reducing administrative burdens for healthcare workers. Norwegian officials emphasized AI’s role in maintaining welfare states amid aging populations, while acknowledging the need for international cooperation on regulation.


A significant tension emerged around data ownership and worker compensation. Actor and entrepreneur Joseph Gordon-Levitt argued that tech companies are generating enormous value from human-created data without consent or compensation, advocating for workers to have economic stakes in their digital contributions. This contrasted with industry representatives who emphasized AI’s role as a productivity enhancer rather than job replacer.


The discussion revealed concerns about digital divides and inequality. Speakers stressed the importance of ensuring equitable access to AI tools, particularly for women, rural populations, and developing countries. Education emerged as a critical battleground, with debates over whether AI should primarily serve as a learning aid or risk creating dependency that undermines critical thinking skills.


Regulatory approaches varied significantly, with the US advocating for minimal restrictions to foster innovation, while European representatives emphasized the need for trust-building frameworks. The session concluded with calls for public-private partnerships to ensure AI development serves broader societal interests rather than solely corporate profits.


Key points

## Major Discussion Points:


– **AI’s Current Impact and Future Potential in Various Sectors**: Speakers highlighted how AI is already transforming healthcare (TB detection in Lesotho, medical diagnostics), agriculture (crop disease identification), education (personalized learning), and creative industries, while emphasizing this is just the beginning of a broader transformation.


– **Balancing Innovation with Worker Rights and Fair Compensation**: A central tension emerged around ensuring AI development doesn’t exploit workers’ data and creative output without consent or compensation, with particular focus on how training data is sourced and whether creators should be paid for their contributions to AI systems.


– **Equity and Access Concerns**: Multiple speakers emphasized the risk of AI widening existing inequalities, particularly around digital divides, gender disparities in AI adoption, and ensuring that benefits reach developing countries and rural communities rather than concentrating in wealthy tech hubs.


– **Education System Transformation and Skills Development**: Discussion centered on how education must evolve to prepare workers for an AI-driven economy, including the importance of maintaining critical thinking skills, avoiding over-dependence on AI tools, and ensuring both AI literacy and fundamental human capabilities.


– **Regulatory Approaches and International Cooperation**: Speakers debated different philosophical approaches to AI governance, from the US emphasis on innovation-friendly policies to European focus on risk-based regulation, with consensus that international cooperation is essential for effective AI governance.


## Overall Purpose:


The discussion aimed to explore how artificial intelligence can be harnessed to support decent work and fair economic transitions while addressing the challenges of job displacement, inequality, and the need for new educational and policy frameworks. The session sought to bring together diverse perspectives from government officials, tech companies, civil society, and creative professionals to identify strategies for ensuring AI benefits are broadly shared rather than concentrated among a few actors.


## Overall Tone:


The discussion maintained a cautiously optimistic tone throughout, with speakers acknowledging both the tremendous potential and serious risks of AI. While there were moments of tension—particularly around issues of data rights and regulatory approaches—the conversation remained constructive and collaborative. The tone became more urgent toward the end as speakers emphasized the need for immediate action to shape AI’s development responsibly, but concluded on a hopeful note with the moderator drawing parallels to successfully navigating previous technological transitions like the early internet.


Speakers

– **Jonathan Charles** – Moderator, strategic communications advisor to presidents, governments, and multinational companies; former executive committee member of the European Bank for Reconstruction and Development; former BBC News foreign correspondent and anchor


– **Tomas Norvoll** – State Secretary at Norway’s Ministry of Trade, Industry and Fisheries


– **Junha Li** – UN Under-Secretary General for Economic and Social Affairs


– **Sandro Gianella** – Representative from OpenAI


– **Chris Yiu** – Director of Public Policy for Northern Europe at Meta


– **Nthati Moorosi** – Minister of Communications, Science and Technology from Lesotho


– **Joseph Gordon-Levitt** – Actor, producer and founder of HitRecord


– **Jennifer Bacchus** – Acting Head of Bureau for the Bureau of Cyberspace and Digital Policy at the US State Department


– **Ishita Barua** – Author and Chief Health AI Officer, with a PhD in AI in Medicine


– **Juha Heikkila** – Advisor on AI at DG Connect at the European Commission


Additional speakers:


None – all speakers mentioned in the transcript are included in the speaker list above.


Full session report

# AI and the Future of Work: Comprehensive Discussion Report


## 20th Annual Internet Governance Forum


### Executive Summary


This comprehensive discussion from the 20th Annual Internet Governance Forum brought together a diverse panel of government officials, technology industry representatives, policy experts, and civil society voices to examine how artificial intelligence is reshaping employment, education, and societal structures. The session explored both the transformative potential and significant challenges presented by AI adoption across different sectors.


The discussion featured opening remarks from key stakeholders followed by a moderated panel discussion. Several key themes emerged, including AI’s current transformative impact across healthcare, agriculture, and public services; the critical need for equitable access to prevent widening inequalities; debates over data ownership and compensation; and the importance of international cooperation in developing governance frameworks that balance innovation with worker protection.


### Opening Remarks: Setting the Stage


#### AI as Present Reality, Not Future Concern


Tomas Norvoll, State Secretary at Norway’s Ministry of Trade, Industry and Fisheries, opened by challenging the premise that AI represents a future concern, asserting that “AI is just as much at work today as it is about the future, because AI is already here.” He emphasized that AI is already embedded in daily tools and transforming sectors including energy, healthcare, and agriculture, drawing a historical parallel: “When the Sumerians invented the wheel, surely there was someone who worried that it could have negative consequences for those who were used to carrying things on their back.”


Norvoll announced Norway’s significant investment: “Norway committed to investing 1 billion kroner to establish six national AI research centres to study societal impacts and strengthen innovation.” He also warned of the “risk of widening digital divides if access to AI tools and training is not equitable across populations.”


#### UN Perspective on Global Cooperation


Junha Li, UN Under-Secretary General for Economic and Social Affairs, framed AI as “a social and political revolution, demanding collective leadership.” He emphasized the global dimensions of AI governance, stressing that international cooperation is essential to ensure AI bridges, rather than deepens, global divides between developed and developing nations. Li also highlighted the importance of the WSIS plus 20 review in addressing these challenges.


#### Industry Perspectives on AI Development


Sandro Gianella from OpenAI provided concrete examples of AI applications, noting collaborations with Moderna and Sanofi in pharmaceuticals, work with the Estonian government on education, and the Amazon GPT conservation project in Brazil. He highlighted accessibility initiatives, including the 1-800-ChatGPT landline service, and noted that the OpenAI Academy has trained 1.4 million people since its launch. Gianella argued that AI offers transformative potential in pharma, scientific research, education, and climate work, with tools now accessible to small businesses and individuals.


Chris Yiu, Director of Public Policy for Northern Europe at Meta, provided educational context by distinguishing between artificial intelligence, generative AI, and artificial general intelligence. He emphasized Meta’s open-source approach, noting that its Llama family of “open weights” models has been “downloaded more than a billion times.” Yiu argued that open-source AI development democratises access and ensures the technology isn’t controlled by a few large corporations.


### Panel Discussion: Key Themes and Debates


#### The Data Ownership and Compensation Debate


One of the most significant tensions emerged around data ownership and fair compensation for human contributions to AI systems. Joseph Gordon-Levitt, actor, producer, and founder of HitRecord, provided pointed criticism of current practices, arguing that “AI companies take people’s creative work without permission or compensation to train valuable models, threatening economic incentives for creativity.”


Gordon-Levitt fundamentally reframed the discussion by challenging AI value creation narratives: “The sleight of hand that’s going on in that statement, though, is the idea that the AI is generating all this economic value, when in fact there is no economic value without all the human contributions that were hoovered up into these machine learning models… your digital self, and in the context of this panel, your digital work belongs to you.”


He also referenced recent policy developments: “There was a report that was put out by the Copyright Office in my country that says that in the opinion of the Copyright Office, it’s probably illegal most of the use cases of this training data being used without consent and compensation. The very next day after that report was put out, the head of the Copyright Office was fired.”


This critique created notable tension with industry representatives, who focused primarily on AI’s role as a productivity enhancer without directly addressing the compensation concerns raised by Gordon-Levitt.


#### Regulatory Approaches: US vs EU Perspectives


The discussion revealed significant philosophical differences in regulatory approaches. Jennifer Bacchus, Acting Head of Bureau for the Bureau of Cyberspace and Digital Policy at the US State Department, articulated a strongly pro-innovation stance: “US opposes excessive regulation that could strangle AI innovation and will block authoritarian misuse whilst promoting pro-innovation policies.”


In contrast, Juha Heikkila, Advisor on AI at DG Connect at the European Commission, advocated for a more structured approach: “EU supports innovation-friendly, risk-based regulation that only intervenes where necessary to build trust for AI adoption.” He also described the “AI Continent Action Plan focusing on skills as one of five pillars, utilising digital innovation hubs.”


Despite these differences, there was consensus on the need for international cooperation, with Norvoll emphasizing that “individual nations cannot address AI challenges alone.”


#### Developing Country Perspectives and Applications


Nthati Moorosi, Minister of Communications, Science and Technology from Lesotho, provided concrete examples of AI applications addressing local challenges. She highlighted the “LAWA/LAVA app for farmers” and described “AI-powered chatbot development with ITU and FAO” as well as “e-government services chatbot” initiatives.


Moorosi emphasized that “AI is helping tackle healthcare challenges like TB detection and agricultural issues through locally developed applications,” demonstrating how AI can address specific local needs when properly implemented. However, she also noted that the “digital divide excludes rural populations from AI innovations, and privacy concerns require robust data protection.”


She advocated for “human-centred AI policies that align with cultural values and don’t leave teachers and workers behind,” highlighting the importance of culturally appropriate AI governance approaches.


#### Healthcare Applications and Equity Concerns


Ishita Barua, Chief Health AI Officer, provided comprehensive insights into AI’s role in healthcare, describing how “AI is restoring healthcare by addressing care debt through scribes, diagnostics, and patient communication tools.” She highlighted AI’s potential to address systemic healthcare challenges while emphasizing the importance of maintaining human expertise.


However, Barua also raised critical equity concerns, warning of the “risk of hard-coding existing inequalities if AI tools are only deployed in wealthy settings with narrow datasets.” She provided important gender analysis, citing Nordic studies showing that “women are adopting tools like ChatGPT more slowly than men, not because of a lack of competence, but due to differences in digital confidence.”


Barua further noted that “women in high-tech industries, they are more susceptible to AI-driven change… three times more exposed to automation risks,” adding a crucial dimension to discussions about AI’s differential impacts.


#### Education and Cognitive Concerns


The discussion of education revealed complex tensions between embracing AI tools and preserving essential human capabilities. Barua introduced philosophical concerns about cognitive outsourcing, stating: “I write to think. I don’t truly understand something until I’ve worked through it on page, failed and revised it, and clarified it… scribo, ergo cogito, ergo sum. I write, therefore I think, and therefore I am.”


She argued that “quality education must value domain expertise and critical thinking, not just tool adoption, to prevent cognitive outsourcing.” This perspective elevated the discussion beyond practical tool access to fundamental questions about human intellectual development.


Heikkila reinforced this point, warning of de-skilling: “increased reliance on AI may cause people to lose essential capabilities.”


In contrast, other speakers emphasized AI’s educational potential. Moorosi highlighted how “AI can personalise learning for students in different environments and languages, supporting overworked teachers and under-resourced students.” Yiu similarly argued that “AI can level educational playing field by providing personalised learning tools and reducing administrative burden for educators.”


#### Future of Work and Economic Transformation


The discussion of AI’s employment impact revealed nuanced perspectives beyond simple job displacement narratives. Gianella emphasized that “AI primarily provides task-level automation rather than job-level replacement, enhancing productivity whilst creating new roles.” Bacchus reinforced this view: “AI should boost worker productivity, improve job quality, and create new roles like AI trainers and human-machine teaming managers.”


Heikkila provided a more complex assessment: “Jobs will be replaced, changed, and created simultaneously, with routine tasks most at risk but new opportunities emerging.” This acknowledged both disruptive and creative aspects of AI’s employment impact.


Norvoll offered an interesting perspective on AI’s role in supporting welfare states: “AI can help governments maintain welfare states more efficiently, particularly in healthcare and education sectors facing demographic challenges.”


Gianella also noted that the “combination of work and learning is intertwined, with AI tools helping people learn new skills throughout their careers,” suggesting AI might facilitate more fluid career transitions and continuous skill development.


#### Corporate Responsibility and Market Limitations


Technology industry representatives faced challenges about corporate responsibility and market-driven solutions. Gordon-Levitt directly questioned structural limitations of relying on private companies for public good: “With respect, Meta cannot prioritise what’s good for the world. It’s not built to do that. It’s a for-profit company, and it has to prioritise value for its shareholders… This is a false dichotomy, this contrast to say that innovation is the opposite of rules.”


This exchange highlighted fundamental tensions between market-driven innovation and public interest considerations, with Gordon-Levitt arguing for “public-private partnership with rules” rather than relying solely on corporate self-regulation.


### Government Policy Initiatives


Government representatives outlined various policy responses aimed at harnessing AI benefits while addressing challenges. Bacchus highlighted US workforce development initiatives: “Investment in STEM education, AI scholarships, and large-scale workforce reskilling through apprenticeships and vocational programmes.”


The discussion revealed different national approaches to AI governance, from Norway’s research investment strategy to the EU’s comprehensive regulatory framework to the US emphasis on innovation-friendly policies.


### Unresolved Issues and Future Directions


Several significant tensions remain unresolved. The fundamental disagreement about data ownership and compensation for AI training presents ongoing challenges for both innovation incentives and creator rights. The regulatory divide between US and EU approaches creates potential challenges for international cooperation despite broad agreement on its necessity.


Questions about equitable access and preventing digital divides require further development of practical mechanisms for ensuring AI benefits reach developing countries, rural populations, and marginalized communities.


The balance between AI assistance and human skill preservation remains contentious, with legitimate concerns about de-skilling competing with enthusiasm for productivity gains.


### Conclusion


The discussion demonstrated both the complexity of AI governance challenges and the potential for constructive dialogue across stakeholder groups. While AI presents tremendous opportunities for addressing societal challenges and enhancing human capabilities, realizing these benefits requires deliberate policy intervention and international cooperation.


Key areas requiring ongoing attention include developing frameworks for data ownership and compensation, creating mechanisms for equitable AI access, balancing innovation with appropriate regulation, and ensuring AI development serves broader societal interests. The conversation highlighted both the urgency of these challenges and the potential for collaborative solutions that benefit all stakeholders.


Session transcript

Jonathan Charles: Good morning, ladies and gentlemen. Thank you for getting out of bed so early for this. Distinguished delegates, esteemed colleagues and guests joining us here in Lillestrom, and of course, online around the world. Welcome to this high-level leaders’ track session of the 20th Annual Internet Governance Forum. I’m Jonathan Charles. As the announcement says, I advise presidents, governments, and multinational companies on strategic communications. I’m a former executive committee member of the European Bank for Reconstruction and Development, and a former BBC News foreign correspondent and anchor. It’s my honour to moderate today’s vital conversation on AI and the future of work. We gather, of course, at a moment of rapid and relentless transformation. Artificial intelligence, robotics, machine vision technologies have long passed the point where they’re confined to the lab or the works of science fiction. They are in warehouses, factories, offices, hospitals, classrooms, reshaping industries, redefining job roles, and raising urgent questions about inclusion, security, equity, and human dignity in the world of work. This session will explore what this future means, not in the abstract, but in the real-world context, where policies are made, businesses adapt, and people’s lives are affected. We’ll hear from political leaders, the companies driving the technology, policymakers, and creative thinkers about how societies can harness AI to empower and not replace workers, and how we can design transitions that are fair, inclusive, and forward-looking. Over the next 90 minutes, we’re going to discuss strategies for preparing today’s workforce for tomorrow’s economy, policies that can safeguard rights while embracing innovation, and the kind of global collaboration that will be essential to shaping a shared digital future. If you are in any doubt about the urgency of this issue, then consider these real-life stories. 
I’m told that in some companies, and this is hard to believe, but obviously true, Gen Zers often fake looking busy all the time. So worried are they that their lower-level jobs will be replaced by AI. One large investment bank CEO recently told me he’s worried about how his younger staff will progress if they can’t build their professional expertise and judgment when AI is taking over all or part of their roles. He worries, in other words, about the talent of the future. We’re going to be hearing from our impressive panel a little later on. I’ll be introducing them in a few minutes. But to open our session, I’m honored to first invite a national leader from our host country. Please join me in welcoming Tomas Norvoll, State Secretary at Norway’s Trade, Industry and Fishery Ministry, for his opening remarks. Tomas.


Tomas Norvoll: Thank you, Jonathan, and good morning, everyone. I’m really glad to be here at Lillestrøm with you for the IGF conference. This is certainly the place to be this week. There are so many interesting, important and necessary discussions taking place here. But I think few of them have as many consequences as our topic today, AI and the future of work. I’m going to start by disagreeing with the premise somewhat. AI is just as much at work today as it is about the future, because AI is already here. It is deeply embedded in tools we use every day. Here in Norway, we have companies using AI to accelerate the green transition. They are optimizing wind and hydropower, predicting energy demand and creating smarter, more sustainable shipping. And when I meet with businesses and workers, my impression is that most think that AI is not a threat, but a wave of opportunity. At the same time, it is important to see AI as more than just another tool. It is a platform for transformation, one that will impact virtually every sector of our economy and every part of our society. And like all major technological shifts, like electricity, like the Internet, like the mobile phone, it brings opportunities, but also disruption. This is why we recently decided to invest a billion kroner to establish six national research centers on artificial intelligence. The centers will conduct research on how AI affects society, they will study the development of new technology, and they will make suggestions for how we can strengthen innovation and value creation, both in business and in the public sector. One can easily imagine that as long as there has been work at all, there has been a debate about the future of work. And my guess is, there have always been optimists, and always those who fear for their jobs and their way of life.
When the Sumerians invented the wheel, surely there was someone who worried that it could have negative consequences for those who were used to carrying things on their back. Professor Judy Wajcman at LSE points out that there are, at any given point in time, several things happening at once. Some jobs are replaced, some jobs will change, and as always with new technology, new kinds of jobs are created. The point, I guess, is that change is rarely easy. That is why we have high expectations for our new research centers, because there are serious questions to ponder when we talk about AI and work. Questions about jobs, about ethics, about competitiveness and security, and not least about inclusion. Those who know how to work with AI will be in high demand, and those without access to tools or training risk being left behind. We need to make sure that we don’t widen the digital divides, and that workers are empowered, not marginalized. If we make the right decisions now, and remember to always put people at the center of AI policy, we can set the course for a future of work that is safer, greener, and more productive. The biggest risk is not acting in time. I look forward to the upcoming presentations and to the panel discussion, where I hope that we can really look into both the opportunities and the challenges that lie ahead. Thank you for the attention so far. Thank you all.


Jonathan Charles: Thank you. Thank you very much indeed, Tomas. Let’s go to our next set of opening remarks now, and I welcome the UN Under-Secretary General for Economic and Social Affairs, Mr. Junha Li.


Junha Li: Thank you. Good morning. Good to see you again in this plenary hall. Before our distinguished panelists start sharing their thoughts, perhaps I could just share a few words from the UN perspective. We meet at a very critical juncture. Artificial intelligence is no longer on the horizon. It is here, actively reshaping our economies, our futures, our societies. This transformation is evident in our daily work, from the data we analyze to the countries we serve. AI is changing how governments operate and deliver public services, while raising new questions of capacity, ethics, and equity, especially for developing countries. This revolution extends beyond job displacement. It fundamentally alters how value is created, who benefits, and who risks being left behind. AI is entering every sector once thought safe from automation, including health, education, logistics, law, and finance, performing tasks that require judgment, coordination, even creativity. The potential is immense, and so are the risks. AI can help us address urgent skill shortages, make work safer, smarter and more productive, and unlock innovation in every corner of the world. However, the risks are equally significant: widespread job displacement, obsolete skills, widening inequalities, and many more. We need to build an inclusive ecosystem between workers, companies and nations. That is why we must act collectively. Technology itself does not determine our future; our policies and choices do. We need to build an inclusive ecosystem, from education and training to infrastructure and governance. We must ensure a fair labor transition and modernize our social protection systems. The core principle must be that AI serves the people, not the other way around. To achieve this, we must invest in digital literacy, especially for women, youth and workers in the informal economy, while promoting transparency, accountability and fairness in the workplace.
International cooperation is also very much essential, because no country can navigate this AI transformation alone. AI must be the tool to bridge the divide, rather than to deepen the divide. This is not just a technological revolution, it is a social and political revolution, demanding collective leadership. Let us use this discussion to share, reflect and build a common direction, grounded in dignity, equity and sustainability. Very much looking forward to hearing your input, which would contribute to the Norway IGF message to this year’s WSIS plus 20 review, which will be adopted by the General Assembly later this year. Thank you so much.


Sandro Gianella: Good morning, everyone. Thank you so much for the opportunity to speak and share a few words before we have a discussion on the panel today. To share our thoughts about AI and how we see this impacting our jobs, and as previous speakers said, not just about the future, but how it’s already doing that today. It’s especially meaningful to gather in Oslo, a city that embodies thoughtful governance, deep respect for social trust and a rich tradition of balancing innovation and inclusion. Norway’s values reflect the kind of AI future that we also want, one rooted in equity, openness and collective progress. The IGF is also one of the truly global spaces where technologists, policymakers, civil society activists come together, not just to talk about technology, but to really shape how it should serve people. It’s very aligned also with the mission of OpenAI, which is to ensure that AGI benefits everyone. We believe in the transformative potential of AI. It’s a general purpose technology that, not unlike the steam engine or electricity, can really enable and empower people and businesses whilst accelerating human progress. It goes without saying, though, so allow me for a second to touch on it, that all of this work and what we’ll talk about starts with a strong commitment to safety. Our team at OpenAI has been working hard at developing industry-leading safety infrastructure, starting with our preparedness framework, which guides how we anticipate, monitor and mitigate the risks from these models. We also were the first to publish detailed system cards, which offer transparency into how these models behave and where their limitations lie. AI does have vast potential to revitalize our economies, to improve education, to increase our collective capabilities and to help us live longer and healthier lives. But unlike previous technologies, this one is unfolding much, much faster. But its shape is not predetermined. 
That’s where we, as policymakers, as technologists and stakeholders, come in to help shape that future into one where everyone can benefit. As said, today AI is already contributing to economic growth. Forecasts differ between 0.2 and 2 percentage points of GDP per year already today. But beneath these sort of macro figures, we already see the transformative potential of AI in some of society’s most important sectors today. In pharma, we have a collaboration with Moderna and Sanofi here in Europe, where our technology is helping to accelerate vaccine development. In scientific research, our models are being used by leading European laboratories at Sinospoor or at Max Planck University. In education, we’re happy to work with the Estonian government, who are thinking through how to bring AI to schools and students in a responsible way. And as for climate and the environment, we’ve worked with the Federal University of the Amazon in Brazil to launch Amazon GPT, where our tools are helping universities’ computer science department to generate conservation and health insights to help preserve the largest rainforest in the world. But, and I think this is one that’s especially meaningful for me and for us at OpenAI, you don’t need to be a leading pharma company or a business school to access and benefit from these tools today. These tools were used to be once only accessible, as previous speakers said, to R&D labs are now being used by small businesses, by startups, by individuals, by NGOs all across the world. Entrepreneurs can use AI to analyze markets, generate product ideas, brainstorm, build prototypes and all of that without really needing this full-stack team to just get started behind them. One practical example of our effort to really improve access to this technology is an integration we have with our friends from WhatsApp, a platform that’s already embedded in the daily life across many people in the globe. 
This is part of our effort to make sure that even in places where there isn't high-speed internet or access to advanced hardware, people find an easy way to interact with these tools and to get the most out of them. With such potential and opportunities in sight, we do want to work with policymakers to ensure that AI's benefits are shared responsibly and equitably across society. The AI era is an unmissable opportunity to drive growth, and I think successful nations will turn these resources into competitive advantages. In this AI age, or the age of intelligence, as our CEO Sam Altman likes to say, the resources are compute, data, energy, and talent. And obviously for the discussion today, we'll have a big focus on talent, which I think is right and calls for an important discussion. Even with the best hardware and data, AI leadership ultimately depends on people: on researchers, on engineers, and on all of us as informed users. On how to future-proof the workforce, I know we'll touch more on it in the panel, but I wanted to offer a couple of ideas or things that we see governments and societies do that we think are worth exploring. The first one is an obvious one: broad and equitable access, no one left behind. We really need to make sure that there isn't a widening gap between people who have access to this technology and those who don't. So from primary school onwards, I think we're inspired by the focus and effort that the Estonian government is putting on it, but we're seeing lots of governments across the world making sure that their citizens have access to this tech. Governments also must significantly expand investment in STEM education and specialized AI scholarships on the research side. Equally crucial, of course, is large-scale workforce reskilling through apprenticeships and vocational programs, really enabling current workers to adapt their skills for AI-driven and evolving roles.
But even simpler than all of that, I think there's a lot of content and there are many ways to learn about this technology already out there, whether you're using the tools themselves to learn about the tools or you look at something that we launched to play our part, called the OpenAI Academy. It's part of our commitment to invest in people, not just in technology. It equips policymakers, civil society leaders, and future practitioners with the knowledge to understand, to govern, and to apply AI responsibly, turning transparency into shared capacity and expanding AI fluency beyond just a few global capitals. We launched this platform at the beginning of the year and have already trained 1.4 million people, and we're looking forward to your feedback and ideas on how we can improve. As previous speakers said, I think history shows us that as technology shifts, so will the way that we work. It's always been true and it will continue to be true. Take radiology. AI tools now help detect cancer a lot earlier, but we don't need fewer radiologists. We need tools to help them see more patients and see them faster, and we need tools that reduce waiting lists and get people into treatment faster, expanding output. So today, AI is primarily a productivity enhancer and a tool that we as humans can leverage, and it offers task-level automation, not job-level automation. At Color Health, for example, our models help clinicians reduce time spent on paperwork, which allows them to spend more time focused on care. They are enabling doctors to spend less time on routine, time-consuming tasks and more time interacting with patients. AI is also providing tools to scale the impact of workers in countries like India or Kenya. One example that we really like is a company called Digital Green, which is delivering multimodal, multilingual, and tailored chatbots for farmers to interact with each other and to get live information.
The interesting thing is that if you think about these agricultural extension services, by delivering them with low latency through our tools, they have reduced the price by 100x. We've moved from $35 per farmer to $0.35 per farmer to get the same information. Technological shifts will, of course, also result in the evolution of existing roles or the creation of entirely new ones. If we tried to explain to our grandparents or even our great-grandparents what our jobs are like today, I think they would have really struggled to understand what it is that we do, and what we define as work and as jobs. Our work has changed and evolved over time. Much of that is due to new technologies creating new jobs and changing the nature of our work. AI to us is just the latest stage in that constant process of evolution and change, and that is why the future-proofing efforts around skills, literacy, and education I explained earlier are so vital. To conclude, AI's impact, as we've heard, is not just about growth. It's about the kind of growth that we choose to pursue as a global community, and how we manage its transformative impact on society. We can choose to enable a more productive, a more inclusive, and a more innovative future, but we must, of course, get the balance right on safety, on access, and on innovation. Most importantly, which is why it's so good to have these debates at a forum like the IGF, we all must work together. There's no single actor, no single company, no single government even, that can do this alone. So, with that, I'd like to thank you for your attention, and I look forward to the discussion later on.


Jonathan Charles: Sandro Gianella, thank you very much. Some fascinating examples there of what AI can achieve. I'm sure we'll hear some more examples now from our next speaker. I'd like to invite Chris Yiu to come up, Director of Public Policy for Northern Europe at Meta. Chris.


Chris Yiu: Good morning, everyone. Real pleasure to be here with you today to talk about such an important topic around AI, the technology, and where it's going. We've heard some really inspiring examples already this morning, and also a good outline of the topics that we want to cover. What I thought I would do is try to ground the conversation a little bit in a few things. First, to speak a bit about the terms and what we mean when we say artificial general intelligence, for example. Then to talk a bit, in practical terms, about what this technology can do. Then really focus on Meta's commitment to open source and to making sure that the AI technologies we are building are as available and accessible as possible to people all around the world. Now, at Meta, we know that people use our technologies to connect with the things that matter the most to them: friends, family, pursuing their interests, finding new experiences. Meta has been a pioneer in AI for more than a decade. It's the technology we've historically used to help people find relevant content on our platforms. It's also the technology that we use to find and deal with harmful content on our platforms, which is incredibly important to us. Now, all of you know that the recent advances in AI have captured the public's imagination. We've seen the potential for huge advances in people's ability to be productive, to be creative, and also, I think, to drive important societal progress, particularly in fields like medical and scientific research, and I am very optimistic about the contributions it will make to all of us and the world ahead. We've also heard, I think, that AI has the potential to be a leveling technology. One thing that we see is that for small businesses using our platforms, our AI tools help those small businesses and individual entrepreneurs to compete with larger businesses and bring their products to market in ways that weren't possible previously.
I think we will see more empowerment of small businesses, entrepreneurs, and individuals from this technology as things move forward. But just to ground us all, one of the first questions that people often have about AI is: what exactly are we talking about? So here's how I like to think about it. When we talk about artificial intelligence, we are thinking about systems and technologies which are able to do things that traditionally were the preserve of human beings. In our world, this would be things like image classification or speech recognition, where we've had a lot of recent advances. You then have generative AI technologies. These are systems which learn statistical representations of patterns and structures in data and can then use these to generate new content, to have a conversation, and so on and so forth. And then looking ahead, a lot of the debate, and I think this is very relevant to this session, is around artificial general intelligence, where we're talking about systems that are able to perform feats of cognition, reasoning, planning and perception which really take us to human capabilities and beyond. And I think as we look forward, that is where a lot of the important conversations with policymakers need to be located. When it comes to Meta's contribution, our flagship generative AI technologies are released as a family of foundation models that we call Llama. Meta has a rich and long history of contributions to the open source community, and I know that open source and open standards are something which has been an important topic of discussion in the IGF community and elsewhere over many, many years. Our Llama models continue this tradition of openness. These models are what we call open-weights models. This means that any developer can download and deploy the models for themselves in their own projects, with full customization and control of the systems that they build.
I will say a bit more about why open source is so important for this conversation in a moment. I won’t go through all the detail that’s up here, you can look it up after the session, but we have a range of models available in this family and the point of this is to make sure that there are different technologies available for people and communities wanting to use it for different purposes. We have small models which are optimized to run on devices so that if you have only access to low capability hardware or maybe you don’t even have an internet connection, you still have the ability to benefit from this technology. We have models which are able to reason over images as well as text and we have more advanced models which provide state of the art reasoning capabilities but still designed to run efficiently and quickly. And these models, just to give you a sense of the widespread use, have been downloaded more than a billion times. There are people all around the world in the open source community building and refining with these technologies. Just to help us then with the conversation about the future of work, some of the things which these open source models can help anybody to do with the hardware that they have access to. I won’t touch everything on here but just to pick out a few examples that really resonate with me. I think language translation using this technology is incredibly powerful in terms of bringing the world together and breaking down barriers and this really is an area where the ability to do this quickly and efficiently is tremendously powerful. You see the development now of assistants and agents that can help people manage schedules, manage tasks, manage reminders, help people to be more productive in whatever it is that they are choosing to pursue. 
You'll be very familiar with the large advances in coding and software development: these generative models can reason over very large code bases, and they can find and spot patterns, nuances and interrelationships that may elude human developers, but when software developers work with the technology as an empowering technology, they're able to do more than they could on their own. I think in the sphere of education and learning, which we'll touch on in the panel as well, there is tremendous potential to offer students far more tailored educational content and support, and personalized assessments, and really to break the trade-off that we've had in the past between trying to give everybody an education and ensuring that what everybody has access to is personalized to them and their particular situation and needs. This technology helps us along that journey and breaks some of those constraints. Just to talk a little bit more about open source and why this matters so much to us at Meta. Our CEO wrote about this a year ago and talked in depth about our commitment to this way of doing AI development, and there are really three legs to this. We think that open source is good for the software developers building the technology and trying to innovate. It's good for us. But most importantly, we think it is ultimately good for the world. In terms of the developer community, these open models, and the ability to download and then fine-tune and distill them for your own purposes, are tremendously powerful because they give you real control over making sure the model is customized to fit your particular needs. Wherever you are in the world, whatever sort of organization you are, whatever scale you are at, we think it's important that you have that ability to innovate on your own terms.
Also, in some circumstances, this matters for people who have particular requirements around how their data is used and processed: if you run the Llama models on your own infrastructure, the prompts that you put in and the outputs that come back out can remain entirely within your own purview. This is good for us because it helps us, as we release these models, to stay at the forefront of innovation. But most importantly, this is good for the world. We really believe that if we go down this open source route, we can do a few things. One, we can make sure that the widest possible number of people and communities around the world have access to this technology. Two, it helps to ensure that this powerful technology is not solely sitting in the hands of a few large corporations, but rather is something which belongs to the community. And we think that by doing this, the pace of AI adoption can proceed more evenly and more safely than it would otherwise. So I will wrap up there. There's a code here you can use to find more information on the models and experiment with them if you want to do that. But for now, I know we have an important conversation to get to. So thank you very much for your attention. And I look forward to the debates.


Jonathan Charles: Chris, thank you very much indeed. I enjoyed learning about the llama, so thank you. If you stay up here and take your place on the end here, I'll introduce and call up the other members of our distinguished panel this morning. First, let me welcome Nthati Moorosi, Minister of Communications, Science and Technology from Lesotho. Nthati, good morning. If you take the seat next to me just here. Thank you. And next, Tomas Norvoll, who we've already met this morning, State Secretary from the Norwegian government. And we've also met Sandro, who, if I could ask you to come up and take your seat, from OpenAI. And I'm delighted to welcome this morning the actor, producer and founder of HitRecord, Joseph Gordon-Levitt. And we move on to Jennifer Bacchus, Acting Head of Bureau for the Bureau of Cyberspace and Digital Policy at the US State Department. Next, Ishita Barua, author, Chief Health AI Officer, with a PhD in AI in medicine. And finally, to join our panel, Juha Heikkila, Advisor on AI at DG Connect at the European Commission. So without further ado, let's move on to our panel discussion. We have about an hour to discuss some of these pretty difficult and challenging issues, but ones that offer great hope for the future of our economy. So let's start and think about AI in the way it's transforming how we work, live and interact right now. I'd like to ask all of you, from each of your perspectives: what do you think are the most powerful shifts that you're seeing, not just in jobs, but across society? Where are the biggest new opportunities, the biggest risks? It's a lot to answer. I'd ask you to keep your answers fairly brief, because we've got a lot to get through over the next few minutes.
And Nthati, perhaps I could ask you, first of all, to respond to that.


Nthati Moorosi: Thank you, Programme Director, the moderator, and thank you for affording me this opportunity to talk a bit about what we are doing in Lesotho. Artificial intelligence, as everybody has said this morning, is a disruptive technology, but with a lot of good. It is transforming society and reshaping how public services are delivered. In Lesotho, we are experiencing the positives of AI. Although our digital development remains low compared to our peers in the region, we have made significant strides in certain areas, and I just want to talk about some of those. In healthcare, AI is helping us tackle high tuberculosis rates, particularly in the rural highlands. Since 2022, Partners in Health, a local NGO, and our ministry have been using an AI technology named Qure together with another called qXR. Both are AI-powered tools. They analyse chest X-rays and detect signs of TB. By flagging potential cases early, these tools ensure timely treatment in areas where radiologists are scarce, especially in the rural areas, reaching even the most remote communities. We have also noted some positives in agriculture, where AI is driving improvements in food security. A locally developed application called LAWA, or LAVA, empowers farmers, particularly smallholder farmers, to upload crop photos and ask questions in the local language, Sesotho. The AI analyses the images, identifies issues such as tomato blight, and provides tailored solutions that help farmers protect their crops and boost their yields. However, the developers face challenges in monetising the app and accessing accurate weather data, which are critical for maximising its impact. There are many, many opportunities. The Ministry of Agriculture, in collaboration with ITU and FAO, is developing an AI-powered chatbot.
This innovative tool delivers real-time advice to agriculture extension workers, offering guidance on crucial issues like pest control, ultimately improving their ability to support farmers across the country. Similarly, we are developing a public service chatbot to assist citizens. This is part of e-government service delivery. It assists citizens with practical queries, such as applying for birth certificates and registering deaths, streamlining access to essential services and boosting efficiency. However, I just want to talk a bit about some of the challenges that we are experiencing. The digital divide, for us, is real. It means that many rural farmers and patients still lack internet access, and this leaves them excluded from these innovations. Privacy concerns are another critical issue, requiring robust safeguards to protect sensitive data, whether it's patients' records or farmers' information. Lastly, the changing job landscape driven by AI demands re-skilling programs to ensure our workforce remains competitive; that is part of the challenges that we see. So I would like to conclude by saying that AI is making Lesotho's services faster, smarter, and more inclusive, from clinics to farms. By bridging access gaps, safeguarding privacy, and fostering local innovation, we can harness the full potential of AI to transform lives while minimizing and managing its risks. Thank you very much.


Jonathan Charles: Nthati, thank you very much indeed. Let’s look at it now from a content creator’s point of view. Joseph Gordon-Levitt.


Joseph Gordon-Levitt: I get to go next. Cool. Thank you. Thanks for having me. Well, I’ll talk about, you asked, what are some of the great things that are happening? Let’s talk about something that’s super impressive in my world. You’ve maybe seen some of these new generative video products that can just make a video seemingly out of nothing. And they’re incredibly impressive. And from a creative standpoint, I think there’s something deeply inspiring about the idea that without a huge budget, without the resources that a traditional Hollywood production would require, you can make something that looks, at this point, close to as good as anything. And in the coming years, it will be, I think, indistinguishable from large-budget content creation productions. And putting that in the hands of anybody, whether it’s a kid growing up in a suburb of LA like I was, or anybody else around the world, that’s, again, a really deeply inspiring prospect to put these kinds of creative tools in the hands of so many people. Now, to the risk and the potential downside that I see of products like this, the first thing you have to talk about is, well, how does a product like this really work? Did a company create a tool that can just, out of nowhere, make an amazing-looking video? And the answer’s no. They didn’t do that. What they did was, they made a tool that is fed videos that were made by millions of people, millions and billions of videos that were made by people, and algorithmically sort of crunches the data that make up those videos, and then can output sort of pattern-following videos. But it all comes from videos that people made. The tech product would not be able to make anything at all if it weren’t for all the videos that people made that were ingested into this model. So where did those videos come from? Well, they came from people. Were those people asked permission? No. 
No, the companies that are producing these tools don't ask permission, and in fact, they don't even, at this point, disclose what data they've used to train their models. Are people paid for these incredibly valuable products that are now generating enormous economic value off the backs of the creations of all the people whose videos were taken? No. No one is paid. And so right now there are a number of lawsuits in which various content creators are suing these companies and, you know, in my country those lawsuits have yet to be decided. We'll see what happens. There was a report put out by the Copyright Office in my country saying that, in the opinion of the Copyright Office, most of these use cases of training data being used without consent and compensation are probably illegal. The very next day after that report was put out, the head of the Copyright Office was fired. The executive administration that fired her would not give a reason why. But again, it was the day after this report was put out. And I want to zoom out for a bit and talk about how this same principle applies to our entire future of work, since we're here talking about the future of work. This is not just about the creation of videos. It's true that I have made my living throughout my life making film and TV, and certainly my fellow entertainment industry workers are concerned about this. But this same thing is going to happen, is already happening, and will continue to happen on a greater scale throughout our economy. It's not just videos that are being stolen and used to train these valuable tools. This same thing will happen whether you're working in content creation or in education, whether you're in academia, you work in marketing, you work in logistics, you work as an engineer or an architect: anybody who delivers their work digitally is, I think, threatened by this.
And if we go by the basic principle that big tech companies are allowed to take people’s data without permission and without compensation and use them to make money, what kind of economy are we headed for? What economic incentive do people have to be creative, to do great things, to work hard? I really think that if we want to have an economy in the future where people are incentivized to compete, to strive, to be excellent, we need to incentivize that hard work by compensating people when they create something of value. And I’ll leave it there for now. Thank you.


Jonathan Charles: Thank you very much. Some fundamental issues raised there, and that's really a challenge for governments, Tomas Norvoll, isn't it, balancing up the opportunities and those sorts of issues.


Tomas Norvoll: Yeah, it is definitely, and coming from the Ministry of Trade and Industry, I should be most interested in how this can reshape industry. But I would like to say that I believe AI is a huge possibility for us as a government also to uphold the welfare state and thereby uphold democracy. Because if you look at how at least this part of the world is developing, we see that we are lacking people, because we are getting older and older, one year every year. And we are using too many resources, and we cannot just throw money at any kind of problem that occurs in the public sector. So we have to find a more efficient way to work, especially in healthcare and education. The problem is, first of all, that those are maybe the sectors where you will find the most conservatism, because we have strong professions that might have some issues taking new tools into their work. But they are also the sectors where you will meet real risks concerning privacy and real risks of discrimination. So we have to find this balance to make sure that we are not afraid of using new tools to work more efficiently, but also that we protect our people, that we protect our kids and that we protect the rights that people have. I'm very sure that we address these challenges best with global cooperation. I don't think that every single nation can make up some kind of framework that will protect people from every kind of risk. We have to find international regulations. We have to invest in our people, making them resilient to the threats that AI can represent. And if we don't manage to do that, we will end up in a situation where what could be a very, very important tool that can do a lot of good for mankind will be something that people are sceptical about and scared to use.


Jonathan Charles: Tomas, thank you very much indeed. Well, how does it look from Washington? Jennifer Bacchus, thank you for joining us. U.S. State Department.


Jennifer Bacchus: Thanks. First and foremost, I just want to thank the Kingdom of Norway for hosting the IGF. It's a real pleasure to be here. I think the enthusiasm for Norway hosting is evident in the number of people who traveled out here for this event. So thanks, first and foremost, to the government, because we know it's a really big lift to put on one of these conferences. But to the topic of AI: I think the U.S. view of AI as an incredibly revolutionary technology, with applications and impacts in economic innovation, job creation, national security, healthcare, freedom of expression and beyond, is well known. As policymakers, one of our biggest concerns relates to efforts to restrict AI's development, which from our point of view could mean paralyzing one of the most promising technologies we have seen in generations, promise we heard about from my colleague from Lesotho. We want to embark on the AI revolution before us with a spirit of openness and collaboration to truly harness the benefits that AI has to offer. We need regulatory regimes around the world that foster the creation of AI technology rather than strangle it. In terms of risk, the United States is troubled by reports that some foreign governments, including here in Europe, are using policies that could tighten the screws on U.S. tech companies with international footprints. We will not accept that, and we think it's a terrible mistake. We need to focus now on the opportunities to unleash our most brilliant innovation and use AI to improve the well-being of our nations and their people. Excessive regulation of the AI sector could kill a transformative industry before it can really take root, and we need to make every effort to encourage pro-innovation, pro-growth, deregulatory AI policies.
Another major concern of the United States is that some authoritarian regimes have stolen and used AI to strengthen their military’s intelligence and surveillance capabilities, capture foreign data, create propaganda to undermine other nations’ national security, and violate human rights. The United States will block such efforts, and we will safeguard American AI and chip technologies from theft and misuse, work with our allies and partners to strengthen and extend these protections, and close pathways to adversaries attaining AI capabilities that will threaten all of our people. I would be remiss if I didn’t just note to all of our international colleagues and friends that partnering with such regimes never pays off in the long term, despite incentives that they may offer in the short term. So the United States is committed to making sure that our AI is the gold standard and that we are the partner of choice for the world. Thank you.


Jonathan Charles: Jennifer, thank you very much indeed. Let’s move to the health sector. Ishita, how do you see this balance of risks and opportunities?


Ishita Barua: Thank you for having me. As someone working at the intersection of medicine and AI, I see this technology entering healthcare at a very opportune time. We often talk about how AI is disruptive, but in healthcare, when applied thoughtfully, I think that AI isn't just disruption, it's restoration. Because for decades, healthcare systems across the world have been quietly accumulating what I define as care debt, or a care deficit. This is a growing gap between the care that people actually need and the care our systems can realistically provide, due to rising patient volumes, more complex conditions, and a workforce that simply hasn't scaled to meet the demands. It's not that healthcare professionals don't care. They care deeply, but they are exhausted from caring too much for too many with too few resources. For years, patients have been remarkably patient. They've tolerated queues, waiting lists, rushed appointments, and a lack of follow-up, not because they didn't notice, but because they understood that the system was completely overwhelmed and under strain. Now, for the first time, we have tools that can actually help both patients and healthcare professionals, repay that care debt, and go beyond it. AI scribes, for instance, are freeing doctors from the burden of documentation. AI-supported diagnostics are catching disease earlier. Language models are helping patients understand their care in more compassionate and clear ways. But the real frontier, I think, the one I find most hopeful, is where AI is not just optimizing healthcare but also transforming it. We need to improve not just care delivery, but also medical discovery. Take AlphaFold, which has mapped the structure of over 200 million proteins, the building blocks of life, accelerating our understanding of biology and drug discovery in ways we couldn't imagine a decade ago.
So everything from developing future cancer vaccines to plastic-degrading enzymes, and so on. Or robotic surgery, where AI systems are utilizing imitation learning, which means robots watching surgery videos and then performing simple procedures with the skill level of human surgeons. This is research led by Johns Hopkins University, offering a possibility to meet the global demand for surgeons, who are hard to train. And perhaps most astonishing, brain-computer interfaces in combination with AI. With the help of these devices, people with paralysis can regain the ability to move, speak, and even write through direct decoding of brain activity. So there are a lot of opportunities here, and I think that the biggest change could be to move from a reactive healthcare system to a proactive one; from managing illness to anticipating and preventing it; from treating symptoms to understanding biology at its deepest level. But we have to be intentional. I think we have to mitigate risks, and we are certainly very accustomed to that in healthcare. If these tools are only deployed in wealthy hospitals and well-resourced settings, trained on narrow, imbalanced datasets, and designed without equity in mind, we risk hard-coding existing inequalities into the future of care. So the real opportunity here isn't just about speed or scale, it's about equity, and about giving more people access to the kind of care they always deserved: care that is timely, proactive, and highly personal.


Jonathan Charles: Thank you very much, Ishita. I don’t think that’ll be the last mention of equity on this panel, and the importance of that. Let’s turn to one of the areas where obviously there’s a fair amount of discussion going on about regulation and how this should be dealt with. Juha Heikkila, perhaps I could ask you to speak. You’re an advisor, obviously, to the European Commission, which is taking a great interest in these issues.


Juha Heikkila: Thanks very much. So, first of all, the European Commission thinks actually that AI as a technology is a technology, or set of technologies, with great potential. And it can bring us many opportunities and many potential positive effects, as has already been mentioned by previous speakers. That’s why we have very strong and increasing support in the European Union for innovation in this area. But we also think that we need trust, because trust is the sine qua non for take-up, and take-up is the sine qua non for the benefits of AI to materialise. And that’s why we have innovation-friendly, risk-based regulation, which only intervenes where necessary. And that’s why we consider it to be innovation-friendly and supportive. As regards the labour markets and the impact on work, some of the questions are still quite similar to what they were nine, ten years ago, before we even knew about generative AI. So the jury is out. We don’t know exactly what the net effect will be on jobs. We cannot really put numbers on it. Studies are quite different in terms of the quantification of the impact. But of course, what we do know is that it’s mainly the routine tasks that are at risk. And in many ways, this line may now be moving up, because generative AI has proved to be very powerful. So those who are affected may be more numerous now than they were before. And in that regard, those who benefit from this may be a slightly decreasing set of people. However, I think it’s also important to bear in mind that jobs will also be created. Job creation is often distributed in its nature, whereas job losses may be more concentrated and more visible as a result. And if something can be automated, if a job can be automated, it doesn’t necessarily have to be. And many jobs do not lend themselves very well to that, particularly if we talk about robotics. 
I want to specifically include robotics here, because AI is not just on the internet, as I stated yesterday. It’s also embedded in robots and autonomous vehicles, etc. In those cases, of course, we are talking about a different aspect. And robots are not yet as dexterous as humans in many ways to carry out jobs which are very important, which seem banal to us. Think about folding garments or waiting tables, doing haircuts and things like that. So there are many, many very complicated questions related to this. I would like, however, to highlight one aspect which I think is quite important and interesting and may not always be presented in these discussions, which is the question of de-skilling. I’m old enough to remember the time before navigators, and I have noticed that the current generation, I don’t know where we are now with the alphabet, is it Z or are we already in AA, doesn’t necessarily know how to read maps. So there is this overall risk, of course, that the more we rely on AI and automation, the more we also become heavily dependent on it. And we may not necessarily have the required skills when we need them, if that service is not available. So from time to time, we should maybe use those skills as well and keep that memory alive, if you like, which enables us to carry out our tasks, which of course we most of the time are happy to let AI do for us, simply because it’s much more convenient, faster and also more efficient. So this is something that I think is worth directing some attention to, because it’s after all an important aspect. If we can’t find our way home anymore once we don’t have AI, that becomes a problem, of course. So I’ll stop there. Thank you.


Jonathan Charles: Thank you very much indeed. Let me move on to our next question now, question two. So what guiding principles should shape national and global policies to ensure that AI supports decent work and fair transitions, rather than deepening existing inequalities? It’s something that Joseph Gordon-Levitt touched upon, the fairness of the transition. But let’s hear first of all from you, Jennifer.


Jennifer Bacchus: Look, I think we can all recognize and we can all say we have clearly a changing technological environment and we need to adapt to this new reality without destroying our way of life. I think this is something we can all agree on. And we need to not disinherit workers. We seek, in the most basic terms, to secure our economy, restore our middle class and uphold America as the planet’s best home for innovators. As Vice President Vance has said, the United States will maintain a pro-worker growth path for AI, so it can be a potent tool for job creation in the United States. AI will facilitate and make people more productive. I think this is something we can all agree on. But it’s also not going to replace human beings. We refuse to view AI as a purely disruptive technology that will inevitably automate away our labor force. And I think we’re hearing the ways that it can help us and not just harm us. We believe and we will fight for policies that ensure that AI is going to boost worker productivity, whether in the healthcare sector, in the agricultural sector, et cetera, that it’s going to improve job quality and working conditions and unlock cutting-edge economic potential. We expect that American workers will reap the rewards with higher wages, better benefits, and safer and more prosperous communities. AI is creating new roles like AI trainers, data analysts, and human-machine teaming managers. Generative AI can level the playing field for access to jobs, making it easier to build the technical knowledge and skills that have historically excluded otherwise very qualified workers. We also see potential for AI to help with hiring. If designed transparently, AI systems can provide a record of how employment decisions are made and can help us ensure fairness is embedded in the process. As AI creates new jobs and industries, our governments, businesses, and labor organizations have an obligation to work together to empower workers all over the world. 
To that end, for all major AI policy decisions coming from the federal government, the Trump administration will guarantee American workers a seat at the table. President Trump will always center American workers in our AI policy, and we want to work with all of you to emulate this internationally. Thank you.


Jonathan Charles: Thank you very much. Joseph Gordon-Levitt, you touched on unfairness in your previous answer. How do you see it in this case?


Joseph Gordon-Levitt: Thank you. So your question is, what can we do? I think there’s a basic principle that we ought to adhere to, and this is a perfect place for us to be talking about it. That basic principle is that your digital self, and in the context of this panel, your digital work, belongs to you. A human being that produces some data, whether that data is their work, their ideas, their connections, something they wrote down, some digital deliverable from their job, any of that data, the human being should have some economic stake in that data. It shouldn’t be allowed for a tech company to take that data and not compensate or not get consent from that human. Now, I’m not saying that the tech companies shouldn’t be able to make money. They should be able to make money. I very much admire the work that Meta or OpenAI are doing and the tools being provided, but it’s pretty clear that these companies are generating huge, huge economic value. And so it doesn’t make sense to me why all of that economic value should go to the tech companies and zero percent should go to the human beings whose data are being taken without consent or compensation. I think if you set up a system whereby people can be compensated for their creativity, for their work, for their data, now you establish a vibrant and vital market. And this is what we want for our economy. This is what we historically know works well for economies. We’re sitting here in a Western, democratic, capitalist country. I believe in that. But if we want that to continue to thrive, then we have to set up a way to compensate workers for good work that they do. If you ask some of the leaders in Silicon Valley where we’re headed, they talk about something called universal basic income. Because people won’t be able to make money anymore, because AI will be providing all of the economic value. 
The sleight of hand that’s going on in that statement, though, is the idea that the AI is generating all this economic value, when in fact there is no economic value without all the human contributions that were hoovered up into these machine learning models. And so I think if we really want to have a pro-worker stance, I admire the United States’ advocacy to make workers central for our policy moving forward. If we want to really do that, though, then we’ve got to set up a way to compensate people for the value that they’re creating. I don’t think that has to strangle the innovation. I think that that is the innovation that we should strive for, and that we can meet that challenge with pride and positivity to say, hey, let’s build something that really is good for everybody.


Jonathan Charles: Thank you very much indeed. Well, that was fairly clear. So a good time, I think, to turn to OpenAI. Sandro?


Sandro Gianella: I think to me, when we think about the way we as humans and workers are using these tools, and how we think about the future of work, what it means to work, what economic value we are creating, and what the right balance is between the economic value these tools are creating and how we will use them, I think there’s a lot of joint learning that we are still doing and that is happening right now. I think there are three things that to me are important to add to the discussion. One, and I think the minister touched on it really well, is making sure that there is access to these tools. Because one of the things that the IGF community is so good about, and has been fighting the good fight for, is to make sure that we have equitable access and that we’re not adding to a digital divide that we know we are all concerned about and want to mitigate. One of the things we did, maybe as more of a test, but it turned out to be really successful, was to ask what the different ways are for people that might not be comfortable using a phone or a computer to access this technology. I touched on one in my speech earlier, but another one we did was to set up a landline to call into and access this technology, called 1-800-CHATGPT. A lot of people thought it was a bit of a joke when we launched it, but I think it speaks to something that is important to us, which is: let’s lower the barriers for access to this technology as much as we can. The second one, when I think about the future of work, is that the combination of work and learning to me is very much intertwined. For as long as humanity has existed, we have learned new skills and applied those skills to things that we found meaningful as humans, that help our communities, that provide economic value. And one of the things that excites me about the way that this technology is being used is that it can be used to learn itself. 
We know from the way that people are using this technology that they’re actually finding great ways to learn new skills, to learn about the world, to learn about our communities, and to learn new skill sets. So that’s one where I hope we can be creative in working with governments and others to roll out ways to make sure people have access, and they can then use the tools to learn and to reskill. And the last point I’d make is, as was mentioned in different parts of this panel, this is a general-purpose technology. It is posing important, optimistic, sometimes difficult questions about the right balance that we’re striking, about how different communities and different industries are grappling with the changes that are coming. I think the minister touched on it well in his speech. And I think that shows that the how of this technology being implemented, the shape and the way in which we allow it to be used with specific industries and communities, and the principles that we rely on, is important to us. Obviously, there’s a difference in using these tools from healthcare to the creative sectors. And what we’re seeing is that all of these communities are grappling with these questions. We hope we can do our part by being part of these discussions, by showing up, by being here, and by doing that in as transparent a way as we can.


Jonathan Charles: Sandro, thank you very much indeed. From the Norwegian government’s point of view, Tom.


Tomas Norvoll: Yeah, well, thank you. Well, just two points. First of all, it is extremely important that we make the tools accessible. I mean, some of the AI tools can play a very democratic role, because they will give virtually everybody access to all kinds of information. If you look 10 years back, 20 years back, there could be a gap between those who had access to information, who had long educations, and those who didn’t. That also means that with the right tools, people without extraordinary skills can do extraordinary things. And that opens up a kind of democratic arena, which is important. But we have to make sure that we do this in a responsible way. We have to make sure that the systems reflect our values, that they protect our rights, and that they promote dignity in work. For example, for me, it would be great to have an AI system that could, in a way, monitor my health, making sure that I make the right decisions on how to take care of my health. But under no circumstances would I like the insurance company to get the same data. That would be just catastrophic in my example. And we also have to find systems to make sure that my data, whether I create something or it is just data about me, are mine. We have to find a system where there can be some kind of ownership, making sure that nobody can just steal it and use it as their own, as a way to make profit out of it. And I think that is one of the big things that we have to find a solution to. As I said earlier, the only way to do that is to do it internationally. I think the EU is doing a great job in trying to find a framework through the AI Act to see how we can make sure that we put people first when we are discussing AI. But we still have a very, very long way to go before we have an optimal system for this.


Jonathan Charles: Tomas, thank you very much. I’m very glad I wasn’t wearing an AI health monitor as I had my very large cooked breakfast this morning, actually. Right, let’s move on to question three. Education systems are under pressure to evolve alongside the change in technology. How can AI expand access to quality education? What does quality education even look like in a changed labour landscape? Nthati, let me start with you.


Nthati Moorosi: Thank you very much once again. I think AI is coming at a time when we in Lesotho are just dealing with our education system, trying to find ways of improving it. Lesotho’s education system faces tremendous challenges. We still have students who walk many, many kilometres to get to a school. When they get to a school, it’s one teacher to a minimum of about 60 students in one class. And some rooms hold two classes at once. So AI, I see it as a tool that is going to change our education system considerably. We have challenges from basic education to tertiary education. But I want to confine my responses to the challenges and opportunities in the basic education system. Challenges include, as I said, overcrowded classrooms with one teacher for over 60 students in some cases. We also have a very specific case of vulnerable groups such as herd boys who live in the highlands, who most of the time miss school to attend to livestock. These conditions hurt the education of many young people each year. Some even drop out. AI offers exciting opportunities to change this. It enables the creation of personalised learning experiences for every student, whether they are in a busy city school or a remote highland village. A learner in the city and a herd boy in the rural areas can receive lessons tailored to their specific pace and learning style and language preference, such as Sesotho, or Sephuthi or isiXhosa, which are minority languages in the country. For instance, AI applications can provide mathematics lessons that adapt to each student’s specific needs. Many herd boys struggle with English fluency, yet most subjects are taught in the English language. This approach not only supports overworked teachers, but also helps under-resourced students, making education more accessible to everyone. 
However, we must ensure that AI solutions align with our culture. We must ensure that teachers are not left behind. That’s why our AI policy emphasizes building a human-centered AI ecosystem. To sum up, AI can expand access to education in Lesotho by personalizing learning and reaching remote areas, but only if it’s culturally relevant. AI can improve the quality of our education.


Jonathan Charles: Thank you. Nthati, thank you very much indeed. Let’s turn to the European Commission, Juha Heikkila.


Juha Heikkila: Thank you, Jonathan. Indeed, as the Minister said, AI can actually improve access to education, and it can allow the building of tailor-made courses and tailor-made study programmes, and in that sense it can be a great asset. However, I think it’s preferable to see AI in a support role rather than taking over, though it can be a significant contributor to access to training and education. More generally, people should be ready to work with AI. It’s not the case that all people will have to be trained in AI, but some kind of AI literacy will have to be part of their education. This is because there will be increased co-working with AI, whether it’s standalone or physically embedded in robots or other kinds of autonomous systems, and it is important for the workforce to have the opportunity to up-skill and be part of that. We have just recently launched what we call the AI Continent Action Plan, in which we focus on skills as one of the five pillars. Promoting AI literacy is crucial there, and we also want to capitalize on the network of what we call digital innovation hubs, which we have built over the years, where people can come together and participate in the digital transition. So we are working with governments and the ecosystems in regions so that they have the necessary wherewithal to take part in the digital transition and benefit from it. We have focused on that quite strongly in our most recent policy statements, and we will be building measures on that as well.


Jonathan Charles: Thank you, Juha. Chris Yiu?


Chris Yiu: So I think this question of the AI opportunity in education is significant, and I think the Minister and others have touched on some of the potential ways that it can benefit people. The way that I like to think about it is it’s both for educators and for learners. So for the educators here, having access to tools and technologies that help them do their job better and more effectively and more productively is tremendously important. There are a wide range of challenges around the world that people face in schools and other educational environments. There are many things that AI can do. There are many things that AI can’t yet do, and where human connection and the human touch are important. So I think it’s a great opportunity to use the technology to lift some of the burden for educators, to take away some of the administration, to handle some of the work they do when they’re not face-to-face with students. That is tremendously important in many, many contexts. And for the learners themselves, to be in an environment where for the first time now you have the opportunity perhaps not to be in a one-size-fits-all learning environment, but maybe to have access to additional tools and support that speak to you, your interests, your passions, your needs, that, again, is something which historically has been the preserve of a few and now ought to be available to a far wider range of students and learners around the world in different settings. So it should be a great and I think will be a great leveling technology, and we see tremendous examples of our AI models and others being used around the world for this, and particularly today. 
The minister’s point about language and culture, one of the things which is so dear to our hearts around our open-source approach here is we see people in different settings taking the models that we produce and invest in, but then fine-tuning those to reflect local language, culture, dialects, nuance, all of which is incredibly important for the different communities that we serve. So I think in all of this, it’s important to think about what we’re doing. We have to equip our young people both to make the best of these tools and also to be fluent in them when they go out into the wider world. Thank you.


Jonathan Charles: Chris, thank you very much indeed. Ishita?


Ishita Barua: Thank you. In a world where AI can generate content faster than we are actually able to consume and read it, I think the true strength of an education system lies not in how seamlessly we’re able to adopt new tools, but in how deeply it values domain expertise, cultivates human judgment, and actively resists de-skilling and cognitive outsourcing. Because we don’t need systems that simply automate every task. We need systems that sharpen our ability to think independently, reason critically, and build deep understanding. How are we incentivising that? How are we incentivising healthcare professionals to cultivate their skills and domain expertise if they can simply ask a language model to do the task for them? And also, what really worries me is that we’re seeing a quiet shift. More and more people read summaries instead of full text. They write by prompting a language model with a few keywords and receive a polished paragraph in return. Where is the learning in that? It may feel very productive, but it’s changing something very fundamental about how we learn. For me, for instance, writing isn’t just a way to express thought. I write to think. I don’t truly understand something until I’ve worked through it on the page, failed and revised it, and clarified it. The French philosopher Descartes famously said, cogito, ergo sum. I think we need a revision of that. My take on it is scribo, ergo cogito, ergo sum. I write, therefore I think, and therefore I am. APPLAUSE I think that quality education in this new labour landscape means creating learning environments where people learn to use AI, yes, that is important, but also when to question it, override it and think beyond it. And it’s about forming thinkers, not just prompt engineers.


Jonathan Charles: Thank you very much, Ishita. APPLAUSE I think that goes back to the question I raised at the very beginning, where chief executives are worried about how their junior staff will learn to make the judgments and gain the experience that will allow them to become more senior in their careers. Thank you very much. We have about eight minutes left, and fortuitously we have eight panellists, so that tells me without the use of AI that we have about a minute for each of you. I would suggest that we make this a quick-fire round. Let’s sum up then from each of you how you see everyone working together to try to get this landscape right, and any final reflections that you have on how we ensure that the benefits are shared across sectors and not just concentrated in tech companies or tech hubs. A minute each. Ishita, you get the privilege of going first.


Ishita Barua: I think there is a dimension that we haven’t spoken about, and that is the dimension of gender. There are two Nordic studies that show women are adopting tools like ChatGPT more slowly than men, not because of a lack of competence, but due to differences in digital confidence and how technology is actually introduced in their work environments. And at the same time, women are particularly susceptible to AI-driven change. In fact, women in high-income countries are three times more exposed to automation risks, because they are overrepresented in roles that are highly susceptible to AI-driven change: administrative support, healthcare, education. If we don’t address this explicitly, AI risks becoming a new layer of structural inequality, reinforcing old patterns under the guise of innovation. And that’s why inclusive growth must also mean gender-responsive AI strategies in workforce development, in access to tools, and in who gets to shape the systems that we are actually building.


Jonathan Charles: Well done, Ishita, you got it within a minute. Thank you very much indeed. Sandro, you’ve heard a lot here. Summarising that in a minute will probably be quite difficult, but off you go.


Sandro Gianella: I’ll give it a shot. I think the two things I would add are curiosity and agency. If we think about learning and the future of work, it is about making sure that, through our institutions, our tools and our technology, people are able to feel the agency that they have in shaping the environment that we live in, and that they remain curious. And I’d like to pick up the point that Ishita made around whether we can still learn with these tools, which is a very important question that we think quite a lot about. We have an entire team of education practitioners working through the ways in which learners and students and teachers have tailored the tools so they don’t jump straight to the answer, so they push back on our thinking and make us reflect critically about how we’re approaching a certain subject. And my plea is that we find ways to use these technologies not to narrow the scope of the things that we think about and know, but to broaden it, and that we allow these tools to help us think more critically about our own biases and reflections. I think there’s a way to do that.


Jonathan Charles: Sandro, thank you very much. And that was just very interesting, wasn’t it? There was a PISA survey out recently which showed that since 2010, our ability to learn has started dropping off quite substantially. And of course, we all know what started in 2010. Let’s move on to you, Juha, from the European Commission.


Juha Heikkila: Right, so training is very important, of course, as was already mentioned and highlighted. We do need to be prepared in order for us to reap the benefits, and I think this panel has made it very clear: we need to be ready, we need to be prepared, and that is key. I think it is also important to channel funding and research towards public objectives. Health in particular, as has been mentioned many times here, is immensely important and obviously benefits businesses, society at large, and individuals in many ways, but other public services as well. Therefore, we are very closely monitoring this in the European Commission to see how these things develop, how things pan out, and what will be the impact on the workforce and the workplace in general. I think what is encouraging here is that there is increasing attention to this. A panel like today’s is not an exception; we see these different events. So I think there is a common awareness that this is an important aspect we need to focus on.


Jonathan Charles: Thank you very much. Tomas, so your final reflection on Tomas Norvoll and where we stand.


Tomas Norvoll: Well, first of all, I think it is important that we dare to use these new tools. It is also important that we make sure that everybody has access, whether you, like me, live above the Arctic Circle, or if you live in a metropolitan city in Europe or in the United States, you have to have access. So it’s not just the elite that have access to the new tools that are emerging. From a governmental side, it is extremely important that we have a framework, that we have rules, regulations that put people first into the discussion. And finally, I think it is important that we arrange more IGFs and other arenas so that we have a place to discuss how we are going to cope with the new challenges that are ahead of us.


Jonathan Charles: Tomas, thank you very much indeed. Jennifer.


Jennifer Bacchus: So in my position, where I’ve been for three years, I’ve traveled a lot around the world, I’ve talked to lots of people, and the number one request I’ve gotten up to now has always been about digital and cyber workforce development. It is at the center of what everyone in the world wants. And I think what we’re seeing is an evolution where we’re now talking about AI workforce development. And just like cyber and digital workforce development, where the United States led the way, this administration is focused on making sure that our workforce will be prepared for this new economy, that we can demonstrate how you do it, how you think about these things in a deliberate way, so that in fact we can have workers that can use this technology, that can be the human in the loop, that can question things, not just accept the answers. I think these points are incredibly important: these are enabling technologies, not meant to replace humans, but meant to help us be more productive. Just like calculators were a tool. And I have to tell you, my children are still learning how to do math. They don’t like it. They wanna do math on the calculator. But just like kids are still learning how to do math, kids are still gonna learn how to reason, how to write, how to do these basic things. And so how do you then say, okay, now you know those basic skills, and now you’re gonna use AI to enhance that. So I just have to say, first of all, I am optimistic that we are gonna be able to figure out a way to do this, that we do have people around the world looking to promote policies and academies where you can do the AI education. But I think just to conclude, what we’ve seen is that too often, regulations are really being designed to try to control AI, rather than to unleash it. And ultimately, what we need to do is look at AI as a tool of prosperity, and we need to not clip the wings of the new companies. 
We need to embolden our innovators so that we can have all of the positive benefits that we talked about, and ultimately, consumers and workers alike will be able to benefit. So all of our efforts should be aimed at supporting the innovation that really will deliver real-world benefits.


Jonathan Charles: Thanks. Jennifer, thank you very much indeed. Nthati.


Nthati Moorosi: Thank you very much. I think the biggest words for me in closing are inclusivity and collaboration. AI-driven economic growth requires all of us to act: governments, international bodies, and other stakeholders must ensure equitable benefits across all regions and sectors. Governments have to build AI enablers and prioritize governance and ethical policies. A whole-of-society model has to be engaged, with academia, civil society, and industry creating sustainable AI ecosystems that foster transparency and inclusivity and are adaptable to local contexts like agriculture and small businesses. International partners, in the same vein, need to support capacity-building efforts that are sensitive to local contexts, including AI tools and local-language models, and global support is needed for technical training that can equip young people with the skills necessary for high-quality AI jobs rather than entry-level positions, thereby promoting economic equity. And the private sector, in the same way, should collaborate with governments to create ethical AI solutions that respect human rights and preserve cultural heritage. Thank you.


Jonathan Charles: Nthati, thank you very much indeed. Chris Yiu from Meta.


Chris Yiu: Okay, a couple of things to conclude here. Number one, I think we all know in the end, technological progress is the route to innovation, it’s the route to prosperity and a safer and more secure world. The AI technology that we’re talking about at the moment is very much in its infancy, and so innovation is important. We should pay attention to the questions around rules and regulation, but we mustn’t get too far ahead of this to a place where our ability to unlock these benefits is overly constrained. I think governments around the world are talking about that a lot now, and we think it’s a very important thing for people to stay focused on. The second is just to say, I think sometimes this can be a very abstract conversation about AI, and I just wanna kind of bring us back to the human side of this. I have a pair of AI glasses with me. These can translate people speaking in other languages so that I can understand them. They can describe things for people who are blind or visually impaired. This is a tremendous humanizing technology, and that’s one of the reasons why innovation is so important.


Jonathan Charles: Fascinating. Thank you very much, Chris. Very useful for Americans trying to understand British people, I’m sure. Last but not least, Joseph Gordon-Levitt.


Joseph Gordon-Levitt: Thank you. I wanna pick up on something, Chris, my fellow panelist from Meta said about this technology not only being good for the companies, but wanting it to be good for the world. With respect, Meta cannot prioritize what’s good for the world. It’s not built to do that. It’s a for-profit company, and it has to prioritize value for its shareholders. That’s what it has to do. I ran a startup, obviously much smaller than Meta, but I know what it’s like to have investors, to have shareholders, and to have to move your numbers. The way this technology will work is in partnership with innovative and proactive companies like Meta or OpenAI, working together with policy makers who do set up rules. I think it’s a false dichotomy to say that innovation is the opposite of rules. If we don’t have any rules, then the competitive market dynamic will force these companies to build stuff that is bad for the world, that harms the world. They can’t prevent that themselves; the private sector can’t do it by itself. That’s why I take a lot of heart in being here with so many people doing great work in the public sector, because we need a partnership. We need a partnership between the private sector and the public sector. We need to have great companies building great things, and then we need to have rules of the game that can help that benefit everybody. That’s what we need, I think.


Jonathan Charles: Thank you. Thank you very much indeed, Joseph Gordon-Levitt. Thank you to our panel. I’m really struck by this discussion of the past hour and a half, because it reminds me very much of the discussions that I chaired in the early days of the Internet Governance Forum, the first forums 20-odd years ago, where we were then discussing how to get through the challenges of the internet in its early days. Somehow we found a way through that, and that gives us hope we can find a way through this as well. But many of the things that were said today remind me exactly of the sort of arguments that we were discussing then. I’d like to thank our panel. As William Shakespeare might have said, and certainly wrote, time’s winged chariot has got away from us. You can look up William Shakespeare on AI, by the way, if you’re not sure who he is. Thank you to the panel. Thank you to you in the audience here in Lillestrom. Thank you to the audience online, and I hope you have a good morning. Thank you.



Tomas Norvoll

Speech speed

142 words per minute

Speech length

1409 words

Speech time

593 seconds

AI is already embedded in daily tools and transforming sectors like energy, healthcare, and agriculture – not just a future technology

Explanation

Norvoll argues that AI is not merely a future concern but is already deeply integrated into current operations. He emphasizes that AI is actively being used today to optimize wind and hydropower, predict energy demand, and create more sustainable shipping solutions.


Evidence

Norwegian companies using AI to accelerate green transition by optimizing wind and hydropower, predicting energy demand, and creating smarter, more sustainable shipping


Major discussion point

AI’s Current Impact and Transformation of Work


Topics

Future of work | Sustainable development


Agreed with

– Sandro Gianella
– Nthati Moorosi
– Ishita Barua
– Junha Li

Agreed on

AI is already transforming current work and society, not just a future concern


Risk of widening digital divides if access to AI tools and training is not equitable across populations

Explanation

Norvoll warns that without proper access to AI tools and training, society risks creating greater inequality between those who can leverage AI and those who cannot. He stresses the importance of ensuring that AI empowers rather than marginalizes workers.


Evidence

Those who know how to work with AI will be in high demand, and those without access to tools or training risk being left behind


Major discussion point

Risks and Challenges of AI Implementation


Topics

Digital access | Future of work


Agreed with

– Sandro Gianella
– Nthati Moorosi
– Junha Li

Agreed on

Need for equitable access to AI tools to prevent widening digital divides


Need for international cooperation and frameworks that protect people while enabling innovation, as individual nations cannot address AI challenges alone

Explanation

Norvoll advocates for global collaboration in developing AI governance frameworks, arguing that the challenges posed by AI are too complex for any single nation to address effectively. He emphasizes the need to balance innovation with human protection and rights.


Evidence

Investment of a billion kroner to establish six national research centers on artificial intelligence to study AI’s effects on society and strengthen innovation


Major discussion point

Governance and Regulatory Approaches


Topics

Data governance | Future of work


Agreed with

– Junha Li
– Juha Heikkila
– Nthati Moorosi

Agreed on

Importance of international cooperation and governance frameworks for AI


AI can help governments maintain welfare states more efficiently, particularly in healthcare and education sectors facing demographic challenges

Explanation

Norvoll presents AI as a solution to demographic challenges, particularly aging populations that strain public resources. He argues that AI can help governments deliver services more efficiently while maintaining democratic welfare states, especially in conservative sectors like healthcare and education.


Evidence

Europe is getting older one year every year, lacking people and using too much resources, requiring more efficient ways to work in healthcare and education


Major discussion point

Future of Work and Economic Impact


Topics

Future of work | Online education



Sandro Gianella

Speech speed

181 words per minute

Speech length

2451 words

Speech time

809 seconds

AI offers transformative potential in pharma, scientific research, education, and climate work, with tools now accessible to small businesses and individuals

Explanation

Gianella highlights AI’s broad applications across multiple sectors, emphasizing that these powerful tools are no longer limited to large corporations but are accessible to smaller entities and individuals. He argues this democratization of AI tools represents a significant shift in who can benefit from advanced technology.


Evidence

Collaboration with Moderna and Sanofi for vaccine development, work with European laboratories at Sinospoor and Max Planck University, partnership with Estonian government for AI in schools, Amazon GPT project with Federal University of the Amazon in Brazil


Major discussion point

AI’s Current Impact and Transformation of Work


Topics

Future of work | Online education | Sustainable development


Agreed with

– Tomas Norvoll
– Nthati Moorosi
– Ishita Barua
– Junha Li

Agreed on

AI is already transforming current work and society, not just a future concern


AI primarily provides task-level automation rather than job-level replacement, enhancing productivity while creating new roles

Explanation

Gianella argues that current AI applications focus on automating specific tasks within jobs rather than replacing entire positions. He contends that this approach enhances human productivity and creates new types of work opportunities, citing examples where AI reduces routine work to allow focus on higher-value activities.


Evidence

Radiology example where AI helps detect cancer earlier but increases need for radiologists to see more patients faster; Color Health example where AI reduces paperwork time for clinicians; Multimodal Digital Green reducing agricultural extension service costs from $35 to $0.30 per farmer


Major discussion point

Future of Work and Economic Impact


Topics

Future of work


Agreed with

– Jennifer Bacchus
– Chris Yiu
– Ishita Barua

Agreed on

AI enhances rather than replaces human work through task-level automation


Disagreed with

– Joseph Gordon-Levitt

Disagreed on

Compensation for data used in AI training


Need for broad, equitable access to prevent widening gaps between those with and without access to AI technology

Explanation

Gianella emphasizes the importance of ensuring AI tools are accessible to all populations to prevent the creation of new forms of digital inequality. He advocates for innovative approaches to reach underserved communities and make AI tools available regardless of technical infrastructure limitations.


Evidence

Integration with WhatsApp for areas without high-speed internet or advanced hardware; 1-800-ChatGPT landline service; OpenAI Academy training 1.4 million people since launch


Major discussion point

Equity and Inclusion Concerns


Topics

Digital access | Future of work


Agreed with

– Tomas Norvoll
– Nthati Moorosi
– Junha Li

Agreed on

Need for equitable access to AI tools to prevent widening digital divides


Combination of work and learning is intertwined, with AI tools helping people learn new skills throughout their careers

Explanation

Gianella argues that AI tools can serve dual purposes as both work aids and learning platforms, enabling continuous skill development. He suggests that people can use AI to learn about the technology itself and acquire new capabilities that remain relevant as the job market evolves.


Evidence

People using AI technology to learn new skills, learn about the world, and learn new skill sets; team of education practitioners working on ways to tailor tools to push back on thinking and reflect critically


Major discussion point

Education and Skills Development


Topics

Online education | Future of work



Joseph Gordon-Levitt

Speech speed

142 words per minute

Speech length

1522 words

Speech time

640 seconds

AI companies take people’s creative work without permission or compensation to train valuable models, threatening economic incentives for creativity

Explanation

Gordon-Levitt argues that AI companies are building valuable products by using content created by millions of people without obtaining consent or providing compensation. He contends this practice undermines the economic foundation that incentivizes creative work and could lead to a system where people have no motivation to create or work hard.


Evidence

Generative video products that create content from videos made by millions of people without permission or payment; Copyright Office report suggesting most training data use is illegal, followed by the firing of the Copyright Office head the next day


Major discussion point

Risks and Challenges of AI Implementation


Topics

Intellectual property rights | Future of work


Disagreed with

– Sandro Gianella

Disagreed on

Compensation for data used in AI training


Digital work and data should belong to workers, with compensation systems needed to maintain economic incentives for human creativity

Explanation

Gordon-Levitt advocates for a fundamental principle that individuals should have economic ownership of their digital contributions and data. He argues that establishing compensation mechanisms for human-generated data would create a vibrant market economy while ensuring that technological advancement doesn’t eliminate economic incentives for human creativity and innovation.


Evidence

Proposal that tech companies should share economic value with humans whose data is used; criticism of universal basic income concept as masking the reality that AI value comes from human contributions


Major discussion point

Future of Work and Economic Impact


Topics

Intellectual property rights | Future of work | Consumer protection


Disagreed with

– Chris Yiu

Disagreed on

Role of private sector vs public sector in AI governance



Nthati Moorosi

Speech speed

126 words per minute

Speech length

1063 words

Speech time

504 seconds

AI is helping tackle healthcare challenges like TB detection and agricultural issues through locally developed applications

Explanation

Moorosi describes how AI is being successfully implemented in Lesotho to address specific local challenges, particularly in healthcare and agriculture. She emphasizes that these applications are making services more accessible to rural and remote populations who previously had limited access to expert assistance.


Evidence

AI technology from Qure.ai, qXR, analyzing chest X-rays to detect TB since 2022; LAWA/LAVA app allowing farmers to upload crop photos and ask questions in Sesotho language; AI-powered chatbot for agriculture extension workers developed with ITU and FAO


Major discussion point

AI’s Current Impact and Transformation of Work


Topics

Digital access | Sustainable development | Multilingualism


Agreed with

– Tomas Norvoll
– Sandro Gianella
– Ishita Barua
– Junha Li

Agreed on

AI is already transforming current work and society, not just a future concern


Digital divide excludes rural populations from AI innovations, and privacy concerns require robust data protection

Explanation

Moorosi identifies the digital divide as a major barrier preventing rural farmers and patients from benefiting from AI innovations. She also highlights privacy concerns as a critical issue requiring strong safeguards to protect sensitive personal and medical data.


Evidence

Many rural farmers and patients lack internet access; need for robust safeguards to protect sensitive data including patients’ records and farmers’ information


Major discussion point

Risks and Challenges of AI Implementation


Topics

Digital access | Privacy and data protection


Agreed with

– Tomas Norvoll
– Sandro Gianella
– Junha Li

Agreed on

Need for equitable access to AI tools to prevent widening digital divides


Need for human-centered AI policies that align with cultural values and don’t leave teachers and workers behind

Explanation

Moorosi advocates for AI development that respects local culture and ensures that existing workers, particularly teachers, are supported rather than displaced. She emphasizes the importance of creating AI solutions that are culturally relevant and inclusive of local languages and customs.


Evidence

AI policy emphasizing building a human-centered AI ecosystem; need for AI solutions to align with culture and ensure teachers are not left behind; personalized learning in local languages like Sesotho, Siphuthi, or isiXhosa


Major discussion point

Governance and Regulatory Approaches


Topics

Cultural diversity | Multilingualism | Future of work


AI can personalize learning for students in different environments and languages, supporting overworked teachers and under-resourced students

Explanation

Moorosi argues that AI can address educational challenges by providing personalized learning experiences tailored to individual students’ needs, pace, and language preferences. She sees this as particularly valuable for supporting overwhelmed teachers and reaching students in diverse circumstances, from urban to rural settings.


Evidence

Overcrowded classrooms with one teacher for over 60 students; head boys in highlands who miss school to attend livestock; AI applications providing mathematics lessons adapted to each student’s specific needs and language preferences


Major discussion point

Education and Skills Development


Topics

Online education | Multilingualism | Digital access


Inclusive growth requires whole-of-society engagement and international support for capacity-building in local contexts

Explanation

Moorosi calls for comprehensive collaboration involving governments, academia, civil society, and industry to create sustainable AI ecosystems. She emphasizes the need for international support that is sensitive to local contexts and focuses on building high-quality skills rather than just entry-level capabilities.


Evidence

Need for whole-of-society model engaging academia, civil society, industry; international support for AI tools in local languages and technical training for high-quality AI jobs rather than entry-level positions


Major discussion point

Equity and Inclusion Concerns


Topics

Capacity development | Cultural diversity | Future of work


Agreed with

– Tomas Norvoll
– Junha Li
– Juha Heikkila

Agreed on

Importance of international cooperation and governance frameworks for AI



Ishita Barua

Speech speed

176 words per minute

Speech length

1454 words

Speech time

494 seconds

AI is restoring healthcare by addressing care debt through scribes, diagnostics, and patient communication tools

Explanation

Barua argues that AI is arriving at an opportune time to address the accumulated ‘care debt’ in healthcare systems worldwide. She contends that AI tools can help repay this deficit by reducing administrative burdens on healthcare workers and improving patient care quality and accessibility.


Evidence

AI scribes freeing doctors from documentation burden; AI-supported diagnostics catching disease earlier; language models helping patients understand care more compassionately; AlphaFold mapping 200 million protein structures; robotic surgery using imitation learning; brain-computer interfaces helping paralyzed people regain movement and speech


Major discussion point

AI’s Current Impact and Transformation of Work


Topics

Future of work | Sustainable development


Agreed with

– Sandro Gianella
– Jennifer Bacchus
– Chris Yiu

Agreed on

AI enhances rather than replaces human work through task-level automation


Risk of hard-coding existing inequalities if AI tools are only deployed in wealthy settings with narrow datasets

Explanation

Barua warns that without intentional equity considerations, AI implementation in healthcare could perpetuate and amplify existing disparities. She argues that if AI tools are primarily developed and deployed in well-resourced settings with limited demographic representation, they risk embedding current inequalities into future healthcare systems.


Evidence

Tools deployed only in wealthy hospitals with resources, trained on narrow, imbalanced datasets, designed without equity in mind


Major discussion point

Risks and Challenges of AI Implementation


Topics

Digital access | Future of work


Quality education must value domain expertise and critical thinking, not just tool adoption, to prevent cognitive outsourcing

Explanation

Barua argues that true educational quality lies in developing deep thinking skills and domain expertise rather than simply adopting new AI tools. She warns against cognitive outsourcing where people become overly dependent on AI for tasks that require human judgment and critical analysis.


Evidence

Concern about people reading summaries instead of full text, writing by prompting language models with keywords; personal example of writing as a way to think and understand, referencing Descartes’ ‘cogito ergo sum’ and proposing ‘scribo, ergo cogito, ergo sum’ (I write, therefore I think, therefore I am)


Major discussion point

Education and Skills Development


Topics

Online education | Future of work


Women adopt AI tools more slowly due to digital confidence differences and face higher automation risks in their predominant work roles

Explanation

Barua highlights a gender dimension in AI adoption, noting that women are slower to adopt AI tools not due to lack of competence but due to differences in digital confidence and workplace introduction methods. She warns that women face disproportionate risks from AI-driven changes due to their concentration in roles most susceptible to automation.


Evidence

Two Nordic studies showing women adopting ChatGPT more slowly than men due to digital confidence differences; women in high-tech industries three times more exposed to automation risks due to overrepresentation in administrative support, healthcare, and education roles


Major discussion point

Equity and Inclusion Concerns


Topics

Gender rights online | Future of work



Jennifer Bacchus

Speech speed

168 words per minute

Speech length

930 words

Speech time

331 seconds

US opposes excessive regulation that could strangle AI innovation and will block authoritarian misuse while promoting pro-innovation policies

Explanation

Bacchus articulates the US position that regulatory approaches should foster rather than restrict AI development, viewing excessive regulation as potentially paralyzing a transformative technology. She also emphasizes US commitment to preventing authoritarian regimes from misusing AI while promoting American AI as the global standard.


Evidence

Concerns about foreign governments using policies to tighten screws on US tech companies; reports of authoritarian regimes stealing AI for military intelligence, surveillance, and propaganda; commitment to block such efforts and safeguard American AI technologies


Major discussion point

Governance and Regulatory Approaches


Topics

Data governance | Cyberconflict and warfare


Disagreed with

– Juha Heikkila

Disagreed on

Regulatory approach to AI development


AI should boost worker productivity, improve job quality, and create new roles like AI trainers and human-machine teaming managers

Explanation

Bacchus presents the US vision of AI as fundamentally supportive of workers rather than replacing them. She argues that AI will enhance productivity, create better working conditions, and generate entirely new categories of employment that didn’t previously exist.


Evidence

AI creating new roles like AI trainers, data analysts, and human machine teaming managers; generative AI leveling playing field for job access; AI helping with transparent hiring decisions; guarantee of American workers having a seat at the table for major AI policy decisions


Major discussion point

Future of Work and Economic Impact


Topics

Future of work


Agreed with

– Sandro Gianella
– Chris Yiu
– Ishita Barua

Agreed on

AI enhances rather than replaces human work through task-level automation



Junha Li

Speech speed

105 words per minute

Speech length

407 words

Speech time

231 seconds

AI is fundamentally altering how value is created and who benefits, extending beyond job displacement to affect judgment, coordination, and creativity

Explanation

Li argues that AI’s impact goes far beyond simple job replacement, fundamentally changing the nature of value creation in the economy. He emphasizes that AI is entering sectors requiring complex human skills like judgment and creativity, raising questions about who will benefit from this transformation.


Evidence

AI entering sectors including health, education, logistics, law, and finance, performing tasks requiring judgment, coordination, and creativity; potential for widespread job displacement, obsolete skills, and widening inequalities


Major discussion point

AI’s Current Impact and Transformation of Work


Topics

Future of work


Agreed with

– Tomas Norvoll
– Sandro Gianella
– Nthati Moorosi
– Ishita Barua

Agreed on

AI is already transforming current work and society, not just a future concern


International cooperation essential to ensure AI bridges rather than deepens global divides between developed and developing nations

Explanation

Li emphasizes that no single country can navigate AI transformation alone and calls for collective action to ensure AI serves as a tool for reducing rather than increasing global inequalities. He stresses the need for international cooperation in education, training, infrastructure, and governance.


Evidence

Need to build inclusive ecosystem from education and training to infrastructure and governance; focus on digital literacy for women, youth, and workers in informal economy; emphasis on transparency, accountability, and fairness in workplace


Major discussion point

Equity and Inclusion Concerns


Topics

Digital access | Capacity development | Future of work


Agreed with

– Tomas Norvoll
– Juha Heikkila
– Nthati Moorosi

Agreed on

Importance of international cooperation and governance frameworks for AI



Juha Heikkila

Speech speed

177 words per minute

Speech length

1167 words

Speech time

393 seconds

EU supports innovation-friendly, risk-based regulation that only intervenes where necessary to build trust for AI adoption

Explanation

Heikkila explains the EU’s approach to AI regulation as being supportive of innovation while addressing risks where intervention is necessary. He argues that trust is essential for AI adoption and that benefits can only materialize with proper uptake, which requires balanced regulation.


Evidence

EU AI Act as framework to put people first; innovation-friendly risk-based regulation; strong and increasing support for AI innovation in European Union


Major discussion point

Governance and Regulatory Approaches


Topics

Data governance | Future of work


Agreed with

– Tomas Norvoll
– Junha Li
– Nthati Moorosi

Agreed on

Importance of international cooperation and governance frameworks for AI


Disagreed with

– Jennifer Bacchus

Disagreed on

Regulatory approach to AI development


Jobs will be replaced, changed, and created simultaneously, with routine tasks most at risk but new opportunities emerging

Explanation

Heikkila acknowledges uncertainty about AI’s net effect on employment while noting that routine tasks face the highest risk of automation. He emphasizes that job creation often occurs in distributed ways while job losses may be more concentrated and visible, and that many jobs requiring dexterity remain difficult to automate.


Evidence

Studies showing different quantifications of AI impact; routine tasks at risk with generative AI proving powerful; jobs like folding garments, waiting tables, doing haircuts remaining complicated for robots; job creation being distributed while losses more concentrated


Major discussion point

Future of Work and Economic Impact


Topics

Future of work


Concerns about de-skilling as increased reliance on AI may cause people to lose essential capabilities

Explanation

Heikkila warns about the risk of de-skilling, where over-reliance on AI and automation could lead to loss of fundamental human capabilities. He uses the example of navigation skills declining with GPS use to illustrate how convenience can lead to dependency and skill atrophy.


Evidence

Personal example of remembering the time before navigators and the current generation not knowing how to read maps; concern about becoming heavily dependent on AI and lacking the required skills when the service is not available


Major discussion point

Risks and Challenges of AI Implementation


Topics

Future of work


AI literacy should be part of education, with focus on up-skilling workforce and using digital innovation hubs for transition support

Explanation

Heikkila advocates for integrating AI literacy into education systems and supporting workforce transitions through dedicated infrastructure. He emphasizes the importance of preparing people to work with AI systems and participate in the digital transition through accessible support networks.


Evidence

AI Continent Action Plan focusing on skills as one of five pillars; network of digital innovation hubs for people to participate in digital transition; working with governments and ecosystems in regions


Major discussion point

Education and Skills Development


Topics

Online education | Capacity development | Future of work



Chris Yiu

Speech speed

168 words per minute

Speech length

2149 words

Speech time

765 seconds

AI enables small businesses and entrepreneurs to compete with larger companies through accessible tools and platforms

Explanation

Yiu argues that AI serves as a leveling technology that allows smaller entities to access capabilities previously available only to large corporations. He emphasizes that Meta’s AI tools help small businesses compete more effectively and bring products to market in ways that weren’t previously possible.


Evidence

Small businesses using Meta’s platforms and AI tools to compete with larger businesses; AI helping individual entrepreneurs bring products to market; Meta’s commitment to open source and making AI accessible


Major discussion point

AI’s Current Impact and Transformation of Work


Topics

Future of work | Digital business models


Agreed with

– Sandro Gianella
– Jennifer Bacchus
– Ishita Barua

Agreed on

AI enhances rather than replaces human work through task-level automation


Open source AI development democratizes access and ensures technology isn’t controlled by few large corporations

Explanation

Yiu explains Meta’s commitment to open source AI development as a way to democratize access to powerful AI technologies. He argues that open source models prevent concentration of AI capabilities in the hands of a few large companies and enable global communities to benefit from and contribute to AI development.


Evidence

Llama models as open weights models downloaded over a billion times; developers able to download, deploy, and customize models; fine-tuning capabilities for local language, culture, and dialects; models designed to run on low-capability hardware


Major discussion point

Governance and Regulatory Approaches


Topics

Digital access | Future of work


Disagreed with

– Joseph Gordon-Levitt

Disagreed on

Role of private sector vs public sector in AI governance


AI can level educational playing field by providing personalized learning tools and reducing administrative burden for educators

Explanation

Yiu argues that AI can transform education by helping educators be more productive and providing students with personalized learning experiences. He emphasizes that AI can handle administrative tasks to free up educators for direct student interaction while offering customized educational content that was previously available only to a few.


Evidence

AI tools helping educators with administration and tasks when not face-to-face with students; opportunity for personalized learning environments instead of one-size-fits-all; AI models being fine-tuned for local language, culture, and dialects


Major discussion point

Education and Skills Development


Topics

Online education | Future of work


J

Jonathan Charles

Speech speed

182 words per minute

Speech length

1741 words

Speech time

571 seconds

Gen Z workers are faking productivity due to fear of AI job displacement, creating concerns about future talent development

Explanation

Charles highlights a concerning trend where younger workers are pretending to be busy because they fear their entry-level positions will be automated by AI. This creates a paradox where AI adoption may prevent junior staff from developing the professional expertise and judgment needed to advance in their careers.


Evidence

Gen Zers often fake looking busy, worried that their lower-level jobs will be replaced by AI; a large investment bank CEO worried about how younger staff will progress if they can’t build professional expertise when AI is taking over their roles


Major discussion point

Future of Work and Economic Impact


Topics

Future of work


Current AI transformation discussions mirror early internet governance challenges, suggesting similar collaborative solutions may be needed

Explanation

Charles draws parallels between current AI governance debates and the early days of internet governance forums from 20 years ago. He suggests that just as the internet community found ways to navigate early challenges through collaborative discussion, similar approaches may help address AI governance issues.


Evidence

Discussions remind him of early Internet Governance Forum conversations 20-odd years ago about internet challenges; observation that ‘somehow we found a way through that’ gives hope for AI challenges


Major discussion point

Governance and Regulatory Approaches


Topics

Data governance


PISA survey results show declining learning abilities since 2010, coinciding with major technological shifts

Explanation

Charles references recent educational assessment data indicating that human learning capabilities have been deteriorating since 2010. He implies a connection between this decline and the introduction of new technologies during that period, raising questions about technology’s impact on cognitive development.


Evidence

PISA survey showing ability to learn has been dropping off substantially since 2010, with implicit connection to technological changes that began around that time


Major discussion point

Education and Skills Development


Topics

Online education


Agreements

Agreement points

AI is already transforming current work and society, not just a future concern

Speakers

– Tomas Norvoll
– Sandro Gianella
– Nthati Moorosi
– Ishita Barua
– Junhua Li

Arguments

AI is already embedded in daily tools and transforming sectors like energy, healthcare, and agriculture – not just a future technology


AI offers transformative potential in pharma, scientific research, education, and climate work, with tools now accessible to small businesses and individuals


AI is helping tackle healthcare challenges like TB detection and agricultural issues through locally developed applications


AI is restoring healthcare by addressing care debt through scribes, diagnostics, and patient communication tools


AI is fundamentally altering how value is created and who benefits, extending beyond job displacement to affect judgment, coordination, and creativity


Summary

Multiple speakers agree that AI is not merely a future technology but is already actively transforming various sectors including healthcare, agriculture, energy, and education. They emphasize that AI’s impact is currently visible and measurable across different industries and geographical contexts.


Topics

Future of work | Sustainable development | Digital access


Need for equitable access to AI tools to prevent widening digital divides

Speakers

– Tomas Norvoll
– Sandro Gianella
– Nthati Moorosi
– Junhua Li

Arguments

Risk of widening digital divides if access to AI tools and training is not equitable across populations


Need for broad, equitable access to prevent widening gaps between those with and without access to AI technology


Digital divide excludes rural populations from AI innovations, and privacy concerns require robust data protection


International cooperation essential to ensure AI bridges rather than deepens global divides between developed and developing nations


Summary

There is strong consensus that without deliberate efforts to ensure equitable access, AI could exacerbate existing inequalities. Speakers agree that bridging the digital divide is crucial for AI to benefit all populations rather than creating new forms of exclusion.


Topics

Digital access | Future of work | Capacity development


AI enhances rather than replaces human work through task-level automation

Speakers

– Sandro Gianella
– Jennifer Bacchus
– Chris Yiu
– Ishita Barua

Arguments

AI primarily provides task-level automation rather than job-level replacement, enhancing productivity while creating new roles


AI should boost worker productivity, improve job quality, and create new roles like AI trainers and human-machine teaming managers


AI enables small businesses and entrepreneurs to compete with larger companies through accessible tools and platforms


AI is restoring healthcare by addressing care debt through scribes, diagnostics, and patient communication tools


Summary

Multiple speakers agree that AI functions primarily as a productivity enhancer that automates specific tasks rather than replacing entire jobs. They emphasize that AI creates new opportunities and roles while helping humans focus on higher-value activities.


Topics

Future of work


Importance of international cooperation and governance frameworks for AI

Speakers

– Tomas Norvoll
– Junhua Li
– Juha Heikkila
– Nthati Moorosi

Arguments

Need for international cooperation and frameworks that protect people while enabling innovation, as individual nations cannot address AI challenges alone


International cooperation essential to ensure AI bridges rather than deepens global divides between developed and developing nations


EU supports innovation-friendly, risk-based regulation that only intervenes where necessary to build trust for AI adoption


Inclusive growth requires whole-of-society engagement and international support for capacity-building in local contexts


Summary

There is broad agreement that AI governance requires international collaboration and coordinated frameworks. Speakers emphasize that no single nation can address AI challenges alone and that global cooperation is essential for equitable development.


Topics

Data governance | Capacity development | Future of work


Similar viewpoints

Both speakers emphasize the need for protective frameworks that ensure human rights and economic interests are safeguarded while allowing technological innovation. They share concern about protecting individual creators and workers from exploitation.

Speakers

– Joseph Gordon-Levitt
– Tomas Norvoll

Arguments

AI companies take people’s creative work without permission or compensation to train valuable models, threatening economic incentives for creativity


Need for international cooperation and frameworks that protect people while enabling innovation, as individual nations cannot address AI challenges alone


Topics

Intellectual property rights | Future of work | Data governance


Both speakers express concern about over-reliance on AI leading to loss of fundamental human skills and capabilities. They emphasize the importance of maintaining human expertise and critical thinking abilities.

Speakers

– Ishita Barua
– Juha Heikkila

Arguments

Quality education must value domain expertise and critical thinking, not just tool adoption, to prevent cognitive outsourcing


Concerns about de-skilling as increased reliance on AI may cause people to lose essential capabilities


Topics

Online education | Future of work


Both speakers from major tech companies advocate for democratizing AI access and preventing concentration of AI capabilities in the hands of a few large corporations. They emphasize making AI tools accessible to smaller entities and diverse populations.

Speakers

– Chris Yiu
– Sandro Gianella

Arguments

Open source AI development democratizes access and ensures technology isn’t controlled by a few large corporations


Need for broad, equitable access to prevent widening gaps between those with and without access to AI technology


Topics

Digital access | Future of work


Unexpected consensus

Need for protective regulation while supporting innovation

Speakers

– Jennifer Bacchus
– Juha Heikkila
– Tomas Norvoll

Arguments

US opposes excessive regulation that could strangle AI innovation and will block authoritarian misuse while promoting pro-innovation policies


EU supports innovation-friendly, risk-based regulation that only intervenes where necessary to build trust for AI adoption


Need for international cooperation and frameworks that protect people while enabling innovation, as individual nations cannot address AI challenges alone


Explanation

Despite representing different regulatory philosophies (US pro-deregulation vs EU regulatory approach), there is unexpected consensus on the need to balance innovation support with protective measures. All agree on preventing authoritarian misuse and supporting innovation while protecting people.


Topics

Data governance | Future of work


AI as democratizing technology for small businesses and individuals

Speakers

– Chris Yiu
– Sandro Gianella
– Tomas Norvoll

Arguments

AI enables small businesses and entrepreneurs to compete with larger companies through accessible tools and platforms


AI offers transformative potential in pharma, scientific research, education, and climate work, with tools now accessible to small businesses and individuals


AI can help governments maintain welfare states more efficiently, particularly in healthcare and education sectors facing demographic challenges


Explanation

There is unexpected consensus between tech company representatives and government officials that AI serves as a democratizing force that levels the playing field for smaller entities. This challenges common narratives about AI benefiting only large corporations.


Topics

Future of work | Digital business models | Digital access


Overall assessment

Summary

The discussion reveals strong consensus on several key areas: AI’s current transformative impact across sectors, the need for equitable access to prevent digital divides, AI’s role as a productivity enhancer rather than job replacer, and the necessity of international cooperation for governance. There is also agreement on the importance of education and skills development, though with some debate about the balance between tool adoption and maintaining human capabilities.


Consensus level

High level of consensus on fundamental principles with constructive disagreement on implementation approaches. The agreement spans different stakeholder groups (government, tech companies, civil society, international organizations) and geographical regions, suggesting a strong foundation for collaborative policy development. However, tensions remain around data ownership, compensation for human contributions to AI training, and the appropriate balance between regulation and innovation support.


Differences

Different viewpoints

Regulatory approach to AI development

Speakers

– Jennifer Bacchus
– Juha Heikkila

Arguments

US opposes excessive regulation that could strangle AI innovation and will block authoritarian misuse while promoting pro-innovation policies


EU supports innovation-friendly, risk-based regulation that only intervenes where necessary to build trust for AI adoption


Summary

The US advocates for minimal regulation and warns against policies that could restrict AI development, while the EU promotes structured risk-based regulation as necessary for building trust and enabling adoption. The US specifically criticizes European regulatory approaches as potentially harmful to innovation.


Topics

Data governance | Future of work


Compensation for data used in AI training

Speakers

– Joseph Gordon-Levitt
– Sandro Gianella

Arguments

AI companies take people’s creative work without permission or compensation to train valuable models, threatening economic incentives for creativity


AI primarily provides task-level automation rather than job-level replacement, enhancing productivity while creating new roles


Summary

Gordon-Levitt argues that AI companies are essentially stealing human creative work without compensation, undermining economic incentives for creativity. Gianella focuses on AI as a productivity enhancer that creates new opportunities rather than addressing the compensation issue directly.


Topics

Intellectual property rights | Future of work


Role of private sector vs public sector in AI governance

Speakers

– Joseph Gordon-Levitt
– Chris Yiu

Arguments

Digital work and data should belong to workers, with compensation systems needed to maintain economic incentives for human creativity


Open source AI development democratizes access and ensures technology isn’t controlled by a few large corporations


Summary

Gordon-Levitt argues that private companies cannot prioritize what’s good for the world due to shareholder obligations and calls for public-private partnership with rules. Yiu emphasizes that open source approaches by companies like Meta can democratize access without necessarily requiring extensive regulation.


Topics

Data governance | Future of work | Digital access


Unexpected differences

Educational approach to AI integration

Speakers

– Ishita Barua
– Sandro Gianella

Arguments

Quality education must value domain expertise and critical thinking, not just tool adoption, to prevent cognitive outsourcing


Combination of work and learning is intertwined, with AI tools helping people learn new skills throughout their careers


Explanation

This disagreement is unexpected because both speakers are generally supportive of AI, but Barua warns against over-reliance on AI tools in education while Gianella promotes AI as a learning enabler. Barua’s concern about cognitive outsourcing directly challenges the assumption that AI tools inherently improve learning.


Topics

Online education | Future of work


De-skilling concerns vs productivity benefits

Speakers

– Juha Heikkila
– Jennifer Bacchus

Arguments

Concerns about de-skilling as increased reliance on AI may cause people to lose essential capabilities


AI should boost worker productivity, improve job quality, and create new roles like AI trainers and human-machine teaming managers


Explanation

This disagreement is unexpected as both represent developed regions that generally support AI adoption, yet Heikkila raises fundamental concerns about human capability loss while Bacchus focuses purely on productivity gains. This suggests even AI-supportive stakeholders have different risk assessments.


Topics

Future of work


Overall assessment

Summary

The main areas of disagreement center on regulatory approaches (US vs EU perspectives), compensation for AI training data, the balance between private and public sector roles, and concerns about over-reliance on AI tools in education and work.


Disagreement level

Moderate disagreement with significant implications. While speakers generally agree on AI’s potential benefits, they fundamentally differ on governance approaches, economic models, and risk mitigation strategies. These disagreements could lead to fragmented global approaches to AI governance, potentially creating regulatory arbitrage and uneven development of AI benefits across regions.



Takeaways

Key takeaways

AI is already transforming work across sectors (healthcare, agriculture, education) rather than being a future technology, with both opportunities for productivity gains and risks of job displacement


A fundamental tension exists between AI companies using human-created data without consent/compensation and the need to maintain economic incentives for human creativity and work


International cooperation is essential for AI governance as no single nation can address the challenges alone, but approaches vary significantly (US favors minimal regulation, EU supports risk-based frameworks)


Equitable access to AI tools is critical to prevent widening digital divides, with particular concerns about rural populations, developing countries, and gender disparities in adoption


Education systems must balance AI literacy with preserving critical thinking and domain expertise to avoid ‘de-skilling’ and cognitive outsourcing


AI should serve as a productivity enhancer providing task-level rather than job-level automation, requiring human-AI collaboration rather than replacement


Open source AI development can democratize access and prevent concentration of power in a few large corporations


Quality AI implementation requires human-centered policies that align with cultural values and protect worker rights while enabling innovation


Resolutions and action items

Norway committed to investing 1 billion kroner to establish six national AI research centers to study societal impacts and strengthen innovation


OpenAI Academy launched to train policymakers and practitioners (1.4 million people trained since launch)


EU launched AI Continent Action Plan focusing on skills as one of five pillars, utilizing digital innovation hubs


Need for governments to guarantee workers ‘a seat at the table’ in major AI policy decisions


Establishment of frameworks for data ownership where ‘digital work belongs to workers’ with compensation systems


Investment in STEM education, AI scholarships, and large-scale workforce reskilling through apprenticeships and vocational programs


Unresolved issues

How to implement fair compensation systems for human data used in AI training without stifling innovation


Reconciling different regulatory approaches between US (minimal regulation) and EU (risk-based frameworks) for global cooperation


Determining optimal balance between AI assistance and preserving human skills to prevent de-skilling


Addressing the ‘care debt’ in healthcare and other sectors while ensuring AI enhances rather than replaces human judgment


Resolving tensions between open source AI development and protecting intellectual property rights


Establishing international standards for AI governance that respect cultural differences and local contexts


Quantifying the actual net impact of AI on job creation versus job displacement


Ensuring AI tools remain accessible to developing countries and rural populations without reliable internet infrastructure


Suggested compromises

Partnership model between private sector innovation and public sector regulation rather than viewing them as opposing forces


Human-AI collaboration approach where AI provides task-level automation while humans retain decision-making and creative roles


Gradual implementation of AI in sensitive sectors like healthcare and education with strong human oversight


Open source AI development combined with safety frameworks and ethical guidelines


Flexible regulatory approaches that can adapt to local contexts while maintaining international cooperation


Investment in both AI tool development and human skill preservation through education reform


Compensation systems for data use that allow innovation while providing economic incentives for human creativity


Thought provoking comments

The sleight of hand that’s going on in that statement, though, is the idea that the AI is generating all this economic value, when in fact there is no economic value without all the human contributions that were hoovered up into these machine learning models… your digital self, and in the context of this panel, your digital work belongs to you.

Speaker

Joseph Gordon-Levitt


Reason

This comment fundamentally reframes the AI debate by exposing the hidden dependency of AI systems on human-created data and challenging the narrative that AI creates value independently. It introduces the crucial concept of data ownership and compensation, moving beyond surface-level discussions of job displacement to address the foundational economic structure of AI development.


Impact

This comment created a significant shift in the discussion, forcing other panelists to address the ethics of data usage and compensation. It directly challenged the tech industry representatives and led to more nuanced discussions about the relationship between innovation and worker rights. The comment elevated the conversation from technical capabilities to fundamental questions of economic justice.


I write to think. I don’t truly understand something until I’ve worked through it on page, failed and revised it, and clarified it… scribo, ergo cogito, ergo sum. I write, therefore I think, and therefore I am.

Speaker

Ishita Barua


Reason

This philosophical insight challenges the assumption that AI-assisted productivity is inherently beneficial by highlighting the cognitive processes that may be lost when we outsource thinking tasks to AI. It introduces the concept of ‘cognitive outsourcing’ and questions whether efficiency gains come at the cost of human intellectual development.


Impact

This comment shifted the education discussion from access and tools to the fundamental nature of learning and cognition. It prompted deeper reflection on what constitutes quality education in an AI age and influenced subsequent speakers to consider the balance between AI assistance and human skill development. The philosophical framing elevated the discussion beyond practical considerations to existential questions about human agency.


With respect, Meta cannot prioritize what’s good for the world. It’s not built to do that. It’s a for-profit company, and it has to prioritize value for its shareholders… This is a false dichotomy, this contrast to say that innovation is the opposite of rules.

Speaker

Joseph Gordon-Levitt


Reason

This comment cuts through corporate rhetoric to address the structural limitations of relying on private companies for public good. It challenges the prevalent narrative that market forces alone will ensure beneficial AI development and argues for the necessity of public-private partnership with appropriate regulation.


Impact

This final comment served as a powerful counterpoint to the tech industry’s self-regulation narrative presented throughout the panel. It reframed the innovation vs. regulation debate as a false choice and emphasized the essential role of government oversight, providing a strong conclusion that challenged participants and audience to think beyond market-driven solutions.


AI is just as much at work today as it is about the future, because AI is already here… When the Sumerians invented the wheel, surely there was someone who worried that it could have negative consequences for those who were used to carrying things on their back.

Speaker

Tomas Norvoll


Reason

This comment effectively reframes the entire discussion by challenging the premise that AI is a future concern, establishing it as a present reality. The historical analogy provides valuable perspective on technological transitions while acknowledging both optimism and legitimate concerns about change.


Impact

This opening comment set a pragmatic, historically informed tone for the entire discussion. It moved the conversation away from speculative future scenarios to concrete present-day applications and challenges, influencing subsequent speakers to focus on current implementations and immediate policy needs rather than abstract possibilities.


There are two Nordic studies that show women are adopting tools like ChatGPT more slowly than men, not because of a lack of competence, but due to differences in digital confidence… women in high-tech industries, they are more susceptible to AI-driven change… three times more exposed to automation risks.

Speaker

Ishita Barua


Reason

This comment introduces critical gender analysis that had been largely absent from the discussion, providing concrete data about differential impacts of AI adoption. It challenges the assumption that AI benefits will be equally distributed and highlights how existing inequalities may be amplified by new technologies.


Impact

This comment added a crucial dimension of analysis that influenced the final discussions about inclusive growth and equitable access. It demonstrated how seemingly neutral technology can have gendered impacts and prompted consideration of how AI policies must explicitly address structural inequalities rather than assuming universal benefits.


Overall assessment

These key comments fundamentally shaped the discussion by introducing critical tensions between innovation rhetoric and social reality. Joseph Gordon-Levitt’s interventions consistently challenged tech industry narratives about value creation and self-regulation, forcing a more honest examination of power dynamics and economic structures. Ishita Barua’s contributions elevated the conversation beyond technical capabilities to examine cognitive, educational, and social implications, particularly around gender equity. Tomas Norvoll’s opening grounded the discussion in present realities rather than future speculation. Together, these comments prevented the discussion from becoming a simple celebration of AI capabilities and instead created a nuanced examination of the complex tradeoffs, power dynamics, and policy challenges involved in AI’s integration into work and society. The interplay between these critical voices and industry representatives created a more substantive and realistic dialogue about the future of work in an AI-enabled world.


Follow-up questions

How can we establish international frameworks and regulations to protect people’s data ownership and ensure they are compensated when their digital work is used to train AI models?

Speaker

Joseph Gordon-Levitt and Tomas Norvoll


Explanation

This addresses the fundamental issue of data ownership and fair compensation for creators whose work is used to train AI systems, requiring global cooperation to establish effective frameworks.


How do we prevent AI from causing de-skilling and cognitive outsourcing while maintaining the benefits of AI assistance?

Speaker

Juha Heikkila and Ishita Barua


Explanation

This explores the balance between leveraging AI tools for productivity while ensuring humans maintain essential skills and critical thinking abilities.


What specific mechanisms can ensure equitable access to AI tools across different socioeconomic levels and geographic regions?

Speaker

Multiple speakers including Nthati Moorosi, Tomas Norvoll, and Sandro Gianella


Explanation

This addresses the digital divide concerns and the need for inclusive AI deployment strategies to prevent widening inequalities.


How can we develop gender-responsive AI strategies to address the disproportionate impact on women in the workforce?

Speaker

Ishita Barua


Explanation

This highlights the need for research into gender-specific impacts of AI adoption and targeted interventions to ensure equitable outcomes.


What are the long-term effects of AI on learning and cognitive development, particularly given declining learning abilities since 2010?

Speaker

Jonathan Charles (moderator) referencing PISA survey data


Explanation

This requires investigation into whether AI tools are contributing to cognitive decline and how educational systems should adapt.


How can we quantify and better understand the net impact of AI on job creation versus job displacement?

Speaker

Juha Heikkila


Explanation

Current studies show varying results, and more comprehensive research is needed to understand the actual employment effects of AI adoption.


What models of public-private partnership can effectively balance innovation with worker protection and fair compensation?

Speaker

Joseph Gordon-Levitt


Explanation

This explores how to structure collaboration between private companies and public sector to ensure AI benefits are broadly shared while maintaining innovation incentives.


How can AI tools be designed to be culturally relevant and linguistically appropriate for diverse global communities?

Speaker

Nthati Moorosi and Chris Yiu


Explanation

This addresses the need for AI systems that respect local cultures, languages, and contexts rather than imposing uniform solutions.


What are the most effective methods for large-scale workforce reskilling and how can they be implemented globally?

Speaker

Multiple speakers including Sandro Gianella and Jennifer Bacchus


Explanation

This requires research into scalable training programs and educational approaches to prepare workers for AI-augmented roles.


How can we ensure AI systems in healthcare maintain equity and don’t hard-code existing inequalities into future care delivery?

Speaker

Ishita Barua


Explanation

This addresses the critical need to prevent AI from perpetuating or amplifying healthcare disparities while maximizing its benefits.


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.