Town Hall: Dilemmas around Knowledge
20 Jan 2026 12:00h - 12:45h
Session at a glance
Summary
This town hall discussion at the World Economic Forum explored the dilemmas around knowledge in the age of AI, featuring Cambridge Vice-Chancellor Deborah Prentice, Cohere CEO Aidan Gomez, and Udemy CEO Hugo Sarrazin. The panelists examined what becomes the scarcest resource when AI provides instant answers, with audience polling revealing that critical thinking and sustained attention were the top concerns, followed by trust and deep mastery.
Gomez emphasized that AI creates a false sense of mastery, arguing that deep understanding becomes most at risk when students can easily obtain surface-level answers without truly comprehending complex subjects. He stressed the importance of testing humans without AI tools to assess genuine knowledge retention. Sarrazin described AI as creating “millions of polymaths” through democratized access to knowledge, but warned about the poverty of attention that accompanies this wealth of information. He advocated for AI-powered personalized learning that can adapt to individual learning styles and provide real-time feedback.
The discussion revealed how AI is transforming education from traditional online learning models to adaptive, in-the-flow learning experiences. Sarrazin explained Udemy’s pivot toward becoming an AI platform for workforce reskilling, emphasizing the need to measure learning ROI beyond simple course completion metrics. The panelists agreed that humans must focus on developing skills AI cannot replicate: asking the right questions, critical thinking, and quality assurance of AI-generated content.
A key concern emerged about the future of expertise and authority when AI provides answers without showing its work or explaining reasoning. The panelists discussed the importance of explainable AI and the need for specialized, trusted models rather than general-purpose systems trained on all available data. They concluded that while AI will excel at the middle phase of problem-solving, humans must maintain competitive advantage in problem identification and solution validation to preserve agency in an AI-dominated world.
Key Points
Major Discussion Points:
– The scarcest resource in an AI-driven world: The panel explored what becomes most valuable when AI provides instant answers – with audience polling showing critical thinking and sustained attention as top concerns, while panelists emphasized deep mastery and self-knowledge as key risks.
– Evolution from traditional online learning to AI-powered education: Discussion of how education has progressed from MOOCs to personalized AI tutoring that can adapt to individual learning styles, provide real-time feedback, and potentially achieve the “two sigma” improvement of one-on-one coaching at scale.
– The changing role of humans in AI-enhanced education: Debate over where human intervention remains essential, with emphasis on humans as customers/learners who need to develop skills in asking the right questions and critically evaluating AI-generated answers, while teachers remain important for motivation and storytelling.
– Testing and assessment challenges: Examination of how to evaluate learning when AI can easily complete assignments, leading to discussions about testing “with or without the calculator” and the need for new methods to detect AI-generated work and measure genuine human understanding.
– The future of expertise and institutional authority: Questions about what happens to traditional sources of knowledge and authority (like university libraries) when AI becomes a “polymath by design,” and whether the traditional university model needs to be “unbundled” given changing economic and technological realities.
Overall Purpose:
This World Economic Forum town hall aimed to examine the fundamental challenges and opportunities that AI presents to education and knowledge systems, bringing together perspectives from traditional academia (Cambridge University) and AI/EdTech companies (Cohere and Udemy) to explore how learning, teaching, and credentialing might evolve.
Overall Tone:
The discussion maintained a thoughtful, exploratory tone throughout, with panelists acknowledging both the promise and perils of AI in education. While there were moments of concern about potential negative impacts (loss of deep learning, attention spans, critical thinking), the overall atmosphere remained optimistic about AI’s potential to democratize and personalize education. The tone was collaborative rather than adversarial, with panelists from different sectors finding common ground while respectfully noting their different perspectives and constraints.
Speakers
– Deborah Prentice: Vice-Chancellor of the University of Cambridge, Professor, from the not-for-profit education sector
– Aidan Gomez: Co-founder and Chief Executive Officer of Cohere, an enterprise AI company developing advanced language models for business use
– Hugo Sarrazin: President and Chief Executive Officer of Udemy, which provides business and leadership development courses including AI courses to businesses and organizations worldwide in various fields such as financial services, higher education, government, manufacturing and technology
– Audience: Multiple audience members who asked questions during the town hall discussion
Additional speakers:
– Anna Van Eels: Director of the Levium Trust
– Nathaniel: Runs an education company in Australia
– Pranjal Sharma: Author and analyst from India
– Kian: CEO of an AI company called Workera
Full session report
Knowledge in the Age of AI: World Economic Forum Town Hall Discussion
Executive Summary
This World Economic Forum town hall discussion examined how artificial intelligence is reshaping knowledge systems and education. The panel featured Deborah Prentice, Vice-Chancellor of the University of Cambridge; Aidan Gomez, Co-founder and CEO of Cohere; and Hugo Sarrazin, President and CEO of Udemy.
The central question explored what becomes the scarcest resource when AI can provide instant answers to virtually any query. Through audience polling, participants identified critical thinking (30%) and sustained attention (25%) as their primary concerns, followed by deep mastery (20%), self-knowledge (15%), and curiosity (10%). The discussion revealed how AI is transforming educational paradigms while raising fundamental questions about human agency, expertise development, and the future of learning institutions.
The Scarcest Resource: Divergent Expert Perspectives
The panelists offered different views on AI’s primary educational risks. Aidan Gomez emphasized the threat to deep understanding: “LLMs can fool you into thinking that you understand something when you don’t, and I view that as a core risk as we integrate these LLMs into an education environment—this false sense of mastery or understanding.”
Deborah Prentice focused on self-knowledge, arguing that students traditionally relied on difficulty and struggle as internal cues for learning progress. She rejected all five polling options, stating: “I want to add a sixth one, which is self-knowledge… knowing what you know and what you don’t know, knowing what you’re interested in and what you’re not interested in.”
Hugo Sarrazin identified attention as the critical scarcity, referencing Herbert Simon’s observation about “information wealth creating poverty of attention.” He emphasized broader concerns about human agency: “We cannot as the human race give up that ability to influence… if as a society, we begin to rely on this thing that is super facile, that gives us an answer and we don’t have the questioning and we don’t kind of do the checking and the validating, we lose agency on important decisions.”
AI’s Educational Evolution: From Content Delivery to Personalized Learning
Hugo Sarrazin described the transformation from traditional online learning to AI-powered personalization. He explained how Udemy, with 250,000 courses and 80 million learners, is evolving from a course-completion model to measuring actual learning outcomes and return on investment.
Sarrazin highlighted AI’s potential to solve the “two sigma problem”—the finding that students with one-on-one tutoring perform two standard deviations better than those in traditional classrooms. He shared a personal example about how AI could help replicate effective teaching styles: his physics teacher’s engaging approach versus his chemistry teacher’s dry delivery, suggesting AI could help scale the better pedagogical approach.
The discussion included practical AI applications Udemy is implementing, such as AI role-play for sales training and call center onboarding, representing what Sarrazin called “in-the-flow learning” that adapts to immediate workplace needs.
Redefining Human Roles in AI-Enhanced Learning
The panelists agreed on a framework positioning humans at the beginning and end of learning processes. Hugo Sarrazin articulated this clearly: humans maintain competitive advantage in asking the right questions (front-end) and providing quality assurance of AI outputs (back-end), while AI excels at information retrieval and synthesis (middle process).
Aidan Gomez advocated for treating AI like a calculator—useful for learning but requiring separate testing without AI assistance to verify genuine understanding. He suggested that exceptional teachers could potentially extend their influence across multiple subjects through AI assistance.
An audience member raised a critical concern about generational knowledge transfer: “My juniors, they were not able to judge because they don’t have the experience. But to some extent, I could fire them because I don’t need them anymore because of these AI technologies. But maybe there will be a gap… who will be the future able to do this critical thinking on what AI is doing?”
Assessment Challenges and Institutional Futures
The integration of AI creates unprecedented assessment challenges. Traditional evaluation methods become inadequate when AI can produce sophisticated analyses and solutions. Aidan Gomez emphasized the need for clear distinctions between learning with AI and testing without it, similar to mathematics education where students learn with calculators but are tested on manual calculations.
Regarding institutional futures, the panelists showed different perspectives. Hugo Sarrazin suggested that economic pressures might justify “unbundling” traditional university functions of learning, accreditation, and social experience. Deborah Prentice defended universities’ integrated value, arguing they provide essential functions beyond knowledge transfer, including critical thinking development and intellectual community.
Trust and Transparency in AI Systems
Hugo Sarrazin emphasized concerns about AI’s “black box” nature and its impact on trust and decision-making. Aidan Gomez, however, offered grounds for optimism about developing transparency, noting that new reasoning models now show their internal thought processes, and that retrieval-augmented generation allows for better auditability and source citation.
Gomez explained that reasoning models can now show their “internal monologue,” making their decision-making processes more transparent and auditable. This development could address educational concerns about understanding how AI reaches conclusions.
Global Perspectives and Practical Applications
The discussion included international viewpoints, with audience members from India and Australia contributing insights about AI’s varying impact across different educational systems and cultural contexts. Questions arose about mismatches between what educational institutions offer and rapidly changing job market demands.
Hugo Sarrazin mentioned Alpha School as an example of innovative educational approaches, and the conversation touched on practical policy questions, including social media bans in schools and classroom AI policies.
Key Unresolved Challenges
Several critical issues remain unaddressed:
1. Expertise Transfer Gap: How to maintain pathways for developing human expertise when AI can replace entry-level positions where such expertise is traditionally gained.
2. Assessment Evolution: Developing new methodologies to meaningfully test human understanding in an AI-augmented world.
3. Institutional Adaptation: Balancing preservation of valuable traditional educational functions with pressure for more efficient, AI-enabled alternatives.
4. Quality Assurance: Ensuring AI democratization of education maintains standards while improving accessibility.
Conclusion
The discussion revealed that successfully integrating AI into education requires careful balance between leveraging technological capabilities and preserving essential human competencies. The panelists agreed that AI should enhance rather than replace human intelligence, with humans maintaining critical roles in questioning, critical evaluation, and quality assurance.
While AI promises to democratize access to personalized, high-quality education, significant challenges remain in maintaining educational standards, ensuring meaningful assessment, and preserving human agency in learning. The conversation emphasized that the future of education will be determined not by AI technology itself, but by thoughtful decisions about how to integrate these tools while developing uniquely human capabilities.
The stakes extend beyond individual learning outcomes to broader questions about human autonomy and critical thinking in an increasingly AI-influenced world. As Hugo Sarrazin concluded, maintaining human agency in questioning and validation remains essential for democratic participation and informed decision-making.
Session transcript
Good afternoon everyone and thank you for joining this town hall discussion where we will be talking about a topic that university and education leaders are all buzzing about which is namely dilemmas around knowledge.
This has been a topic for us since schools were first invented, libraries were first invented and it’s still with us today. It’s extremely relevant today in an age in which AI is changing, is making knowledge available broadly to everybody all the time, but it doesn’t mean that there aren’t still dilemmas around knowledge and we’re going to probe these today.
I’m Professor Deborah Prentice and I’m the Vice-Chancellor of the University of Cambridge. I’m very pleased to introduce you to our panellists for this session. So we have Aidan Gomez who is the co-founder and Chief Executive Officer of Cohere, an enterprise AI company developing advanced language models for use by business and we also welcome Hugo Sarrazin who is President and Chief Executive Officer of Udemy which provides a wide range of business and leadership development courses including AI courses to businesses and organisations around the world in fields such as financial services, higher education, government, manufacturing and technology.
We have some fascinating questions to discuss this afternoon around knowledge, misinformation, AI, attention spans and even the nature of expertise. We’re going to bring the audience in early and often so I hope that you’ll all participate with us. We as panellists come from very different perspectives.
Aidan and Hugo run very successful businesses selling a product. They are from the for-profit educational technology sector and I’m from the not-for-profit sector so there are different pressures, different opportunities, different challenges that we face in this space.
Before we get started with our panel discussion, I’d like to remind the online audience that if you are sharing with us through your social channels, you should use the hashtag #WEF26. Whether you’re joining online today or here in person, and it’s great to see so many of you here, thank you so much for coming.
Please feel free to get involved in the session by reacting to the questions we discuss in our conversation and also by submitting questions to panelists via the Slido app. Okay? Okay, so our first question is, in a world of instant answers and AI assistance, what is becoming the scarcest resource?
Okay, the answers are from a list of options. Is it sustained human attention, independent judgment and critical thinking, deep understanding and mastery, motivation to learn in the first place, or trust in what we know and who to believe? And actually I said or, that could be and.
You can choose as many of these as you as you want. Okay? So you can see on the screen, actually, as people are responding via the Slido app, but I want to ask our panelists.
What would you say? So you can see the answers on the screen. What would you say, Hugo?
Well, I think it’s a complicated question and I think there’s a lot of all of the above. If you take a historical perspective, knowledge was scarce. That was a source of power.
You know, countries fought for that. And we also had experts that built knowledge over time, but very few polymaths. Very few.
Those few were very, very important. Now today you have LLMs that can learn everything, and they can learn across different domains, and they can become the polymath. So every data center, every time we say there’s new infrastructure being added, we’re adding millions and millions of polymaths.
And that becomes a democratization of that knowledge. The problem is, and you know, there’s an amazing quote from Herbert Simon: when you have a wealth of information, you have a poverty of attention. And I think that’s what’s happening for a lot of learners, and that’s why traditional methods need to change. We’re going to come up and talk, I’m sure, about how learning needs to evolve, what the process is, what the role of traditional institutions is and how it’s changing, what role corporations need to play and what individuals need to do.
So I think attention is one big component. The second is a lot of when you go to LLM and AI and you ask for a question, it will give you an answer. It will feel very comfortable with that answer.
It doesn’t explain. Explainability in AI is a whole field, a whole domain and most of these LLMs don’t give you that. So if you have a society that begins to rely on products that give you an answer but don’t tell you where that answer came from, how do you learn and what do you have in terms of trust?
So I think the trust piece is also equally important. So I’ll stop at that we can go well further.
Yeah, I was looking at the poll up there, and for whatever reason the first one that came to me was deep mastery, which seems to be the most unpopular choice. So I think, you know, when you exist in a world where it’s so fast and easy to get answers to whatever question you might have, or to get a very surface-level answer to even a complex question, like how does quantum mechanics work?
It’ll give you a four-paragraph response, but that’s not deep understanding of the subject matter. And so I think LLMs, chatbots, can fool you into thinking that you understand something when you don’t, and I view that as a core risk as we integrate these LLMs into an education environment: this false sense of mastery or understanding.
You know, we can discuss the different solutions to that. I think that testing is essential to it. The idea is that you need to take away the tool and see what the human alone understands and has retained. The ability to assess depth requires taking away those tools. And I think that is, from my perspective, what’s most at risk.
Yeah, it’s interesting. You know, my answer is a variant on yours. I wanted to, I, of course, wanted to reject all five.
But I think it’s because of where I come from, coming from the university sector. I mean, I want to say self-knowledge for the learner, right? And it’s part of what you’re saying.
I mean, you don’t know if you’ve mastered it and you don’t know if you’re interested in it and you don’t know if you get it, right? So much of what you learn comes from what is difficult and what is compelling. So for those cues to no longer be useful cues for self-understanding means, how will you even know?
But that’s my answer anyway. So we can see what the, whoops, it went away. I think critical thinking was the one that won out at the end.
It looked like critical thinking was actually the audience preferred. We can keep coming back to this. But I want to use this as a jumping, oh, there we go, okay, yeah.
Critical thinking and then sustained attention. They were neck and neck for the most of the time, yeah. And then trust and then deep, deep mastery, right, it’s interesting.
So I want to talk a little bit about each of what you do. So we can start with you, Hugo, you know, tell us about Udemy.
So Udemy is a 15-year-old company that at the time did a pretty cool thing around introducing online learning. There was a great innovation to change accessibility and the cost of reaching out to millions and millions of people and created a creator economy around that. So we now have 250,000 courses.
80 million learners on a regular basis. We serve 17,000 large enterprises. We have 85,000 instructors that kind of come to this marketplace to offer their wares.
They’re very deeply committed. They know stuff and they want to, you know, share it with the world. And, you know, about 40% of our revenues are in the U.S.
The rest is around the world. So we’re in tons of languages, 46 plus. And the funny story, I’ve only been in the role for less than a year.
When I came in my first town hall and the people who may be listening online who were on that town hall, I came in and I said, we’re going to exit online learning. That is a wonderful innovation. It did a bunch of great things, but it doesn’t solve the problem of today.
And with AI, we can do so many different things. So I want to make a hard pivot of the business toward becoming an AI platform to reskill the workforce of the future. And we can talk about that.
And I couldn’t, I don’t want to take too much time, but there’s a lot of, you know, ways you can use AI to do some of the things Aidan, you were suggesting to kind of help build the mastery, how to do assessment using AI, how to use AI role play to immerse people.
And it also does the thing that I think is so, so important. Traditional online learning, and actually traditional learning: you’re an instructor and you teach to the average, right? You create your curriculum and you think you’re going to hit most of the people; you can’t cater for the super fast, you can’t cater for the super slow.
And the same on online learning. And then different people have different starting points. And we don’t have an easy way to accommodate that.
Now with AI, you can do a quick assessment, you can break apart the class, you can have feedback loop and reinforce that in a very, very powerful way. And I think that’s one of the things that’s going to emerge of using AI to kind of reskill the workforce is going to build on that previous generation of online learning to do something pretty remarkable and quite different moving forward.
Thank you. Aidan?
Yes, so Cohere builds large language models, so we’re one of the developers of this core piece of technology that powers things like ChatGPT and all these different applications. We’re focused purely on the enterprise side of the house, and so we work with businesses to put those models to work inside the organization. We give them access to internal data and systems that the humans have access to.
And then we teach or we work with our customer to teach the workforce to shift their role from being the ones individually doing the work to managing a team of these models or agents to carry out that work.
Our big differentiator is on the security side, so there’s no data exiting our customer’s perimeter. Instead, we send all of our models and software to them, and they keep it self-contained. Yeah, so you have certain customers who will only subscribe to you, right?
Yeah, certainly critical industries, financial services, telco, healthcare, and then of course government applications as well. Anything that’s a national security concern, and arguably education is within that remit, that’s a place that we do extremely well.
That’s interesting. So, Hugo, what can we learn from the arc of progress from MOOCs and online education to now AI-driven?
I think a few things. The first one is, you know, if you look at the traditional learning processes and methods that we had, there was a void. And that’s why online learning took off, and that’s why there’s a whole industry.
And it addressed a bunch of problems around, you know, getting to skills, specific skills. and also getting to certification and then helping organization reskill. So that was a very, very, very powerful thing.
What is now becoming a lot more of a priority, and in the last six months I spent an enormous amount of time on this, I spoke to 400 CHROs and Heads of Learning and Development in large enterprises. The pattern that I saw is they had an enormous proliferation of tools and things that were born in the COVID era. Very few could explain the ROI.
How do you measure the ROI of learning? It’s a really good question. And everybody kind of defaulted to, did they take the class?
Did they complete the class? Hours of learning? And as a business leader, it’s not particularly helpful.
And it gets even worse. When they get certification in Google Cloud or AWS or cyber something, to know that you’ve certified yourself two years ago, I’m a business leader, I want to know, are you current? Are you relevant today?
So I think the arc now is moving in the enterprise to an ability to do in the flow of work learning, do it at bite size, do it in an adaptive way, and we can come back to what adaptive means, and with an ROI, an ability to measure what skills people are deploying in real time.
So you’re now beginning to create a workforce management tool that is powered by a learning operating system.
So, Aidan, you said that you were not as worried about sustained human attention as you were some of the others. How does Cohere solve the attention problem?
Well, I mean, I don’t know if Cohere solves the attention problem. I think it’s definitely a concern. There’s lots of pressures on our attention span.
I think social media, short-form content, driving a lot of that. I’m certainly on the receiving end of that, you know, after 30 seconds because of TikTok, my attention span ends and I need to talk about something else. And also just the way that we do business now, in these short 30-minute meetings where you completely swap context.
And so I think those are difficult challenges not related to AI that are still applying pressure on human attention span. But it has a pretty strong consequence on how people learn and how students can learn when they’re constantly being distracted, when they struggle to sit with material over time. I think AI can perhaps assist in resolving that by its ability to personalize the experience to the individual and engage them more effectively.
And so if you have a generic education offering which, you know, bores some part of the population and excites the other, you’re underserving that population that gets bored. But if we can have a very targeted, scalable approach, giving each individual something that’s engaging and exciting, if they are auditory learners or visual learners, we can tailor it to them and hopefully keep their attention better than we otherwise would.
So AI might be part of the solution as opposed to the source of the problem.
Hugo, does your vision of AI comport with that?
It completely matches. And I think, you know, there’s a well-known piece of research from the 80s from a University of Chicago professor. It’s the Bloom Two Sigma problem.
And they did some research where they looked at the ability to learn with one-on-one coaching. It was two sigma higher than the classroom. But the economics of doing that was not there.
That’s why we have these big classrooms, and that’s why there are bigger classrooms for first years, and that’s why it doesn’t deliver the same learning experience. Now, to Aidan’s point, with AI, you can personalize the experience. You can adapt it and you can create feedback loops that a professor cannot today, you know, you’ve got 40 students, you cannot pick up who’s, you know, not easily, some teachers are amazing and they have the ability to do incredible things, but now you have the ability to have that feedback.
So I think, you know, we’re going to see a lot of AI expert tutors and coaches that will have context and that will have been trained on a body of knowledge that is hopefully trusted, hopefully accurate, and will help, you know, in the way that you like to learn.
So if you’re an auditory learner, we’re going to give it to you that way. And if you’re a visual, we’ll give it to you that way. I think that’s a really exciting and promising world we’re entering from that point of view.
So we’re going to go to questions from the audience in just a second. So start thinking about your question. I’m just going to ask one more question of our panelists, myself, which is where do humans fit in, in this brave new world of AI-based education?
I think all of us who are educators know that at some point we need human intervention in the process, even with the most fabulous technology. Where do you think they need to come in?
I think they’re the customer, right? So they’re the ones that we’re serving with this technology. And so we need to create the best possible product for them.
If we just do surface-level education that’s very confirmatory, oh, yeah, you’ve got it, great, you know, a bit sycophantic, then they won’t be effective in the real world when they actually enter the job market.
And so there’s a burden on us as product creators to create the most effective product, to teach people skills and give them knowledge. And I think that AI is actually an incredibly effective tool towards that.
But I do still believe that it’s a tool. It’s like a calculator. It’s something that you can lean on to give you faster answers, more thorough answers.
But we still need to ground ourselves in the human without the tool. And so testing, which has always been important of course, becomes absolutely critical now because you can fake your way through an education system much more easily. And so having very strict testing regimens is going to be essential.
Yeah. I have a variation on this. I do think the teachers, the instructors are part of the, partly the customers, but I do think they’re, you know, they need to be in the loop.
There’s amazing storytellers. They have a way. I mean, if I ask anybody in this room, who was your favorite teacher in high school?
And I paused for five seconds. There’s somebody in your mind right now. What was special about that person?
And you cannot replicate that, but you can augment that. You can make that person now be able to maybe teach you on something that they were not that, you know, maybe like my favorite teacher in high school was a physics teacher. I love the way he presented.
I love the way he engaged. And it was so motivating. My chemistry teacher was not that, but now I can augment, you know, with AI and have the voice, not just the voice, but the way he thought, the way he presented the information be applied to a different topic.
And I think that gets pretty, pretty exciting as well. You finally understand chemistry. I may finally understand chemistry.
I stayed away from chemistry because of that, but physics I love.
Okay. I want to, I want to open up to questions from the audience. So I will call on you the old fashioned way.
If you raise your hand. Oh, you have to, sorry, you have to speak.
Thank you. Anna Van Eels, director of the Levium Trust. I guess learning is a bit like working out: it’s got to hurt to be effective.
How do you think AI-enabled tech of various kinds can help with that motivation issue? You’ve talked about the teacher being the one that absolutely motivates, but a lot of the systems we’re talking about in the workplace, etc., you’re not going to have that human in the loop. So can we do things with AI and tech that could prompt that?
Yeah, I’m going to offer a few suggestions. And this is not like future, this exists today. So you can do AI role play in a way that makes you go through the learning process.
And I’m going to use business example. So if you’re a new salesperson and you have a new product that you need to sell, you can load up the specs of that product into an AI role play and practice selling to a person. And there will be a rubric against which we’re going to score you.
And we’re going to discover whether or not you are competent at selling this product that you’re responsible for. So that’s a business example. I can do the same thing in a call center.
One of our customers is among the largest call center outsourcers. They need to onboard 20,000 call center agents every month. That is incredibly complicated.
But now you can load the most common error causes, the most common tickets, the product specs, and instead of taking three weeks to onboard somebody through the process of learning, of experimenting, you can do a role play and accelerate that learning with a lot of practice.
So it’s simulation. That’s one powerful example. I think the other one is that AI can give you feedback and monitor the progress you’re making, in the way that we can bring you back to that point in the gym where you’re struggling with whatever exercise you’re doing.
We’re going to make you do that exercise more and more and get that repetition in, in a way that closes the gap that you have.
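The role-play-with-rubric pattern described above can be sketched in a few lines. This is an editorial illustration, not Udemy’s actual system: the rubric criteria are invented, and the keyword-matching judge is a toy stand-in for what would in practice be an LLM prompted with the transcript and each rubric question.

```python
# Sketch of rubric-scored role-play assessment (illustrative assumptions).

RUBRIC = {
    "asked_about_needs": "need",       # did the rep probe the customer's needs?
    "referenced_specs": "spec",        # did the rep draw on the product specs?
    "handled_objection": "objection",  # did the rep address an objection?
}

def judge(transcript: str, keyword: str) -> bool:
    # Toy stand-in for an LLM judge: a real system would prompt a model
    # with the transcript and rubric question and parse its verdict.
    return keyword in transcript.lower()

def score_roleplay(transcript: str) -> dict:
    # Score the practice conversation against each rubric criterion.
    return {criterion: judge(transcript, kw) for criterion, kw in RUBRIC.items()}

demo = ("I asked about your needs, walked through the spec sheet, "
        "and addressed your objection on price.")
print(score_roleplay(demo))
# → {'asked_about_needs': True, 'referenced_specs': True, 'handled_objection': True}
```

The scores per criterion, rather than a single pass/fail, are what allow the system to route the learner back to the specific gap, the "exercise you’re struggling with" in the gym analogy.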
Hi, I’m Nathaniel. I run an education company in Australia. Now, as a region, Australia has an interesting relationship with technology.
As many of you may know, we’ve just recently had a social media ban for young people under 16. And in a similar vein, we don’t really have a good consensus around the role of AI in classrooms. So my question is, what do you believe the role is for AI in physical classrooms?
And what would you say to people who might be on the side of banning versus not banning it?
Yeah, I think I’m interested to hear your answer. But from my side, I think it’s a tool like a calculator. I think also, a duty of the education system now is to teach people how to use this AI, how to engage with it, how to most effectively use that tool.
And so it certainly should exist as part of the classroom and as part of schooling. But like I said, it can become a crutch, and it can be used to cheat. And so we have to come up with ways to ensure that students aren’t misusing it or using it in the ways that are unproductive to their learning.
I’m excited to hear what you think.
I’ve got a two-part answer. The first one is, in any business process or any endeavor, you have the problem statement, asking the right question; you have the solving; and then you have the quality assurance at the back. It’s a feedback loop, a circle that you go through all the time.
And education is no different. What AI does well is that middle part. It doesn’t do a whole lot in the front end and the back end.
So what we need to teach young students and adults is how to ask the right question, the critical thinking. I love that it came out at the very top. Super, super important.
But, as you said, a calculator is a calculator. The fact that I can’t do multiplication tables all the way to 100 is not that relevant for my day-to-day job.
But the fact that I can be critical in my thinking, I can summarize, I can contextualize, I think those are the skills you want. Second part, for those who are curious, I have no relationship, but I am just fascinated. There’s a school in the U.S.
called Alpha School. And they’ve got a really powerful model. They are using AI.
They’re encouraging students to use AI. And they’re demonstrating, and I’m going to get all the stats wrong, that they get twice the learning in half the time, or three times the learning in half the time. And then in the afternoon, the kids go learn how to be a civic leader, or a leader in all sorts of other contexts, instead of spending all their time where historically you would have learned various dates.
It’s not that relevant to know the dates of specific things, but it’s relevant to understand the context of those events. And I think that’s where we can focus a lot of the effort.
Thank you. Terrific topic to be discussed at Davos. I’m Pranjal Sharma.
I’m from India. I’m an author and analyst. We’re looking at a lot of the micro pieces, but I’d like to focus on the macro.
We have a situation today where we’re all skilled up but with nowhere to go, right? Last year, I think the ILO said 7 million fewer jobs were created, not to mention the existing jobs that disappeared. So there is a cry from the industry.
Firstly, they don’t know who to hire, why to hire, and what to hire for, and they don’t even know what credentials to test. The second part is there’s a huge disconnect between what they want and what academia is offering. Plus, the concept of a degree shouldn’t exist, and even continuous learning in terms of applied knowledge is missing.
So I think the core phrase to be used here is applied knowledge. How do you create information for a person to be able to earn a livelihood, irrespective of white-, gray-, or blue-collar work? I think that’s the gap: applied knowledge, delivered in the right way, to the right people, at the right time.
From a labor market perspective, I think there’s a good case to be concerned about the impact of AI and what might happen, and reskilling is going to be an essential component of that. The mismatch in the market between what education institutions are offering and what the market is demanding, I think that is a major issue that we need to figure out how to solve. I think AI can be a part of speeding up delivery of new programs and courses, and keeping up with changes in demand much faster than we have in the past.
The process of scaling up educational infrastructure to meet a shift in market demand has historically been extremely slow and laborious. But with AI, we’re able to create programs much faster. The models are, you know, infinitely scalable. They’re always awake, 24-7. They never get annoyed at the student, right? So we have these incredibly compelling tutors to deploy at scale against the problem of teaching the population the skills that we need. But I think the issue might be in identifying the skills that we need, and that’s still going to have to come from us, the humans, the business leaders, the policymakers. So that might be the core constraint: we need a direction to be set before we can start building the solution.
I think, too, what I would say is that universities aren’t necessarily teaching to what businesses need. We’re teaching things that we believe are fundamentally important, and I would defend that. We’re teaching critical thinking, we’re teaching deep mastery, and we’re teaching them to people at a critical moment in their lives, most of them, when they actually really need to have a go and learn these skills. They may need additional skills when they go out into the workplace, and that, as far as I’m concerned, is what the kinds of products you’re talking about are for.
Let’s go back to the critical thinking, because now in the university students widely use AI assistants and get instant answers. In that case, how can we teach them to increase their capability for critical thinking, to apply factual checks, logical checks, scientific checks, and ethical checks to the instant answers they get from models?
From my point of view, you’re on to one of the core issues. We need to start teaching a very different set of skills. You need to, in my little model of ask the right question, get the answer, and then check the answer.
The middle part, AI is going to outdo the human. It is already a foregone conclusion. The AI will outdo the human.
Where we can be competitively differentiated versus the AI is in the front and in the back end. We need to adapt the curriculum to make sure that people are asking the right questions with the right context. It is critical thinking.
It is critical thinking, but we need to expand it, and we need a better way to evaluate the level of critical thinking these students have when they hit the workforce, so that employers can evaluate them. Then, the same on assessing: AI is marvelous right now. It generates code like there’s no tomorrow, but it’s mostly garbage.
We have bottlenecks in quality assurance at the back end. How do you create the new tools and teach people the critical thinking to see whether this is using the right library, the right pattern, the right data? I think that’s one of the core changes that academic institutions, organizations like mine, and individuals need to make. As you do your self-development, you need to really lean into this ability to ask the right question, because in the middle part you don’t have a competitive advantage; you will be outgunned. And the thing that is even more crazy: historically, people did PhDs. I have a PhD. I went super deep on one little topic, got buried somewhere in a sinkhole, and it took my entire body of effort to get there. And to be a polymath is very hard. I know nothing about chemistry. I know nothing about biology. Psychology, my dad did that, so maybe something rubbed off on me. But AI is a polymath by design. It has the data set across all of that. So the middle part is a foregone conclusion, folks. You need to get good at the front and the back end.
Yeah. Well, I was going to say another thing, which is that teaching is a skill, in the same way coding is a skill or doing math is a skill. And so it’s a core capability that we as model developers need to invest in. And it’s not something that is easily benchmarked, and it’s not something that is accurately tracked at the moment. But as this spreads, and it’s already in the hands of every student on the face of the planet, it’s going to become imperative that we’re able to track the performance of models on teaching tasks, to ensure that they’re actually effective, and to improve that over time. At a technical level, that is just not done presently. I don’t know of a teaching benchmark, but I can point to probably 30 code ones, 50 math ones, you know, biology, et cetera.
I think that psychology is rubbing off well. When you say AI is a polymath by design, it’s a brilliant thought, you know, it was, you articulated it very well, which also means that by definition humans cannot compete. So we basically have to end the session and say that doom is nigh.
Well I don’t think so. I mean I’m more optimistic. So the polymath thing is real.
I mean, you know, if you take the historical perspective again, he who had Leonardo da Vinci on his team had an advantage to build war machines or, you know, a better court or whatever. Now there’s going to be a similar debate: whoever assembles these polymath AI things has an advantage. Okay, that is a foregone conclusion.
That’s why there’s all this, you know, battle for them. But I think we cannot, as the human race, give up that ability to influence. I think a point was made, I think you made it at the very beginning: these models typically are not designed, though some of them can be designed, to explain their reasoning. So if, as a society, we begin to rely on this thing that is super facile, that gives us an answer, and we don’t do the questioning and we don’t do the checking and the validating, we lose agency on important decisions.
And I think that is one of the things that we need to focus on deeply as a society. It also leads to the guardrails, the ethical things, and all that other stuff. We need to go there, because in the middle it’s going to come up with answers that will be amazing in biology, and it will solve things in biology, because it got trained on the English language.
I don’t know, it’s going to take a pretty long while. But we cannot lose agency around this polymath. I mean, every data center is going to have hundreds of millions of polymaths in there.
Yeah.
I just want to share a thought. I believe there’s a type of paradox within companies around this critical thinking. Let me say it this way.
We senior professionals know how to judge what the AI is doing. So one day I asked the AI to do a model, whatever, and I could judge it. My juniors were not able to judge, because they don’t have the experience.
But to some extent, I could fire them, because I don’t need them anymore thanks to these AI technologies. But maybe there will be a gap. So at some point in time, AI can enhance a lot of what I do.
But if you don’t train, let’s say, the new generation, the juniors, who will in the future be able to do this critical thinking on what the AI is doing? I don’t have the answers.
Obviously, companies need to pursue efficiency, and we need to do our best to reduce costs and so on. But I think it’s something we, as a society, will have to think a lot about.
It’s fair. Here, we’ve got one here. You wanted, you were up, right?
Yeah. I didn’t just cold call you.
Hi, thank you for your insights. I’m Kian. I’m the CEO of an AI company called Workera.
I really like what you said, Aidan, on testing the human. And I think in the world of testing right now, there are almost two camps: one that says you can test them with the calculator, and one that says you test them without the calculator. And there are also, overlaid on top of that, the risks of proctoring and understanding who’s cheating, who’s not cheating, and what you can tell about it.
So how are you thinking about that idea of testing with or without the calculator?
Yeah, can you tell whether a piece of text was written by AI? It’s really tough. A lot of the detectors out there are total scams.
They’ll say 100% AI even when it wasn’t used at all. So they’re extremely overconfident, with very high error rates on both sides, false positive and false negative. But the answer to that question is: you can insert into language models subtle cues to indicate to the reader that this was written by an AI. You can sample, not from natural language, the language I’m drawing from right now, but from a slightly shifted distribution, and use certain words much more often than any human would. Then, as soon as those words appear, you have a good piece of evidence that this was written by a language model. And so we language modeling companies do that. We shift the distribution of the language model so that when its text gets read, we have some ability to say, I can assign a likelihood that this was generated by my model. So you can detect that to some extent, but many of the tools are scams, and so I think we need to make better tools and put them in the hands of educators more readily.
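The distribution-shift detection Gomez describes can be sketched very simply. This is an illustrative simplification, not any vendor’s actual scheme: the marked word list and threshold are invented for the example, and production watermarks work at the token level with hash-seeded word sets rather than a fixed public list.

```python
# Toy sketch of watermark detection: the generator favors a "marked"
# vocabulary, and the detector measures how often marked words appear.

MARKED_WORDS = {"delve", "tapestry", "multifaceted", "pivotal", "nuanced"}

def marked_fraction(text: str) -> float:
    # Fraction of words drawn from the marked set.
    words = [w.strip(".,!?").lower() for w in text.split()]
    if not words:
        return 0.0
    return sum(w in MARKED_WORDS for w in words) / len(words)

def likely_model_generated(text: str, threshold: float = 0.05) -> bool:
    # A high marked fraction is statistical evidence, not proof,
    # exactly the "assign a likelihood" framing above.
    return marked_fraction(text) > threshold

print(likely_model_generated("Let us delve into this pivotal, nuanced tapestry."))  # True
print(likely_model_generated("The meeting is at noon on Tuesday."))                 # False
```

Because the signal is statistical, short texts give weak evidence, which is one reason the single-shot detectors dismissed above as overconfident get both false positives and false negatives.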
On testing with and without the calculator, I have a pretty strong focus on without the calculator. I think everything needs to be ripped away, and you, standing alone as yourself, need to prove your knowledge. That is the gold standard test of what you have learned and retained. But of course, like I was saying earlier, using the language model is a skill itself, and we should have space to test that, in which case of course you’re going to need the LLM in the loop.
Let me seize the chair’s prerogative here to ask, because I’m curious what you’d both say to this question: in this brave new world of polymaths, of not showing your work and not explaining your answer, what happens to expertise or authority? You know, at Cambridge we have library after library of big books that tell you the truth, or that was always the idea, right? You would go look it up somewhere. What do you do in a world in which looking it up is no longer an option?
There’s not a dictionary. There’s not a truth.
I’ll start. I think most technologies go back and forth. There’s a pendulum.
We’re in the pendulum swing where bigger is better. We’re throwing everything under the sun, every Reddit quote, into the training of every large language model. And that is good: it’s going to give you an average answer for an average problem. Now, over time, I think we’re going to come back and say you do need, you know, specialized, trusted models. We need to have confidence that it used the right sources, and I think there will be a space for that. At least I want to hope that will be the case: that we’re going to come back and have these specialized models that will not only be RAG, but will be defined from scratch with the right intent. And they don’t need to have a bazillion trillion parameters; they just need to be trained on the expertise. And then you do need to trust it, and that’s going to be incredibly important. I think we also need a lot of research on explainability. Yoshua Bengio at the University of Montreal, one of the people who got the Turing Award, has been very vocal about this: we need to go back and explain a lot more of these models. These are statistical models; this is all just huge matrices, with weights assigned to different things. This is not a piece of software where you say if-then-this-then-that; this is just statistics. So on average it gives good answers, but that depends on the data. You need to come back and put in a bunch of tools to build explainability into the model, and there are ways to do it. It’s not yet super advanced, and I think we need to invest in that, so that we do have the confidence and build the trust. And I do think it’s part of the learning question you raised, because if the models are black boxes, you lose the ability to learn from the deduction process, which doesn’t exist anyway; it’s just a statistical model, there’s no deduction. So anyway, those are my two ideas.
Yeah. Over the course of the last year, there was a paradigm shift in the type of model that gets used. Now we don’t just use input-output, direct-response models like you were alluding to. Every model now is a reasoning model. And so before it actually responds, it has an internal monologue where it thinks through the problem, tries to reason about it, and then delivers a response. It is primitive, it’s a year old, but it’s getting much better. And so I think exposing that to the user, showing these chains of thought, this reasoning, is an important solution. And then, like you say, RAG, which is retrieval-augmented generation, where the model isn’t just drawing on its own knowledge but is actually making direct and specific reference to external knowledge. So we can plug it into the Cambridge library, or, I went to Oxford, so the Bodleian, and it can cite directly back from those sources. Both reasoning and RAG provide some degree of auditability, so you can have a little bit more confidence in the response, because you can check its work.
Just out of curiosity, what’s driving that? What’s driving the need for reasoning?
Yeah, because the models were brittle. They would very confidently answer with the wrong solution. And it turns out humans don’t put the same amount of energy into answering every question, but that was the prior expectation on these models. You would ask them what’s one plus one and it would immediately respond, and you would ask it to prove some unsolved Erdős problem or something, and it would put the same amount of effort into that as into one plus one. That was obviously wrong. You know, there are some problems that we should spend days, weeks, months, years, decades putting effort in to solve.
And there are others that can be responded to instantly. It’s just a better, more robust intelligence.
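The RAG-with-citation idea above can be sketched in miniature. This is an editorial illustration under stated assumptions: the two-document corpus, the word-overlap retriever, and the `[source: ...]` citation format are all invented for the example; real systems use embedding-based search and pass the retrieved passage to the model as generation context.

```python
# Toy sketch of retrieval-augmented generation with source citation.

CORPUS = {
    "bodleian_guide": "The Bodleian Library is the main research library of the University of Oxford.",
    "cul_guide": "Cambridge University Library holds over eight million books and items.",
}

def tokens(text: str) -> set:
    # Naive tokenizer: lowercase words with trailing punctuation stripped.
    return {w.strip(".,?!") for w in text.lower().split()}

def retrieve(query: str) -> tuple:
    # Rank documents by word overlap with the query; return the best.
    return max(CORPUS.items(), key=lambda kv: len(tokens(query) & tokens(kv[1])))

def answer_with_citation(query: str) -> str:
    doc_id, passage = retrieve(query)
    # A real system would generate an answer conditioned on `passage`;
    # here we return the passage with its source attached for auditability.
    return f"{passage} [source: {doc_id}]"

print(answer_with_citation("How many items does the Cambridge library hold?"))
```

The point of the citation is exactly the auditability discussed above: a reader can follow `[source: ...]` back to the shelf and check the model’s work, rather than trusting an unexplained answer.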
That’s fascinating. We have time for one more question. Anything pressing?
Yeah, I’m very interested to ask a question, just circling back to the beginning, where we noted we have a public sector university as well as a tech platform in the same room.
The question I have on my mind is that right now, in the U.S. especially, the cost of education is astronomically high and prohibitive. A lot of people are saying, the narrative goes, there’s no point going to university anymore.
And I would expect that in that world, a lot of attention would turn to online education. I think we’re all very familiar with Udemy. What are the gaps between an online education and an accredited or elite college?
Has there ever been customer or market demand for online education to move toward, or imitate, a traditional college experience? Has that ever surfaced as a need? And, yeah, just comparing the gaps there.
I’m going to say something maybe controversial, but it’s fun. The university degree is a bundle. It’s a convenient bundle that as a society we chose to create.
So you learn something, you get an accreditation, and you have a rite of passage. You know, these kids, at that moment they leave home, they go, and that bundle is convenient. And we bundled it with research, because the same people could pass on their knowledge to others. It is a convenient bundle, as a society.
It has worked well for a long time; Oxford and Cambridge are examples of long-standing institutions that had a version of this bundle, and it changes over time. Is it time to revisit whether all of these components need to fit together, given the economics and what AI can do to change the economics of delivery? Maybe.
I think the second-
Think it quickly.
Yeah, quickly.
Because we’re running out of time.
And the second piece is just the adaptability. If you have the labor market that moves so fast, you’re now going to begin to put more weight on addressing a specific need for a specific skill. So I think that is a reality in addition to that potential unbundling of that whole experience.
Do you have a good word for the university?
I’m actually interested to hear from the university’s perspective.
Then I’ll just end by saying: I think they are currently serving very different functions. Right now a university does so much more than provide knowledge, so it is still worth its weight in gold. And it is gold.
But we’ll see how the space develops, right? And with that, I’m getting all kinds of signals from the producers, so we’ve got to end it. But thank you very much.
Thank you for your questions. And thank you to our panelists. Thank you.
Aidan Gomez
Speech speed
164 words per minute
Speech length
1995 words
Speech time
728 seconds
Deep understanding and mastery are most at risk because AI can fool people into thinking they understand complex topics when they only have surface-level knowledge
Explanation
Gomez argues that LLMs and chatbots can provide quick, surface-level answers to complex questions like quantum mechanics, creating a false sense of understanding. He views this as a core risk when integrating LLMs into education environments, emphasizing that true assessment of depth requires removing AI tools to see what humans actually understand and retain.
Evidence
Example of asking an LLM how quantum mechanics works and getting a four-paragraph response that isn’t deep understanding
Major discussion point
The Scarcest Resource in an AI-Driven World
Topics
Online education
Disagreed with
– Deborah Prentice
– Hugo Sarrazin
Disagreed on
What is the scarcest resource in an AI-driven world
AI should be treated as a tool like a calculator, but strict testing without AI assistance is essential to verify human knowledge retention
Explanation
Gomez believes AI is an incredibly effective educational tool that can provide faster and more thorough answers, but emphasizes it remains just a tool. He argues that testing becomes absolutely critical now because students can fake their way through education systems more easily, requiring strict testing regimens where the tool is removed to assess what humans actually know.
Evidence
Comparison to calculators as tools; mentions that students can fake their way through education systems much more easily now
Major discussion point
The Role of AI in Education and Learning
Topics
Online education
Agreed with
– Hugo Sarrazin
Agreed on
AI as a tool requiring human oversight and testing
Disagreed with
– Hugo Sarrazin
Disagreed on
The role of testing and AI tools in education
Cohere focuses on enterprise AI solutions with strong security, keeping customer data within their perimeter for sensitive industries
Explanation
Gomez explains that Cohere builds large language models specifically for enterprise use, with their key differentiator being security – no data exits the customer’s perimeter. Instead, they send their models and software to customers who keep everything self-contained, making them attractive to critical industries and government applications.
Evidence
Serves financial services, telco, healthcare, and government applications; mentions national security concerns and education as within that remit
Major discussion point
Business Models and Educational Technology Evolution
Topics
Online education | Digital business models
New reasoning models now show their internal thought processes, and retrieval-augmented generation allows for better auditability and source citation
Explanation
Gomez describes a paradigm shift where AI models now have internal monologues to think through problems before responding, rather than putting equal effort into all questions. He also explains how retrieval-augmented generation allows models to reference external sources like university libraries and cite back to them, providing better auditability and confidence in responses.
Evidence
Explains the shift from models putting same effort into ‘1+1’ as complex mathematical proofs; mentions plugging into Cambridge Library or Oxford’s Bodleian
Major discussion point
The Future of Human Expertise and Authority
Topics
Online education | Digital standards
Deborah Prentice
Speech speed
162 words per minute
Speech length
1282 words
Speech time
473 seconds
Self-knowledge for learners is critical since they can no longer rely on difficulty and struggle as cues for understanding their own learning progress
Explanation
Prentice argues that much learning comes from recognizing what is difficult, compelling, or challenging, and these cues help learners understand their own progress and interests. When AI removes these natural learning cues by making everything seem easy, learners lose the ability to assess their own understanding and engagement with material.
Major discussion point
The Scarcest Resource in an AI-Driven World
Topics
Online education
Disagreed with
– Aidan Gomez
– Hugo Sarrazin
Disagreed on
What is the scarcest resource in an AI-driven world
Universities teach fundamentally important skills like critical thinking and deep mastery that remain essential regardless of technological advances
Explanation
Prentice defends the university model by arguing that universities teach skills they believe are fundamentally important, particularly critical thinking and deep mastery, to students at a critical moment in their lives. She suggests that while students may need additional workplace skills later, the foundational education provided by universities remains valuable and distinct from business training.
Major discussion point
The Role of AI in Education and Learning
Topics
Online education
Agreed with
– Hugo Sarrazin
– Audience
Agreed on
Critical thinking as essential human skill
Universities serve functions beyond knowledge provision that remain valuable
Explanation
Prentice concludes that universities currently serve many more functions than simply providing knowledge, making them still worth their value despite technological changes. She suggests that universities offer something more comprehensive than what current AI-driven educational platforms can provide, though acknowledges the space will continue to develop.
Major discussion point
The Future of Human Expertise and Authority
Topics
Online education
Hugo Sarrazin
Speech speed
169 words per minute
Speech length
3379 words
Speech time
1193 seconds
Attention is scarce due to information wealth creating poverty of attention, and trust is compromised when AI systems don’t explain their reasoning
Explanation
Sarrazin references Herbert Simon’s quote about wealth of information creating poverty of attention, arguing this affects learners and requires traditional methods to change. He also emphasizes that when LLMs provide answers without explaining their reasoning, it creates trust issues and prevents people from learning how conclusions were reached.
Evidence
Herbert Simon quote about wealth of information creating poverty of attention; mentions that explainability in AI is a whole field
Major discussion point
The Scarcest Resource in an AI-Driven World
Topics
Online education
Disagreed with
– Aidan Gomez
– Deborah Prentice
Disagreed on
What is the scarcest resource in an AI-driven world
AI enables personalized learning experiences that can adapt to individual learning styles and provide one-on-one coaching at scale, solving the Bloom Two Sigma problem
Explanation
Sarrazin references 1980s research from University of Chicago showing that one-on-one coaching produced learning outcomes two sigma higher than classroom instruction, but the economics weren’t feasible. He argues that AI can now provide this personalized experience at scale, adapting to whether someone is an auditory or visual learner and creating feedback loops that teachers can’t manage with large classes.
Evidence
Bloom Two Sigma problem research from University of Chicago in the 1980s; mentions economics of one-on-one coaching vs. large classrooms
Major discussion point
The Role of AI in Education and Learning
Topics
Online education
Agreed with
– Aidan Gomez
Agreed on
Personalized learning through AI technology
Disagreed with
– Aidan Gomez
Disagreed on
The role of testing and AI tools in education
Udemy is pivoting from traditional online learning to become an AI platform for workforce reskilling, moving beyond completion metrics to measure real ROI
Explanation
Sarrazin explains that after speaking with 400 CHROs and Head of Learning executives, he found they couldn’t explain ROI from learning programs and defaulted to basic metrics like course completion. He’s pivoting Udemy toward AI-powered, in-the-flow-of-work learning that can measure real-time skill deployment and provide adaptive, bite-sized learning experiences.
Evidence
Spoke to 400 CHROs and Head of Learning executives; mentions proliferation of tools during COVID era with unclear ROI; example of certifications becoming outdated
Major discussion point
Business Models and Educational Technology Evolution
Topics
Online education | Digital business models | Future of work
AI has become a polymath by design, learning across all domains, which democratizes knowledge but raises questions about human competitive advantage
Explanation
Sarrazin argues that historically, knowledge was scarce and powerful, with very few polymaths like Leonardo da Vinci providing competitive advantages. Now, every data center contains millions of AI polymaths that can learn across all domains, democratizing this capability but creating challenges for human differentiation and the risk of losing agency in important decisions.
Evidence
Historical reference to Leonardo da Vinci providing competitive advantage; mentions that every data center adds millions of polymaths
Major discussion point
The Future of Human Expertise and Authority
Topics
Online education | Future of work
Humans need to focus on asking the right questions and quality assurance rather than the middle process of finding answers, where AI excels
Explanation
Sarrazin presents a three-part model: asking the right question, solving/getting answers, and quality assurance. He argues that AI excels at the middle part and will outperform humans there, so humans must develop competitive advantages in critical thinking for asking good questions and in assessing whether AI outputs use the right methods, libraries, or data.
Evidence
Example of AI generating code that is ‘mostly garbage’ requiring quality assurance; mentions his own PhD experience of going deep on one topic vs. AI being a polymath by design
Major discussion point
The Future of Human Expertise and Authority
Topics
Online education | Future of work
Agreed with
– Deborah Prentice
– Audience
Agreed on
Critical thinking as essential human skill
The traditional university bundle of learning, accreditation, and rite of passage may need to be reconsidered due to economic pressures and AI capabilities
Explanation
Sarrazin describes universities as offering a ‘convenient bundle’ that combines learning, accreditation, and rite of passage, historically including research. He suggests this bundle worked well historically but questions whether all components need to fit together given current economics and AI’s ability to change delivery costs, especially as labor markets move faster requiring more specific, targeted skills.
Evidence
References Oxford and Cambridge as examples of long-standing institutions with evolving versions of this bundle
Major discussion point
The Future of Human Expertise and Authority
Topics
Online education | Digital business models
Audience
Speech speed
152 words per minute
Speech length
934 words
Speech time
367 seconds
Critical thinking and sustained attention were identified as the top concerns by the audience poll
Explanation
The audience poll results showed critical thinking and sustained attention as the top two responses when asked what is becoming the scarcest resource in a world of instant answers and AI assistance. These were described as being ‘neck and neck’ throughout most of the polling period, followed by trust, then deep mastery.
Evidence
Live polling results displayed during the session showing critical thinking and sustained attention as top choices
Major discussion point
The Scarcest Resource in an AI-Driven World
Topics
Online education
Agreed with
– Hugo Sarrazin
– Deborah Prentice
Agreed on
Critical thinking as essential human skill
There are concerns about how to teach critical thinking when students rely on AI for instant answers
Explanation
An audience member raised concerns about how universities can teach students to increase their critical thinking capabilities and perform factual, logical, scientific, and ethical checks on instant answers they receive from AI models. This highlights the challenge of developing critical evaluation skills when students have easy access to AI assistance.
Major discussion point
The Role of AI in Education and Learning
Topics
Online education
There’s a fundamental mismatch between what educational institutions offer and what the job market demands
Explanation
An audience member from India highlighted that while people are skilled up, there are fewer jobs being created, and there’s a disconnect between what industry wants and what academia offers. They emphasized the need for ‘applied knowledge’ delivered at the right time to help people earn livelihoods, regardless of collar color, and noted that industries don’t know who to hire or what credentials to test for.
Evidence
ILO data showing 7 million fewer jobs created last year; mentions the concept of degrees becoming obsolete and missing continuous learning in applied knowledge
Major discussion point
Business Models and Educational Technology Evolution
Topics
Online education | Future of work
There are concerns about maintaining expertise transfer between generations when AI can replace junior workers before they gain experience
Explanation
An audience member raised a paradox where senior professionals can judge AI output due to their experience, but junior employees who lack this experience might be replaced by AI before they can develop the critical thinking skills needed to evaluate AI work. This creates a potential gap in knowledge transfer between generations and raises questions about who will have the expertise to critically evaluate AI in the future.
Evidence
Personal example: the speaker can judge AI-generated model outputs while juniors cannot, yet is considering letting juniors go because AI makes their work unnecessary
Major discussion point
The Future of Human Expertise and Authority
Topics
Online education | Future of work
Agreements
Agreement points
AI as a tool requiring human oversight and testing
Speakers
– Aidan Gomez
– Hugo Sarrazin
Arguments
AI should be treated as a tool like a calculator, but strict testing without AI assistance is essential to verify human knowledge retention
Humans need to focus on asking the right questions and quality assurance rather than the middle process of finding answers, where AI excels
Summary
Both speakers agree that AI is fundamentally a tool that can enhance human capabilities, but emphasize the critical need for human oversight, particularly in testing and quality assurance. They share the view that humans must maintain agency in evaluating AI outputs.
Topics
Online education
Personalized learning through AI technology
Speakers
– Aidan Gomez
– Hugo Sarrazin
Arguments
AI can help address attention problems through its ability to personalize the experience to the individual and engage them more effectively
AI enables personalized learning experiences that can adapt to individual learning styles and provide one-on-one coaching at scale, solving the Bloom Two Sigma problem
Summary
Both speakers strongly advocate for AI’s potential to create personalized learning experiences that can adapt to individual learning styles, whether auditory or visual, and provide more engaging educational content at scale.
Topics
Online education
Critical thinking as essential human skill
Speakers
– Hugo Sarrazin
– Deborah Prentice
– Audience
Arguments
Humans need to focus on asking the right questions and quality assurance rather than the middle process of finding answers, where AI excels
Universities teach fundamentally important skills like critical thinking and deep mastery that remain essential regardless of technological advances
Critical thinking and sustained attention were identified as the top concerns by the audience poll
Summary
There is strong consensus that critical thinking remains a fundamentally important human skill that must be developed and maintained, even as AI handles more routine information processing tasks.
Topics
Online education
Similar viewpoints
Both speakers recognize the importance of AI systems being able to explain their reasoning and provide transparency in their decision-making processes, though they approach it from different technical and trust perspectives.
Speakers
– Aidan Gomez
– Hugo Sarrazin
Arguments
New reasoning models now show their internal thought processes, and retrieval-augmented generation allows for better auditability and source citation
Attention is scarce due to information wealth creating poverty of attention, and trust is compromised when AI systems don’t explain their reasoning
Topics
Online education | Digital standards
Both speakers acknowledge that universities provide more than just knowledge transfer, though Sarrazin is more open to unbundling these functions while Prentice defends their continued integrated value.
Speakers
– Hugo Sarrazin
– Deborah Prentice
Arguments
The traditional university bundle of learning, accreditation, and rite of passage may need to be reconsidered due to economic pressures and AI capabilities
Universities serve functions beyond knowledge provision that remain valuable
Topics
Online education | Digital business models
Unexpected consensus
AI superiority in knowledge processing
Speakers
– Hugo Sarrazin
– Aidan Gomez
Arguments
AI has become a polymath by design, learning across all domains, which democratizes knowledge but raises questions about human competitive advantage
Deep understanding and mastery are most at risk because AI can fool people into thinking they understand complex topics when they only have surface-level knowledge
Explanation
Surprisingly, both business leaders openly acknowledge that AI will outperform humans in knowledge processing and information synthesis, a concession that could be seen as undermining their own business models. Making this candid assessment while simultaneously advocating for human-centered approaches shows unexpected intellectual honesty.
Topics
Online education | Future of work
Need for specialized, trusted AI models
Speakers
– Hugo Sarrazin
– Aidan Gomez
Arguments
Attention is scarce due to information wealth creating poverty of attention, and trust is compromised when AI systems don’t explain their reasoning
Cohere focuses on enterprise AI solutions with strong security, keeping customer data within their perimeter for sensitive industries
Explanation
Both speakers, despite representing different business models, agree on the need to move away from generic, large-scale AI models toward more specialized, trustworthy systems. This consensus is unexpected given the current industry trend toward ever-larger general-purpose models.
Topics
Online education | Digital standards
Overall assessment
Summary
The speakers demonstrate remarkable consensus on key issues: AI as a powerful tool requiring human oversight, the critical importance of personalized learning, the need for transparency in AI systems, and the continued relevance of critical thinking skills. They also agree on the challenges facing traditional educational models and the need for adaptation.
Consensus level
High level of consensus with constructive dialogue rather than fundamental disagreements. The implications suggest a mature understanding of AI’s role in education, where technology enhances rather than replaces human capabilities, and where the focus shifts to developing uniquely human skills like critical thinking, questioning, and quality assurance. This consensus points toward a collaborative future between AI and human educators rather than a replacement model.
Differences
Different viewpoints
What is the scarcest resource in an AI-driven world
Speakers
– Aidan Gomez
– Deborah Prentice
– Hugo Sarrazin
Arguments
Deep understanding and mastery are most at risk because AI can fool people into thinking they understand complex topics when they only have surface-level knowledge
Self-knowledge for learners is critical since they can no longer rely on difficulty and struggle as cues for understanding their own learning progress
Attention is scarce due to information wealth creating poverty of attention, and trust is compromised when AI systems don’t explain their reasoning
Summary
The speakers identified different primary concerns: Gomez focused on the risk of false mastery and surface-level understanding, Prentice emphasized the loss of self-awareness in learning, while Sarrazin highlighted attention scarcity and trust issues with unexplainable AI systems.
Topics
Online education
The role of testing and AI tools in education
Speakers
– Aidan Gomez
– Hugo Sarrazin
Arguments
AI should be treated as a tool like a calculator, but strict testing without AI assistance is essential to verify human knowledge retention
AI enables personalized learning experiences that can adapt to individual learning styles and provide one-on-one coaching at scale, solving the Bloom Two Sigma problem
Summary
Gomez emphasized the critical need for testing without AI tools to verify genuine human understanding, while Sarrazin focused more on AI’s potential to enhance and personalize the learning experience itself.
Topics
Online education
Unexpected differences
Priority of deep mastery versus other learning concerns
Speakers
– Aidan Gomez
– Audience
Arguments
Deep understanding and mastery are most at risk because AI can fool people into thinking they understand complex topics when they only have surface-level knowledge
Critical thinking and sustained attention were identified as the top concerns by the audience poll
Explanation
It was unexpected that Gomez, as a technology leader, prioritized deep mastery as the most critical concern while acknowledging it was the most unpopular choice in the audience poll. This suggests a disconnect between technology developers’ concerns and broader educational stakeholder priorities.
Topics
Online education
Overall assessment
Summary
The main areas of disagreement centered on identifying the primary risks of AI in education, the appropriate balance between AI assistance and human verification in learning, and whether traditional educational institutions need fundamental restructuring or just enhancement.
Disagreement level
The disagreement level was moderate but constructive. While speakers had different emphases and priorities, they shared common concerns about maintaining human agency and critical thinking in an AI-driven world. The disagreements were more about approach and emphasis rather than fundamental opposition, suggesting room for synthesis of their different perspectives in developing comprehensive AI education strategies.
Takeaways
Key takeaways
Critical thinking and sustained attention are the scarcest resources in an AI-driven world, with deep understanding and mastery being most at risk due to AI creating false confidence in surface-level knowledge
AI should be treated as a tool like a calculator, requiring strict testing without AI assistance to verify genuine human knowledge retention and understanding
AI enables personalized learning at scale, solving the economic constraints of one-on-one tutoring through adaptive feedback and individualized instruction methods
Human competitive advantage lies in asking the right questions (front-end) and quality assurance/critical evaluation (back-end), while AI excels at the middle process of finding answers
AI has become a polymath by design, democratizing knowledge across all domains but raising concerns about maintaining human agency in decision-making
Educational institutions and businesses serve different but complementary functions – universities teach fundamental skills like critical thinking while business platforms address specific workforce reskilling needs
New AI reasoning models and retrieval-augmented generation provide better auditability and source citation, addressing some trust and explainability concerns
Resolutions and action items
Educational curricula need to adapt to focus more heavily on critical thinking, question formulation, and quality assurance skills rather than information retrieval
Better AI detection tools and testing methodologies must be developed and deployed to educators to maintain academic integrity
Investment in AI explainability research is needed to make models more transparent and trustworthy for educational applications
Development of specialized, trusted AI models trained on verified sources rather than general internet data
Creation of teaching benchmarks for AI systems to ensure they are effective educational tools
Unresolved issues
How to maintain expertise transfer between generations when AI can replace junior workers before they gain necessary experience to provide critical oversight
The fundamental mismatch between what educational institutions offer and what the rapidly changing job market demands
Whether the traditional university ‘bundle’ of learning, accreditation, and social experience should be unbundled due to economic pressures and AI capabilities
How to measure meaningful ROI in learning beyond simple completion metrics
The risk of society losing agency in important decisions if people become overly reliant on unexplainable AI systems
How to effectively teach critical thinking skills when students have access to AI that provides instant answers
Suggested compromises
AI should be integrated into classrooms as a teaching tool while maintaining strict testing protocols without AI assistance to verify learning
A hybrid approach where AI augments rather than replaces human teachers, allowing exceptional educators to extend their teaching methods across different subjects
Balancing the use of general AI models with specialized, trusted models trained on verified sources for different educational contexts
Adapting educational approaches to focus on skills where humans maintain competitive advantage while leveraging AI for areas where it excels
Thought provoking comments
Now today you have LLMs that can learn everything, and they can learn across different domains, and they can become the polymath. So every data center, every time we say there’s a new infrastructure that’s being added, we’re adding millions and millions of polymaths.
Speaker
Hugo Sarrazin
Reason
This comment reframes AI’s capabilities in historical context, comparing LLMs to polymaths like Leonardo da Vinci. It’s profound because it suggests we’re not just creating tools, but millions of Renaissance-level thinkers, fundamentally changing the scarcity of expertise.
Impact
This metaphor became a recurring theme throughout the discussion, with Hugo later expanding on it (‘AI is a polymath by design’) and other participants referencing it. It shifted the conversation from technical capabilities to the philosophical implications of democratized expertise.
I think LLMs, chatbots, can fool you into thinking that you understand something when you don’t, and I view that as a core risk as we integrate these LLMs into an education environment: this false sense of mastery or understanding.
Speaker
Aidan Gomez
Reason
This insight identifies a critical paradox: the very ease of AI assistance creates an illusion of competence. It’s particularly striking coming from an AI company CEO who recognizes the limitations of his own technology.
Impact
This comment established one of the central tensions of the discussion – the gap between surface-level answers and deep understanding. It influenced subsequent conversations about testing methodologies and the need to assess humans without AI assistance.
I want to say self-knowledge for the learner, right? And it’s part of what you’re saying. I mean, you don’t know if you’ve mastered it, and you don’t know if you’re interested in it, and you don’t know if you get it, right?
Speaker
Deborah Prentice
Reason
This comment introduces a meta-cognitive dimension often overlooked in AI education discussions. It highlights that learning isn’t just about acquiring knowledge, but about developing self-awareness of one’s own understanding and interests.
Impact
This shifted the conversation from external measures of learning to internal awareness, adding psychological depth to the technical discussion and connecting to broader themes about human agency in an AI-dominated world.
We cannot as the human race give up that ability to influence… if as a society we begin to rely on this thing that is super facile, that gives us an answer, and we don’t have the questioning and we don’t kind of do the checking and the validating, we lose agency on important decisions.
Speaker
Hugo Sarrazin
Reason
This comment elevates the discussion from individual learning to societal implications, framing the issue as one of human agency and democratic participation. It connects educational choices to broader questions of power and control.
Impact
This comment transformed what could have been a purely technical discussion into a conversation about human autonomy and societal governance. It prompted deeper reflection on the long-term consequences of AI dependence.
My juniors, they were not able to judge because they don’t have the experience. But to some extent, I could fire them because I don’t need them anymore because of these AI technologies. But maybe there will be a gap… who will be the future able to do this critical thinking on what AI is doing?
Speaker
Audience member
Reason
This comment identifies a crucial generational paradox: senior professionals can evaluate AI because of their experience, but AI might eliminate the junior roles where that experience is traditionally gained, creating a knowledge transfer crisis.
Impact
This observation introduced a temporal dimension to the discussion, highlighting how current AI adoption decisions might create future expertise gaps. It forced the panelists to grapple with the long-term sustainability of human expertise development.
The university degree is a bundle. It’s a convenient bundle that as a society we chose to create. So you learn something, you get an accreditation and you have a rite of passage… Is it time to revisit whether all of these components need to fit together, given the economics and what AI can do to change the economics of delivery?
Speaker
Hugo Sarrazin
Reason
This comment deconstructs higher education into its component parts, challenging the assumption that learning, credentialing, and social development must be packaged together. It’s provocative because it questions fundamental educational structures.
Impact
This ‘unbundling’ concept provided a framework for understanding how AI might reshape not just how we learn, but how we structure educational institutions. It prompted reflection on what aspects of traditional education are truly essential versus merely convenient.
Overall assessment
These key comments transformed what could have been a straightforward discussion about AI in education into a profound examination of human agency, expertise, and societal structures. The polymath metaphor provided a powerful lens for understanding AI’s disruptive potential, while insights about false mastery and self-knowledge highlighted the psychological complexities of AI-assisted learning. The generational paradox comment forced consideration of long-term consequences, and the ‘unbundling’ concept challenged fundamental assumptions about educational institutions. Together, these comments elevated the discussion from technical implementation to philosophical implications, creating a rich dialogue about the future of human learning and expertise in an AI-dominated world.
Follow-up questions
How do you measure the ROI of learning effectively beyond completion rates and hours spent?
Speaker
Hugo Sarrazin
Explanation
This is a critical business challenge as organizations struggle to demonstrate the value and impact of their learning investments, moving beyond basic metrics to meaningful outcomes
How can we develop better teaching benchmarks for AI models similar to existing code and math benchmarks?
Speaker
Aidan Gomez
Explanation
Currently there’s a gap in measuring AI’s effectiveness as a teaching tool, which is essential as AI becomes more integrated into educational systems
How do we ensure continuity of expertise and critical thinking skills when senior professionals can be replaced by AI but juniors need experience to develop judgment?
Speaker
Audience member
Explanation
This addresses a potential skills gap where the experienced workers who can evaluate AI outputs may be displaced, leaving no one to train the next generation of critical thinkers
What research is needed to advance explainability in AI models for educational applications?
Speaker
Hugo Sarrazin
Explanation
Understanding how AI reaches conclusions is crucial for learning, as students need to understand reasoning processes, not just receive statistical outputs
Should the traditional university bundle of education, accreditation, and rite of passage be unbundled given AI’s impact on education economics?
Speaker
Hugo Sarrazin
Explanation
This questions fundamental assumptions about higher education structure and whether all components need to remain together in the AI era
How can we develop specialized, trusted AI models trained on verified expertise rather than general internet data?
Speaker
Hugo Sarrazin
Explanation
This addresses concerns about AI reliability and the need for authoritative sources in an era where traditional expertise repositories are being challenged
What is driving the shift toward reasoning models in AI and how can this reasoning be made more transparent to users?
Speaker
Aidan Gomez
Explanation
Understanding the evolution of AI capabilities and making the reasoning process visible is important for educational applications and building trust
Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.
Related event

World Economic Forum Annual Meeting 2026 at Davos
19 Jan 2026 08:00h - 23 Jan 2026 18:00h
Davos, Switzerland
