Driving Enterprise Impact Through Scalable AI Adoption
20 Feb 2026 12:00h - 13:00h
Summary
The town-hall convened to examine how the abundance of AI-generated knowledge creates new dilemmas for learners and educators [1][3]. Panelists argued that, while AI makes information instantly available, the real scarcity is now human attention and the ability to judge trustworthiness [40][43][46-47]. Hugo highlighted Herbert Simon’s “poverty of attention” and warned that large language models often provide answers without explaining their sources, eroding trust [40][46-47]. Aidan warned that easy access to surface-level answers fosters a false sense of deep mastery, making rigorous testing essential to verify what learners truly understand [48][50-53]. Debbie reported that the audience’s votes favored critical thinking and sustained attention as the most needed skills in this environment [65-68].
Hugo described Udemy’s evolution from a massive catalogue of 250,000 courses and 80 million learners to an AI-driven reskilling platform that can assess individuals and personalize feedback [73-78][91-107]. Cohere, explained Aidan, builds enterprise-grade LLMs that stay within a client’s security perimeter and helps organizations shift workers from performing tasks to orchestrating AI agents [110-118][119-121]. Both speakers agreed that AI can augment but not replace teachers, citing Bloom’s two-sigma finding that one-on-one coaching dramatically outperforms large classes and that AI could scale such personalized tutoring [149-157][184-188].
They also stressed the need for explainability, noting that future models must provide reasoning traces or retrieval-augmented citations so users can audit answers [346-353][374-382]. Hugo warned that reliance on black-box models could diminish human agency and ethical guardrails, urging societies to retain the ability to question and validate AI output [346-353][369-370]. Aidan added that while reasoning-enabled models are emerging, they remain brittle, so exposing their chain-of-thought is crucial for trust [374-376][386-390].
The panel concluded that education must adapt by emphasizing front-end skills such as asking the right questions and back-end skills like critical evaluation, while leveraging AI for personalization and scalable assessment [274-277]. Overall, the discussion underscored that AI will reshape knowledge delivery, but preserving critical thinking, explainability, and human oversight is essential for effective learning [65-68][311-313].
Keypoints
Major discussion points
– Attention, critical thinking and deep mastery are becoming scarce resources in an AI-driven world.
Hugo notes Herbert Simon’s “wealth of information, poverty of attention” and highlights attention and trust as key challenges [40-44][46-47]. Aidan warns that LLMs can give a false sense of deep mastery, making genuine understanding the most at-risk skill [48-53]. Debbie reports that the audience ultimately favored critical thinking and sustained attention as the most valuable traits [65-70].
– AI can personalize and scale learning, enabling rapid reskilling and adaptive education.
Hugo describes Udemy’s pivot to an AI platform that uses rapid assessment, role-play simulations and feedback loops to tailor learning to each individual [93-107]. He also cites the “Bloom two-sigma” research and the shift toward bite-size, in-the-flow learning for enterprises [124-141]. Aidan adds that Cohere’s enterprise LLMs focus on secure, on-premise deployment, giving businesses the tools to embed AI into their workforce [110-118].
– Rigorous testing and assessment are essential to preserve human judgment and avoid superficial competence.
Aidan stresses that testing without AI tools is the “gold standard” for measuring true understanding [48-53][321-334]. Hugo argues that human teachers remain indispensable as storytellers and mentors, and that AI-driven tutors must augment, not replace, this human element [156-166]. Both panelists agree that without strong assessment, learners can “fake” their way through education.
– A tension exists between for-profit ed-tech models and traditional universities, raising questions about the future of degrees and possible unbundling.
Debbie frames the panel as representing “for-profit educational technology” versus the “not-for-profit” university sector [13-15]. Hugo later calls the university degree a “convenient bundle” that may need to be re-examined in light of AI-enabled delivery [408-416] and discusses the need for more adaptable, skill-focused credentials [419-421]. Audience members ask directly about gaps between online platforms like Udemy and accredited colleges [403-406].
– Explainability, trust, and agency are major concerns when AI provides answers without transparent reasoning.
Hugo points out that most LLMs do not explain how an answer was derived, threatening trust [44-47]. He later calls for research on explainability and specialized, trusted models [340-368]. Aidan describes emerging “reasoning” models that generate internal monologues and Retrieval-Augmented Generation (RAG) to cite sources, aiming to improve auditability and user confidence [372-383][386-393].
Overall purpose / goal of the discussion
The town-hall was convened to surface and interrogate the “dilemmas around knowledge” that arise as AI makes information instantly accessible. Participants examined how AI reshapes the scarcity of attention, critical thinking, and mastery; explored ways AI can enhance personalized, scalable learning; debated the need for robust assessment and human oversight; and considered the shifting relationship between traditional universities and for-profit ed-tech providers. The ultimate aim was to identify challenges and opportunities for educators, businesses, and policymakers in an AI-infused knowledge ecosystem.
Tone of the discussion
The conversation begins with a formal, inquisitive tone as Debbie introduces the panel and the poll question. As the dialogue progresses, Hugo and Aidan adopt an optimistic, solution-oriented tone, highlighting AI’s potential for personalization and reskilling. Mid-session, the tone shifts to a more cautionary and reflective stance, emphasizing the risks of attention loss, superficial mastery, and loss of agency [40-47][48-53][340-368]. Toward the end, the tone becomes balanced, acknowledging both the transformative promise of AI and the need for rigorous testing, explainability, and thoughtful redesign of educational structures. Throughout, the discussion remains professional and collaborative, with occasional moments of urgency when addressing trust and ethical concerns.
Speakers
– Hugo Sarazen – President, Chairperson and Chief Executive Officer of Udemy; expertise in online learning platforms, corporate training, and AI-driven education. [S1]
– Debbie Prentice – Professor and Vice Chancellor of the University of Cambridge; expertise in higher-education leadership and the not-for-profit education sector. [S2]
– Audience – Various participants representing industry and academia (e.g., Anna Van Niels, Director of the Livium Trust; Nathaniel, founder of an education company in Australia; Pranjal Sharma, author and analyst; Kian, CEO of Workera). Roles/titles as noted. [S3][S4][S5]
– Aidan Gomez – Co-founder and Chief Executive Officer of Cohere, an enterprise AI company; expertise in large language models, AI product development, and enterprise AI deployment. [S6][S7][S8]
Additional speakers:
– None
The World Economic Forum town-hall opened with Professor Debbie Prentice, Vice-Chancellor of Cambridge, welcoming participants and framing the session as an exploration of “dilemmas around knowledge” that have persisted since the invention of schools and libraries but are now amplified by AI-driven instant access to information [1-4]. She introduced the panel: Aidan Gomez, co-founder and CEO of Cohere, an enterprise AI firm building large language models (LLMs), and Hugo Sarazen, President, Chairperson and CEO of Udemy, a global online-learning platform [5-10]. The moderator highlighted the diversity of perspectives, for-profit ed-tech versus not-for-profit university, and invited the audience to engage via the Slido app and the hashtag #WEF26 [12-22].
Poll question & live results
The first poll asked participants which resource is becoming scarcest in a world of instant AI answers, offering options: sustained human attention, independent judgment and critical thinking, deep understanding and mastery, motivation to learn, and trust in what we know and who to believe [22-28]. The live results showed critical thinking receiving the most votes, with sustained attention a close second [65-70].
Panel responses
– Hugo Sarazen argued that attention is the most pressing shortage, invoking Herbert Simon’s insight that “when you have a wealth of information, you have a poverty of attention” and warning that LLMs often provide answers without explaining their provenance, thereby undermining trust [40-44][46-47].
– Aidan Gomez countered that the greatest risk is a false sense of deep mastery: learners can obtain surface-level responses that feel comprehensive, so rigorous testing that removes the tool is essential to verify what the human actually knows [48-53][321-334].
– Debbie Prentice wanted to reject all five poll options, proposing learner self-knowledge instead: without cues about difficulty or interest, students cannot gauge their own understanding [54-61]. She then pointed to the audience vote, which favored critical thinking (and sustained attention) as the leading choice [65-70].
Udemy’s evolution (deep-dive)
Hugo described how Udemy has moved from a catalogue of 250,000 courses and 80 million learners to an AI-driven reskilling platform. By assessing each learner quickly, breaking courses into adaptive pathways, and providing real-time feedback loops, including role-play simulations (e.g., sales-pitch practice) and automated scoring rubrics, Udemy can keep users engaged longer than generic courses [73-78][91-107][199-207][144-146][215-217]. He also referenced Bloom’s two-sigma problem, noting that one-on-one tutoring yields a two-sigma improvement over classroom instruction but has been economically infeasible to scale, a gap AI can now begin to fill [149-157][184-188].
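The quick-assessment, adaptive-pathway, feedback-loop cycle Hugo describes can be pictured as a small control loop. The sketch below is a toy illustration, not Udemy’s system: the `Learner` class, the moving-average mastery update, and `next_module` are all assumptions made for the example.

```python
from dataclasses import dataclass, field

@dataclass
class Learner:
    # Estimated mastery per skill, from 0.0 (novice) to 1.0 (expert).
    mastery: dict = field(default_factory=dict)

def assess(learner: Learner, skill: str, correct: bool, rate: float = 0.3) -> None:
    # Nudge the mastery estimate toward 1 on a correct answer and toward 0
    # otherwise: a simple exponential moving average standing in for a real
    # assessment model.
    prior = learner.mastery.get(skill, 0.5)
    target = 1.0 if correct else 0.0
    learner.mastery[skill] = prior + rate * (target - prior)

def next_module(learner: Learner, modules: dict) -> str:
    # Adaptive pathway: route the learner to the module whose underlying
    # skill currently has the lowest mastery estimate.
    return min(modules, key=lambda name: learner.mastery.get(modules[name], 0.5))
```

Each answer updates the estimate and the pathway re-routes accordingly; a production system would replace the moving average with a proper psychometric model, but the loop has the same shape.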
Cohere’s approach (deep-dive)
Aidan explained that Cohere supplies enterprise-grade LLMs that run inside a client’s security perimeter, ensuring no data leaves the organization while enabling workers to shift from performing tasks to orchestrating AI agents [110-118][119-121]. He highlighted recent advances: an “internal monologue” or chain-of-thought reasoning that structures problem-solving before output, and Retrieval-Augmented Generation (RAG) that cites external sources (e.g., the Cambridge library) to improve auditability and user confidence [372-383][386-394]. Both panelists agreed that explainability is crucial; Hugo called for specialized, trusted models and research into transparent reasoning, while Aidan stressed exposing chain-of-thought and source citations as a technical route [340-353][374-383].
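The RAG pattern Aidan describes, grounding the model in retrieved passages that carry source tags so answers can be audited, can be sketched minimally. This is a toy illustration, not Cohere’s API: it ranks documents by naive term overlap where a real system would use dense embeddings, and the prompt format is invented for the example.

```python
def retrieve(query: str, corpus: dict, k: int = 2) -> list:
    # Rank documents by how many query terms they share; a crude stand-in
    # for an embedding-based retriever.
    terms = set(query.lower().split())
    ranked = sorted(corpus.items(),
                    key=lambda item: len(terms & set(item[1].lower().split())),
                    reverse=True)
    return ranked[:k]

def build_prompt(query: str, corpus: dict) -> str:
    # Prefix each retrieved passage with its source id so the model can cite
    # [doc_id] in its answer, making the output auditable.
    context = "\n".join(f"[{doc_id}] {text}" for doc_id, text in retrieve(query, corpus))
    return f"Answer using only these sources, citing them by id:\n{context}\n\nQ: {query}"
```

Because every passage in the prompt carries an id, a user can trace each cited claim back to its source, which is the auditability property the panel emphasized.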
Discussion on attention, personalization, assessment, and detection
The conversation returned to attention scarcity, with Hugo emphasizing AI-driven personalization (quick learner assessment, adaptive pathways, and instant feedback) as a way to mitigate the deficit [93-107][144-146][215-217]. Aidan reiterated that the “gold standard” remains testing without AI to gauge true retention, but he also recognized that proficiency with AI tools is itself a skill that should be evaluated with the tool in the loop [171-182][321-334]. He warned that current AI-text detectors are unreliable and described a technique of embedding subtle cues in model outputs to enable more robust detection; both panelists agreed that more reliable detection mechanisms are needed [321-330].
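The detection idea Aidan alludes to, embedding subtle statistical cues in model outputs, is broadly known as watermarking. The sketch below is a toy keyed “green list” scheme, an assumption for illustration rather than any vendor’s actual method: the generator would bias its sampling toward “green” tokens, and the detector measures the green-token fraction, which stays near one half for unwatermarked text.

```python
import hashlib

def is_green(token: str, key: str = "demo-key") -> bool:
    # Deterministically assign each token to a 'green' or 'red' list with a
    # keyed hash; only the key holder can reproduce the partition.
    digest = hashlib.sha256((key + token.lower()).encode()).digest()
    return digest[0] % 2 == 0

def green_fraction(text: str, key: str = "demo-key") -> float:
    # Detector side: text whose generator favored green tokens shows a
    # fraction noticeably above 0.5, while ordinary human text hovers near 0.5.
    tokens = text.split()
    if not tokens:
        return 0.0
    return sum(is_green(t, key) for t in tokens) / len(tokens)
```

A real scheme re-partitions the vocabulary at each generation step and applies a statistical test to the green count, but the keyed, deterministic structure is the same.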
An audience member raised a paradox in companies: senior professionals can judge AI output while junior staff cannot, creating concerns about future job security and underscoring the need to train the next generation in critical evaluation of AI [460-470].
Audience Q&A
– Motivation: Anna Van Niels asked how AI can sustain motivation without a human teacher; Hugo answered with AI-driven role-play and feedback loops that mimic gym-style repetition to keep learners engaged [194-207][215-217].
– Physical classrooms: Nathaniel from Australia queried AI’s role amid a social-media ban for under-16s; Aidan argued AI should be taught as a calculator-like tool with safeguards, while Hugo stressed teaching students to ask the right questions and develop critical judgment [217-250].
– Applied knowledge: Pranjal Sharma highlighted the gap between academic credentials and applied knowledge; Aidan noted AI can accelerate programme creation but that skill mapping must remain human-led [247-266][254-266].
– Degree bundle: Hugo described the university degree as a “convenient bundle” of credential, rite of passage, and research, suggesting AI-enabled delivery may prompt a re-evaluation or unbundling of these components [408-416]; Debbie defended the broader mission of universities (fostering critical thinking, deep mastery, and research) while acknowledging graduates may need additional AI-supported skill development for the workplace [425-426][267-272].
Closing remarks
The panel concluded that AI will irrevocably reshape knowledge delivery, but effective education in the AI era will require:
– preserving and cultivating human attention, critical thinking, and self-knowledge;
– deploying secure, enterprise-grade LLMs that can be personalised and audited;
– maintaining human teachers as mentors and storytellers; and
– establishing robust, dual-track assessment regimes that combine tool-free testing with AI-enhanced simulations.
Consensus highlights
– Attention and critical thinking are the most endangered cognitive resources.
– AI-driven personalization can help alleviate attention scarcity.
– Explainability and trust are non-negotiable for widespread adoption.
– Human educators remain indispensable, with AI serving as an augmentative tool.
These points reflect the transcript’s emphasis on trust, explainability, and human agency as central pillars for responsible AI integration in education [40-44][65-70][340-353][184-188][169-176].
Good afternoon, everyone, and thank you for joining this town hall discussion where we will be talking about a topic that university and education leaders are all buzzing about, namely dilemmas around knowledge. This has been a topic for us since schools were first invented, libraries were first invented, and it’s still with us today. It’s extremely relevant in an age in which AI is making knowledge available broadly to everybody all the time. But that doesn’t mean there aren’t still dilemmas around knowledge, and we’re going to probe them today. I’m Professor Debbie Prentice, and I’m the Vice Chancellor of the University of Cambridge. I’m very pleased to introduce you to our panelists for this session.
So we have Aidan Gomez, who is the co-founder and chief executive officer of Cohere, an enterprise AI company developing advanced language models for use by business. And we also welcome Hugo Sarazen, who is president, chairperson, and chief executive officer of Udemy, which provides a wide range of business and leadership development courses, including AI courses, to businesses and organizations around the world in fields such as financial services, higher education, government, manufacturing, and technology. We have some fascinating questions to discuss this afternoon around knowledge, misinformation, AI, attention spans, and even the nature of expertise. And we’re going to bring the audience in early and often, so I hope that you’ll all participate with us. We, as panelists, come from very different perspectives.
Aidan and Hugo run very successful businesses selling a product. They are from the for-profit educational technology sector, and I’m from the not-for-profit sector. So there are different pressures, different opportunities, different challenges that we face in this space. Before we get started with our panel discussion, I’d like to remind the online audience that if you are sharing with us through your social channels, you should use the hashtag #WEF26. And whether you’re joining online today or here in person, it’s great to see so many of you here. Thank you so much for coming. Please feel free to get involved in the session by reacting to the questions we discuss in our conversation and also by submitting questions to panelists via the Slido app.
Okay? Okay, so our first question is, in a world of instant answers and AI assistance, what is becoming the scarcest resource? The answers are from a list of options. Is it sustained human attention, independent judgment and critical thinking, deep understanding and mastery, motivation to learn in the first place, or trust in what we know and who to believe? And actually I said or. That could be and. You can choose as many of these as you want. Okay? So you can see on the screen, actually, as people are responding via the Slido app, but I want to ask our panelists, what would you say? So you can see the answers on the screen.
What would you say, Hugo?
Well, I think it’s a complicated question, and I think there’s a lot of all of the above. If you take a historical perspective, knowledge was scarce. That was a source of power. Countries fought for it. And we also had experts that built knowledge over time, but very few polymaths. Those few were very, very important. Now today, you have LLMs that can learn everything, and they can learn across different domains, and they can become the polymath. So every time we say there’s a new data center, new infrastructure being added, we’re adding millions and millions of polymaths. And that becomes a democratization of that knowledge. The problem is, and there’s an amazing quote from Herbert Simon: when you have a wealth of information, you have a poverty of attention.
And I think that’s what’s happening for a lot of learners, and that’s why traditional methods need to change. And we’re going to come on and talk, I’m sure, about how learning needs to evolve: what the process is, what the role of traditional institutions is in changing, what corporations need to do, and what individuals need to do. So I think attention is one big component. The second is, when you go to an LLM and ask a question, it will give you an answer. It will feel very comfortable with that answer. But it doesn’t explain. Explainability in AI is a whole field, a whole domain, and most of these LLMs don’t give you that.
So if you have a society that begins to rely on products that give you an answer but don’t tell you where that answer came from, how do you learn, and what do you have in terms of trust? So I think the trust piece is equally important. I’ll stop at that; we can go well further, but...
Yeah, I was looking at the poll up there, and for whatever reason the first one that came to me was deep mastery, which seems to be the most unpopular choice. So I think, you know, when you exist in a world where it’s so fast and easy to get answers to whatever question you might have, or to get a very surface-level answer to even a complex question, like how does quantum mechanics work, it’ll give you a four-paragraph response. But that’s not deep understanding of the subject matter. And so I think LLMs, chatbots, they can fool you into thinking that you understand something when you don’t, and I view that as a core risk as we integrate these LLMs into an education environment: this false sense of mastery or understanding.
We can discuss the different solutions to that. I think that testing is essential to it: the idea that you need to take away the tool and see what the human alone understands and has retained. To assess depth, you have to take away those tools. I think that is, from my perspective, what’s most at risk.
That’s interesting. My answer is a variant on yours. I, of course, wanted to reject all five. But I think it’s because of where I come from, coming from the university sector. I wanted to say self-knowledge for the learner. It’s part of what you’re saying: you don’t know if you’ve mastered it, and you don’t know if you’re interested in it. You don’t know if you get it. So much of what you learn comes from what is difficult and what is compelling. So for those cues to no longer be useful cues for self-understanding means, how will you even know? But that’s my answer anyway.
So we can see what the, whoops, it went away. I think critical thinking was the one that won out at the end. It looked like critical thinking was actually the audience’s preference. We can keep coming back to this, but I want to use this as a jumping-off point. Oh, there we go. Okay, yeah, critical thinking and then sustained attention. They were neck and neck for most of the time, yeah, and then trust and then deep mastery, right. That’s interesting. So I want to talk a little bit about each of what you do. So we can start with you, Hugo. Tell us about Udemy.
So Udemy is a 15-year-old company that, at the time, did a pretty cool thing around introducing online learning. It was a great innovation to change accessibility and the cost of reaching millions and millions of people, and it created a creator economy around that. So we now have 250,000 courses and 80 million learners on a regular basis. We serve 17,000 large enterprises. We have 85,000 instructors that come to this marketplace to offer their wares. They’re very deeply committed. They know stuff and they want to share it with the world. About 40% of our revenues are in the U.S.; the rest is around the world. So we’re in tons of languages, 46-plus. And the funny story: I’ve only been in the role for less than a year.
When I came to my first town hall, and the people who may be listening online were working on this, I came in and I said, we’re going to exit online learning. That is a wonderful innovation. It did a bunch of great things, but it doesn’t solve the problem of today. And with AI, we can do so many different things. So I want to make a hard pivot of the business toward becoming an AI platform to reskill the workforce of the future.
And we can talk about that. And I don’t want to take too much time, but there are a lot of ways you can use AI to do some of the things you were suggesting: to help build mastery, to do assessment using AI, to use AI role play to immerse people. And it also does the thing that I think is so, so important. In traditional online learning, and actually traditional learning, you’re an instructor and you teach to the average, right? You create your curriculum and you think you’re going to hit most of the people. You can’t cater to the super fast. You can’t cater to the super slow. Same with online learning.
And then different people have different starting points, and we don’t have an easy way to accommodate that. Now with AI, you can do a quick assessment. You can break apart the class. You can have feedback loops and reinforce that in a very, very powerful way. And I think that’s one of the things that’s going to emerge from using AI to re-skill the workforce. It’s going to build on that previous generation of online learning to do something pretty remarkable and quite different moving forward.
Thank you. Aidan?
Yes, so Cohere builds large language models. So we’re one of the developers of this core piece of technology that powers things like ChatGPT and all these different applications. We’re focused purely on the enterprise side of the house, and so we work with businesses to put those models to work inside the organization. We give them access to internal data and systems that the humans have access to. And then we work with our customers to teach the workforce to shift their role from being the ones individually doing the work to managing a team of these models or agents to carry out that work. Our big differentiator is on the security side. So there’s no data exiting our customer’s perimeter.
Instead, we send all of our models and software to them, and they keep it self-contained. Yeah.
So you have certain customers who will only subscribe to you, right?
Yeah. Certainly critical industries, financial services, telco, healthcare, and then, of course, government applications as well. Anything that’s a national security concern, and arguably education is within that remit, that’s a place that we do extremely well.
That’s interesting. So, Hugo, what can we learn from the arc of progress from MOOCs and… online education to now AI -driven?
I think a few things. The first one is, you know, if you look at the traditional learning processes and methods that we had, there was a void. And that’s why online learning took off and why there’s a whole industry. It addressed a bunch of problems around getting to specific skills, getting to certification, and then helping organizations reskill. So that was a very, very powerful thing. What is now becoming much more of a priority, and in the last six months I spent an enormous amount of time on this, I spoke to 400 CHROs and heads of learning and development in large enterprises. The pattern that I saw is they had an enormous proliferation of tools and things that were bought during the pandemic.
Very few could explain the ROI. How do you measure the ROI of learning? It’s a really good question. And everybody kind of defaulted to: did they take the class? Did they complete the class? Hours of learning. And as a business leader, that’s not particularly helpful. And it gets even worse. When they get a certification in Google Cloud or AWS or cyber-something, to know that you certified yourself two years ago, as a business leader I want to know: are you current? Are you relevant today? So I think the arc now is moving in the enterprise to an ability to do in-the-flow-of-work learning, do it at bite size, do it in an adaptive way, and then we can come back to what adaptive means, and with an ROI, an ability to measure what skills people are deploying in real time.
So you’re now beginning to create a workforce management tool that is powered by an operating learning system.
So, Aidan, you said that you were not as worried about sustained human attention as you were about some of the others. How does Cohere solve the attention problem?
Well, I mean, I don’t know if Cohere solves the attention problem. I think it’s definitely a concern. There are lots of pressures on our attention span. I think social media and short-form content are driving a lot of that. I’m certainly on the receiving end of that: you know, after 30 seconds, because of TikTok, my attention span ends and I need to talk about something else. And also just the way that we do business now, in these short 30-minute meetings where you completely swap context. So I think those are difficult challenges, not related to AI, that are still applying pressure on human attention span. But it has a pretty strong consequence for how people learn and how students can learn when they’re constantly being distracted, when they need to sit with material over time.
I think AI can perhaps assist in resolving that by its ability to personalize the experience to the individual and engage them more effectively. And so if you have a generic education offering which, you know, bores some part of the population and excites the other, you’re underserving that population that gets bored. But if we can have a very targeted, scalable approach for each individual, giving them something that’s engaging, exciting, if they are auditory learners or visual learners, we can tailor it to them and hopefully keep their attention better than we otherwise would. So AI might be part of the solution as opposed to the source of the problem.
Hugo, does your vision of AI comport with that?
It completely matches. And I think, you know, there is a well-known piece of research from the 80s from a University of Chicago professor: the Bloom two-sigma problem. He did research where he looked at the ability to learn with one-on-one coaching. It was two sigma higher than the classroom. But the economics of doing that were not there. That’s why we have these big classrooms, and that’s why there are bigger classrooms for first years. It doesn’t deliver the same learning experience. Now, to Aidan’s point, with AI, you can personalize the experience. You can adapt it, and you can create feedback loops that a professor cannot today. You’ve got 40 students. You cannot easily pick up who’s falling behind.
Some teachers are amazing, and they have the ability to do incredible things. But now you have the ability to have that feedback. So I think we’re going to see a lot of AI expert tutors and coaches that will have context and that will have been trained on a body of knowledge that is hopefully trusted, hopefully accurate, and will help in the way that you like to learn. So if you’re an auditory learner, we’re going to give it to you that way. And if you’re visual, we’ll give it to you that way. I think that’s a really exciting and promising world we’re entering from that point of view. So we’re going to go to questions from the audience in just a second.
So start thinking about your question. I’m just going to ask one more question of our panelists myself, which is: where do humans fit in in this brave new world of AI-based education? I think all of us who are educators know that at some point we need human intervention in the process, even with the most fabulous technology. Where do you think they need to come in?
I think they’re the customer. So they’re the ones that we’re serving with this technology. And so we need to be able to serve them. We need to create the best possible product for them. If we just do surface-level education that’s very confirmatory, oh yeah, you’ve got it, great, you know, a bit sycophantic, then they won’t be effective in the real world when they actually enter the job market. And so there’s a burden on us as product creators to create the most effective product to teach people skills and give them knowledge. And I think that AI is actually an incredibly effective tool towards that. But I do still believe that it’s a tool. It’s like a calculator.
It’s something that you can lean on to give you faster answers, more thorough answers. But we still need to ground ourselves in the human without the tool. And so testing becomes, it’s always been important, of course, but I think it becomes absolutely critical now because you can fake your way through an education system much more easily. And so having very strict testing regimens is going to be essential.
I have a variation on this. I do think the teachers, the instructors, are partly the customers, but I do think they need to be in the loop. They’re amazing storytellers. They have a way... If I ask anybody in this room who was your favorite teacher in high school, and I pause for 5 seconds, there’s somebody in your mind right now. What was special about that person? You cannot replicate that, but you can augment it. You can make that person now be able to teach you maybe something that they were not teaching. My favorite teacher in high school was a physics teacher. I loved the way he presented, I loved the way he engaged, and it was so motivating. My chemistry teacher was not that. But now I can augment with AI and have the voice, and not just the voice but the way he thought, the way he presented information, applied to a different topic.
And I think that gets pretty exciting as well. You may finally understand chemistry. I may finally understand chemistry. I stayed away from chemistry because of that. But physics I love.
Okay, I wanna open up to questions from the audience. So I will call on you the old fashioned way. If you raise your hand. Oh, you have to, sorry, you have to speak into my ear.
Anna Van Niels, director of the Livium Trust. I guess learning is a bit like working out: it’s got to hurt to be effective. How do you think AI-enabled tech of various kinds can help with that motivation issue? You’ve talked about the teacher being the one that absolutely motivates, but in a lot of the systems we’re talking about, in the workplace, et cetera, you’re not gonna have that human in the loop. So can we do things with AI and tech that could prompt that?
Yeah, I’m gonna offer a few suggestions. And this is not, like, future; this exists today. So you can do AI role-playing, and you can do AI role-playing in a way that makes you go through the learning process. And I’m going to use a business example. So if you’re a new salesperson and you have a new product that you need to sell, you can load up the specs of that product into an AI role play and practice selling to a person. And there will be a rubric against which we’re going to score you. And we’re going to discover whether or not you are competent at selling this product that you’re responsible for. So that’s a business example.
I can do the same thing in a call center. You know, we have one of the largest call center outsourcers. There are 20,000 call center agents they need to onboard every month. That is incredibly complicated. But now you can load, you know, the most common error causes, the most common tickets, the product specs. And instead of taking three weeks to onboard somebody through the process of learning, of experimenting, you can do a role play and accelerate that learning by doing a lot of practice. So it’s simulation. So that’s one powerful example. I think the other one is AI can give you feedback and monitor the progress you’re making, in a way that we can bring you back to that point in the gym where you’re struggling with whatever exercise you’re doing.
We’re going to make you do that exercise more and more and get that repetition in a way that reinforces the gap that you have.
Hi, I’m Nathaniel. I run an education company in Australia. Now, as a region, Australia has an interesting relationship with technology. As many of you may know, we’ve just recently had a social media ban for young people under 16. And in a similar vein, we don’t really have a good consensus around the role of AI. So my question is, what do you believe the role is for AI in physical classrooms? And what would you say to people who might be on the side of banning versus not banning it?
Yeah, I think I’m interested to hear your answer. But from my side, I think it’s a tool like a calculator. I think also a duty of the education system now is to teach people how to use this AI, how to engage with it, how to most effectively use that tool. And so it certainly should exist as part of the classroom and as part of schooling. But like I said, it can become a crutch and it can be used to cheat. And so we have to come up with ways to ensure that students aren’t misusing it or using it in the ways that are unproductive to their learning. I’m excited to hear your answer.
I’ve got a two-part answer. The first one is: in any business process or any endeavor, you have the problem statement, asking the right question; you have the solving; and then you have the quality assurance in the back. It’s a feedback loop that you go through, a circle, all the time. And education is no different. What AI does well is that middle part. It doesn’t do a whole lot in the front end and the back end. So what we need to teach young students and adults is how to ask the right question. The critical thinking, I love that it came out at the very top. Super, super important. But, as you said, the calculator is a calculator.
The fact that I can’t do the multiplication table all the way to 100 is not that relevant for my day-to-day job. But the fact that I can be critical in my thinking, I can summarize, I can contextualize, I think those are the skills you want. Second part: for those who are curious, I have no relationship, but I am just fascinated, there’s a school in the U.S. called Alpha School. And they’ve got a really powerful model. They are using AI. They are encouraging students to use AI. And they are demonstrating, I’m going to get all the stats wrong, but they get two times the learning in half the time, or three times the learning in half the time. And then in the afternoon the kids go learn how to be a civic leader, or, you know, a leader in all sorts of other contexts, instead of spending all their time where, you know, historically you would have learned various dates. It’s not that relevant to know the dates of specific things, but it is relevant to understand the context of those events, and I think that’s where we can focus a lot of the effort.
Thank you. Terrific topic to be discussed at Davos. I’m Pranjal Sharma, I’m from India, I’m an author and analyst. We’re looking at a lot of the micro pieces, but I’d like to focus on the macro. We have a situation today where we’re all skilled up but with nowhere to go, right? Last year, I think the ILO says 7 million fewer jobs were created, not to mention the existing jobs that disappeared. So there is a cry from the industry. Firstly, they don’t know who to hire and why to hire and what to hire, and they don’t even know what to test those credentials on. The second part is there’s a huge disconnect between what they want and what academia is offering.
Plus, the concept of a degree shouldn’t exist, and even continuous learning in terms of applied knowledge is missing. So I think the core phrase to be used here is applied knowledge. How do you create information for a person to be able to earn a livelihood, irrespective of white, gray, blue collar? And I think that’s the gap of applied knowledge delivered in the right way to the right people at the right time.
From a labor market perspective, I think there’s a good case to be concerned about the impact of AI and what might happen, and reskilling is going to be an essential component of that. Thank you. The mismatch in the market between what education institutions are offering and what the market is demanding, I think that is a major issue that we need to figure out how to solve. I think AI can be a part of speeding up delivery of new programs and courses and keeping up with changes in demand much faster than we have in the past. The process of scaling up educational infrastructure to meet a shift in market demand has been historically extremely slow and laborious.
But with AI, we’re able to create programs much faster. The models are infinitely scalable. They’re always awake, 24/7. They never get annoyed at the student. So we have these incredibly compelling tutors to deploy at scale against the problem of teaching the population the skills that we need. But I think the issue might be in identifying the skills that we need, and that’s still going to have to come first from us: the humans, the business leaders, the policymakers. So that might be the core constraint. We need a direction to be set against to start building the solution.
I think too, I mean, what I would say is I think that, you know, universities aren’t teaching to what businesses need necessarily. We’re teaching things that we believe are fundamentally important, and I would defend that. I mean, we’re teaching critical thinking, and we’re teaching deep mastery, and we’re teaching them to people at a critical moment in their lives, most of them, where they actually really need to have a go and learn these skills. They may need additional skills when they go out into the workplace, and that, as far as I’m concerned, is what the kinds of products that you’re talking about are for. Good, thank you. Let’s go back to critical thinking, because now in the university students widely use AI assistants and get instant answers.
In that case, how can we teach them to increase their capability for critical thinking, to apply factual checks, logical checks, scientific checks, and ethical checks to the instant answers they get from models?
It’s a foregone conclusion. The AI will outdo the human. So where we can be competitively differentiated versus the AI is in the front and in the back end. So we need to adapt the curriculum to make sure that people are asking the right questions with the right context. And it is critical thinking. It is critical thinking, but we need to expand, and we need to have a better way to evaluate the level of critical thinking these students have when they hit the workforce, so that you can evaluate. And then the same on assessing. I mean, AI is marvelous right now. It generates code like there’s no tomorrow, but it’s mostly garbage. We have bottlenecks in quality assurance in the back end.
So how do you kind of create the new tools, and how do you teach people to have, you know, the critical thinking to see: is this using the right library? Is it using the right pattern? Is it using the right data? I think that’s one of the core changes that academic institutions, organizations like mine, and individuals need to make. As you do your self-development, you need to really lean into this ability to ask the right question. Because in the middle part, you don’t have a competitive advantage. You will be outgunned. And the thing that is even more crazy: historically, people did PhDs. I have a PhD. I went, like, super deep on one little topic, and I got buried somewhere in the sinkhole.
And it took my entire body of effort to get there. And to be a polymath is very hard. To be able to understand... I know nothing about chemistry. I know nothing about biology, psychology. My dad did that, so maybe something rubbed off on me. But AI is a polymath by design. It has the data set across all of that. So the middle part is a foregone conclusion, folks. You need to get good at the front and the back end.
Yeah, I was going to say another thing, which is teaching is a skill in the same way coding is a skill or doing math is a skill. And so it’s a core capability that we as model developers need to invest in. And it’s not something that is easily benchmarked. And it’s not something that is accurately tracked at the moment. But I think the more this rolls out, I mean, it’s already in the hands of every student on the face of the planet. It’s going to become imperative that we’re able to track the performance of models in teaching tasks to ensure that they’re actually effective and improve that over time. That’s just so like a technical level that is not done presently.
I don’t know of a teaching benchmark, but I can point to probably 30 code ones, 50 math ones, you know, biology, et cetera.
All right, it happens from time to time. I think the psychology is rubbing off well. When you say AI is a polymath by design, it’s a brilliant thought, you know; you articulated it very well. Which also means that, by definition, humans cannot compete, so we basically have to end the session and say that doom is nigh.
Well, I don’t think so. I mean, I’m more optimistic. So the polymath thing is real. I mean, if you take, again, a historical perspective: he who had Leonardo da Vinci on his team had an advantage to build a war machine or a better court or whatever. Now there’s going to be a similar debate: whoever assembles these polymath AI thingies has an advantage. That is a foregone conclusion; that’s why there are all these battles for a… But I think we cannot, as the human race, give up that ability to influence. I think we made a point, I think you did at the very beginning: these models typically are not designed, though some of them can be designed, to explain their reasoning.
So if as a society we begin to rely on this thing that is super facile, that gives us an answer, and we don’t have the questioning, and we don’t kind of do the checking and the validating, we lose agency on important decisions. And I think that is one of the things that we need to focus on deeply as a society. It also leads to the guardrails, the ethical things, and all that other stuff. We need to go there, because in the middle, it’s going to come up with answers that will be amazing in biology, and will solve things in biology, because I got trained in English language, I don’t know. But it’s going to be pretty wild, and we cannot lose agency around this polymath.
I mean, every data center is going to have… hundreds of millions of polymaths in there.
Yeah, I just want to share a thought. I believe there’s a type of paradox within companies about this critical thinking. Let me say it this way: we senior professionals, we know how to judge what the AI is doing. So I asked one day for the AI to model whatever, and I could judge it; my juniors were not able to judge, because they don’t have the experience. So, to some extent, I could fire them, because I don’t need them anymore because of these AI technologies. But maybe there will be a gap. So at some point in time, AI can enhance a lot of what I do. But if you don’t train, let’s say, the new generation, the juniors, who in the future will be able to do this critical thinking on what AI is doing? I don’t have the answers. Obviously companies need to take efficiency, and we need to do our best to reduce cost, whatever. But I think it’s something we as a society will have to think a lot about.
That’s fair, thank you. Here, we’ve got one here. You wanted... you were up, right? Yeah, I didn’t just call... call you.
Hi, thank you for your insights. I’m Kian, I’m the CEO of an AI company called Workera. I really like what you said about testing the human. And I think in the world of testing right now there are almost two camps: one that says you can test them with the calculator, and one that says we can test them without the calculator. And there are also, overlaid on top of it, the risks of proctoring and understanding who’s cheating, who’s not cheating, and what you can tell about it. So how are you thinking about that idea of testing with or without the calculator?
Yeah, the cheating question: can you tell whether a piece of text was written by AI? It’s really tough. A lot of the detectors out there are total scams. They’ll say 100% AI even when it’s not used at all. So they’re extremely overconfident, very high error rate on both sides, false positive, false negative. But the answer to that question is: you can. Like, you can insert into language models subtle cues to indicate for the reader, this was written by an AI. Instead of sampling from natural language, the language that I’m drawing from right now, you can sample from a slightly shifted distribution and use certain words much more than any human would use.
And then as soon as those words appear, you have a good piece of evidence that this was written by a language model. And so us language modeling companies do that. We shift the distribution of the language model so that when its text gets read, we have some ability to say, you know, I can assign a likelihood that that was generated by my model. So you can detect that to some extent, but many of the tools are scams. And so I think we need to make better tools and put them in the hands of educators more readily. Thank you. On testing with and without the calculator, I have a pretty strong focus on without the calculator.
I think everything needs to be ripped away, and you, standing alone as yourself, need to prove your knowledge. That is like the gold standard test of what you have learned retained. But of course, like I was saying earlier, using the language model is a skill itself, and we should have space to test that, in which case, of course, you’re going to need the LLM in the loop.
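The distribution-shifting detection described above, favoring a hidden subset of "green" words during generation so a detector can later test for their overrepresentation, can be sketched in a few lines. This is a toy illustration only, in the spirit of published LLM watermarking schemes, not any vendor's actual method; the hash-based green list and the 0.5 split are assumed parameters.

```python
import hashlib
import math

GAMMA = 0.5  # fraction of the vocabulary the shifted distribution favors ("green" words)

def is_green(word: str) -> bool:
    # Deterministically assign each word to the green list via a hash, so the
    # generator (which over-samples green words) and the detector agree on the
    # split without sharing an explicit word list.
    return hashlib.sha256(word.lower().encode()).digest()[0] < 256 * GAMMA

def detect(text: str) -> float:
    """Return a z-score for how far the green-word rate exceeds chance.

    Under the null hypothesis (ordinary human text), the number of green
    words is roughly Binomial(n, GAMMA); a large positive z-score is
    evidence the text was sampled from the shifted distribution.
    """
    words = text.split()
    n = len(words)
    if n == 0:
        return 0.0
    greens = sum(is_green(w) for w in words)
    return (greens - GAMMA * n) / math.sqrt(n * GAMMA * (1 - GAMMA))
```

A large positive z-score is statistical evidence of the watermark; real schemes apply the bias at the logit level during sampling and work over model tokens rather than whitespace-split words, which is why short or heavily edited texts remain hard to attribute.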
Let me seize the chair’s prerogative here to ask, because I’m curious what you would both say to this question. What happens in this brave new world of polymaths and not showing your work and not explaining your answer to expertise or authority? So, you know, we have at Cambridge, you know, library after library of big books that tell you the truth, or that was always the… the idea, right? You would go look it up somewhere. What do you do in a world in which looking it up is no longer… there’s not a dictionary, there’s not a truth?
I’ll start. I think most technologies go back and forth. There’s a pendulum. We’re in the pendulum where bigger is better. We’re throwing everything under the sun; every Reddit quote is now part of training every large language model. And that is good. It’s going to give you an average answer for an average problem. Now, over time, I think we’re going to come back and say you do need specialized, trusted models, and we need to have confidence that we did use the right source. And I think there will be a space for that. At least I want to hope that that will be the case: that we’re going to come back, and we’re going to have these specialized models that will not only be RAG, but are going to be defined from scratch with the right intent.
And they don’t need to be a zillion trillion function points or whatever. I mean, they just need to be trained on the expertise. And then you do need to trust it. It’s going to be incredibly important. I think we also need a lot of research on explainability. And Yoshua Bengio at the University of Montreal, one of the people who got the Turing Award, has been very vocal about this. We need to go back and explain a lot more. These are statistical models. This is all this is. These are huge matrices, with weights assigned to different things. So this is not a piece of software where you say if this, then that.
This is just statistics. So it, on average, gives good answers. But it depends on the data. And you need to come back and put a bunch of tools in place to build explainability into the model. And there are ways to do it. It’s not yet super advanced. And I think we need to invest in that so that we do have the confidence, build the trust. And I do think it’s part of the learning, the learning question you have. Because if the models are black boxes, you lose the ability to learn from their deduction process, which doesn’t exist; it’s just a statistical model, there’s no deduction. So anyway, those are my two ideas.
Yeah, over the course of the last year, there was a paradigm shift in the type of model that gets used now. We don’t just use input-output direct-response models like you were alluding to. Every model now is a reasoning model. And so before it actually responds, it has an internal monologue where it thinks through the problem, tries to reason about it, and then delivers a response. It is primitive. It’s a year old, but it’s getting much better. And so I think exposing that to the user and showing these chains of thought, this reasoning, is an important solution. And then, like you say, RAG, which is retrieval-augmented generation, where the model isn’t just drawing on its own knowledge but is actually making direct and specific reference to external knowledge.
So we can plug it into the Cambridge library. I went to Oxford, so the Bodleian. And it can cite directly back from those sources. So both reasoning and RAG provide some degree of auditability. You can have a little bit more confidence in the response, because you can check its work.
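The retrieval-augmented generation described here can be sketched minimally: rank a corpus of sources against the query, then prepend the winners, tagged by source ID, to the prompt so the model's answer can cite the passages it drew from. This toy version uses naive word overlap in place of the embedding search production systems typically use, and every name in it is illustrative, not any particular product's API.

```python
def retrieve(query: str, corpus: dict[str, str], k: int = 1) -> list[tuple[str, str]]:
    """Rank sources by naive word overlap with the query (a stand-in for
    embedding-based semantic search)."""
    query_words = set(query.lower().split())
    ranked = sorted(
        corpus.items(),
        key=lambda item: len(query_words & set(item[1].lower().split())),
        reverse=True,
    )
    return ranked[:k]

def build_prompt(query: str, corpus: dict[str, str]) -> str:
    """Prepend the retrieved passages, tagged by source ID, so the model's
    answer can point back to the exact source it used."""
    lines = [f"[{source_id}] {passage}" for source_id, passage in retrieve(query, corpus)]
    lines.append(f"Question: {query} (cite source IDs in your answer)")
    return "\n".join(lines)
```

The auditability the panel describes comes from this structure: because the prompt carries explicit source IDs, a cited answer can be checked against the quoted passage rather than taken on faith from the model's parametric memory.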
Just out of curiosity, what’s driving that? What’s driving the need for reasoning?
Because the models were brittle. They would very confidently answer with the wrong solution. And it turns out humans don’t put the same amount of energy into answering every question. But that was the prior expectation on these models. You would ask them, what’s 1 plus 1? And it would immediately respond and put the same amount of effort into answering that question. And you would ask it to prove some unsolved Erdős problem or something, and it would put the same amount of effort as 1 plus 1 into that. That was obvious. You know, there are some problems that we should spend days, weeks, months, years, decades putting effort in to solve, and there are others that can be responded to instantly.
It’s just a better, more robust intelligence.
That’s fascinating. We have time for one more question. Anything pressing in there?
Thank you. Yeah, I’m very interested to ask a question just circling back to the beginning, where we said we have, like, a public-sector university as well as a tech platform in the same room. The question I have on my mind is that right now, in the U.S. especially, education cost is so astronomically high and prohibitive. Lots of people are saying, the narrative goes, there’s no point going to university anymore. And I would see, in that world, a lot of attention turning to online education. I think we’re all very familiar with Udemy. What are the gaps between an online education and an accredited college or an elite college?
Has there ever been customer or market demand for online education to move towards a model or imitate a traditional college experience? Has that ever surfaced as a need? And just comparing the gaps there.
I’m going to say something maybe controversial, but it’s fun. The university degree is a bundle. It’s a convenient bundle that, as a society, we chose to create. So you learn something, you get an accreditation, you get a degree, you have a rite of passage. You know, these kids are at a moment where they leave home, they go. And that bundle is convenient, and we bundle that with research, because the same people could now pass on their knowledge to others. It is a convenient bundle as a society. It has worked well for, you know, a long time. Oxford and Cambridge are examples of long-standing institutions that had a version of this bundle. It changes over time.
Is it time to revisit whether all of these components need to fit together, because of the economics and what AI can do to change the economics of delivery? Maybe. I think the second…
Think it quickly.
Yeah, quickly. And the second piece is just the adaptability. If you have a labor market that moves so fast, you’re now going to begin to put more weight on addressing a specific need for a specific skill. So I think that is a reality, in addition to that potential unbundling of that whole experience.
You have a good word for the university. I’m actually interested to hear it from the university’s perspective. Then I’ll just end by saying I think that they are currently serving very different functions. Right now, university does so much more than provide knowledge that it still is worth its weight in gold, and it is gold. But we’ll see how the space develops. With that, I’m getting all kinds of signals from the producers, so we’ve got to end it. But thank you very much. Thank you for your questions, and thank you to our panelists. Thank you.
The discussion reveals strong convergence on four main fronts: (1) attention scarcity and the role of AI personalization; (2) the centrality of critical thinking; (3) the necessity for explainable, trustworthy AI; (4) the enduring, augmentable role of human educators. Additionally, both speakers see AI as a catalyst for faster, individualized learning and for enterprise‑level workforce development.
High consensus across speakers on the challenges posed by AI (attention, trust, critical thinking) and on AI‑enabled solutions (personalization, adaptive tutoring, explainability). This broad agreement suggests a shared understanding that future education policies must balance AI integration with human oversight, prioritize critical thinking, and invest in transparent, learner‑centric AI systems.
The panel reveals substantive disagreements on which learning resource is most endangered (attention vs deep mastery vs critical thinking), how to secure trust in AI outputs (explainability research vs technical reasoning/citation), the future of the university degree bundle, and the proper role of AI in assessment. While all participants agree AI will reshape education, they diverge on priorities and implementation pathways.
Moderate to high disagreement: the speakers share a common recognition of AI’s transformative potential but differ sharply on strategic focus areas, indicating that consensus on policy and practice will require careful negotiation across academia, industry, and education providers.
The discussion’s trajectory was shaped by a handful of pivotal remarks that reframed the problem space. Hugo’s attention‑poverty observation and his reference to the Bloom two‑sigma study introduced human‑centric constraints and a concrete AI solution, steering the dialogue toward personalization. Aidan’s warnings about false mastery and his exposition of reasoning‑enabled, retrieval‑augmented models supplied both a problem statement and a technical answer, deepening the analysis of trust and explainability. Hugo’s challenge to the traditional degree bundle and his caution about losing agency to black‑box AI broadened the conversation from pedagogy to societal structures. Together, these comments sparked new sub‑topics (assessment design, AI‑driven tutoring, credential unbundling, explainability) and prompted participants and the audience to reconsider assumptions, thereby elevating the discussion from a surface‑level inventory of scarce resources to a nuanced exploration of how AI reshapes learning, evaluation, and the very purpose of higher education.
Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.