Driving Enterprise Impact Through Scalable AI Adoption

20 Feb 2026 12:00h - 13:00h


Session at a glance

Summary

This town hall discussion at the World Economic Forum explored the evolving dilemmas around knowledge and education in the age of artificial intelligence. The panel featured Professor Debbie Prentice, Vice Chancellor of the University of Cambridge, alongside business leaders Aidan Gomez, CEO of Cohere (an enterprise AI company), and Hugo Sarazen, CEO of Udemy (an online learning platform). The conversation began with a poll asking what has become the scarcest resource in our era of instant AI answers, with critical thinking and sustained human attention emerging as top concerns among audience members.


The panelists discussed how AI is fundamentally changing the educational landscape by democratizing access to knowledge and creating “millions of polymaths” through large language models. However, they identified significant challenges, including the risk of false mastery where students believe they understand concepts without truly grasping them, and the difficulty of maintaining deep learning when answers come so easily. Gomez emphasized the importance of testing students without AI tools to assess genuine understanding, while Sarazen highlighted AI’s potential to personalize learning experiences and provide one-on-one tutoring at scale.


The discussion revealed a consensus that humans must focus on developing skills that AI cannot replicate: asking the right questions, critical thinking, and quality assessment of AI-generated content. The panelists argued that while AI excels at the “middle part” of problem-solving, humans remain essential for framing problems and evaluating solutions. They also addressed concerns about the future of traditional education, with debates about whether university degrees should be “unbundled” and how institutions must adapt to rapidly changing skill demands in the workforce. The conversation concluded with recognition that this technological transformation requires ongoing dialogue about preserving human agency and expertise while harnessing AI’s educational potential.


Key points

Major Discussion Points:

The scarcest resource in an AI-driven world: The panel explored what becomes most valuable when AI provides instant answers – with audience polling showing critical thinking and sustained attention as top concerns, while panelists emphasized deep mastery, self-knowledge, and the ability to ask the right questions.


The evolution from online learning to AI-powered education: Discussion of how education has progressed from MOOCs and traditional online courses to personalized, adaptive AI tutoring systems that can provide one-on-one coaching at scale, addressing the historical “Bloom two-sigma problem” of individualized instruction.


The role of humans in AI-enhanced education: Debate over where human intervention remains essential, with consensus that humans excel at the “front end” (asking critical questions, providing context) and “back end” (quality assurance, validation) while AI handles the middle processing, and the continued importance of human teachers as storytellers and motivators.


Testing and assessment challenges: Extensive discussion of how to evaluate learning when AI tools are readily available, including the debate over testing “with or without the calculator,” the difficulty of detecting AI-generated work, and the need for new methods to assess genuine human understanding and skill retention.


The future of expertise and institutional authority: Examination of how traditional sources of knowledge and authority (universities, libraries, expert credentials) are being challenged by AI “polymaths” that can access vast knowledge across domains, and the implications for trust, explainability, and the bundled university experience.


Overall Purpose:

The discussion aimed to explore the fundamental challenges and opportunities that AI presents to education and knowledge systems, bringing together perspectives from traditional academia (Cambridge University) and commercial AI/education technology companies to examine how learning, teaching, and credentialing might evolve in an AI-dominated landscape.


Overall Tone:

The tone was thoughtful and exploratory rather than alarmist, with participants acknowledging both the transformative potential and genuine risks of AI in education. While there were moments of concern about losing human agency and the challenges of maintaining educational quality, the overall discussion remained optimistic about AI’s potential to enhance rather than replace human learning. The conversation was collaborative, with panelists building on each other’s ideas rather than presenting opposing viewpoints, and maintained an academic yet accessible tone appropriate for the World Economic Forum setting.


Speakers

Speakers from the provided list:


Debbie Prentice – Professor and Vice Chancellor of the University of Cambridge, representing the not-for-profit education sector


Aidan Gomez – Co-founder and Chief Executive Officer of Cohere, an enterprise AI company developing advanced language models for business use


Hugo Sarazen – President and Chief Executive Officer of Udemy, which provides business and leadership development courses (including AI courses) to organizations worldwide across various sectors including financial services, higher education, government, manufacturing, and technology


Audience – Multiple audience members who asked questions during the town hall discussion


Additional speakers:


Anna Van Niels – Director of the Livium Trust


Nathaniel – Runs an education company in Australia


Pranjal Sharma – Author and analyst from India


Kian – CEO of an AI company called Workera


Full session report

This panel discussion brought together three distinct perspectives on the evolving challenges of knowledge and education in the artificial intelligence era. Professor Debbie Prentice, Vice Chancellor of the University of Cambridge, served as moderator while also representing traditional academic institutions. Aidan Gomez, CEO of Cohere (which builds large language models for enterprise with a focus on security), and Hugo Sarazen, CEO of Udemy (an online learning platform with 250,000 courses, 80 million learners, 17,000 large enterprise clients, and 85,000 instructors), offered insights from the commercial education technology sector. Their conversation revealed both the transformative potential and fundamental risks that AI presents to how we learn, teach, and validate knowledge.


The Scarcity Paradox: What Becomes Rare When Information is Abundant

The discussion opened with an audience poll asking what becomes the scarcest resource when AI provides instant answers to virtually any query. The results showed critical thinking and sustained human attention as the primary concerns among participants. However, the panellists offered more nuanced perspectives on this fundamental question.


Hugo Sarazen framed the challenge through Herbert Simon’s observation that “when you have a wealth of information, you have a poverty of attention.” He noted that AI is essentially creating “millions and millions of polymaths” in every data centre, democratising access to knowledge while creating new forms of scarcity around human cognitive resources.


Aidan Gomez identified a particularly subtle risk: the false sense of mastery that AI can create. He argued that large language models can “fool you into thinking that you understand something when you don’t,” representing a core psychological risk as these systems become integrated into educational environments. This insight is particularly significant coming from someone who builds these systems, acknowledging their fundamental limitation in creating genuine understanding versus surface-level familiarity.


Debbie Prentice rejected all five options in the poll, instead emphasising the importance of self-knowledge for learners. She argued that much of what we learn comes from recognising what is difficult and compelling, but when AI makes everything appear easy, learners lose these crucial cues for self-understanding. This represents a meta-cognitive challenge that goes beyond simple knowledge acquisition to the fundamental ability to assess one’s own competence and interests.


The Transformation of Online Learning Through AI

Hugo Sarazen described a dramatic shift in his approach to education technology, revealing that he was “exiting online learning” when he joined Udemy and pivoting entirely toward AI-powered solutions. Drawing from conversations with CHROs and heads of learning and development, he identified critical shortcomings in traditional online education. While these platforms successfully addressed accessibility and cost barriers, they failed to solve fundamental problems around personalisation and return on investment measurement.


The current paradigm shift, according to Sarazen, involves moving toward AI platforms that can provide “in the flow of work learning” with adaptive, bite-sized content and measurable skill deployment. He referenced the “Bloom two-sigma problem” – research showing that one-on-one coaching produces dramatically better learning outcomes than classroom instruction, but at prohibitive economic cost. AI now makes this level of personalisation economically viable at scale.


Sarazen provided concrete examples of AI applications, including role-playing systems that allow sales professionals to practice with virtual customers or call centre agents to train on common scenarios. These applications demonstrate AI’s potential to provide safe, scalable environments for skill development through simulation and immediate feedback.


Aidan Gomez supported this vision while emphasising AI’s ability to tailor educational experiences to individual learning styles and preferences. Rather than generic offerings, AI can provide targeted, scalable approaches that maintain engagement through personalisation, representing a fundamental shift from teaching to the average student toward truly individualised instruction.


The Strategic Framework: Where Humans Maintain Competitive Advantage

Hugo Sarazen presented a business process framework for understanding human competitive advantage in an AI-dominated world, dividing cognitive work into three components: the front-end (problem formulation and asking the right questions), the middle (solution generation and processing), and the back-end (quality assurance and validation). His central thesis was that “the middle part, you don’t have a competitive advantage. You will be outgunned” by AI systems that are “polymaths by design.”


This framework provided a strategic roadmap for human adaptation, suggesting that individuals and institutions should focus their development efforts on critical thinking, question formulation, and quality assessment rather than competing with AI in information processing and solution generation. The implications for curriculum development are profound, suggesting a need to move away from information retention toward skills in problem identification and solution evaluation.


The Challenge of Testing and Assessment in an AI World

A significant portion of the discussion addressed the fundamental challenge of educational assessment when AI can complete many traditional academic tasks. Aidan Gomez emphasised the continued importance of testing human capabilities without AI assistance, arguing that while AI should be treated as a tool like a calculator, educational assessment must include rigorous evaluation of what humans can accomplish independently. This “gold standard test of what you have learned and retained” becomes increasingly critical as students can more easily navigate through educational systems with AI assistance.


However, the detection of AI-generated work proves particularly difficult. Gomez noted that many detection tools are “total scams” with high error rates, though he mentioned that technical solutions exist, such as embedding subtle linguistic markers in AI-generated text that would be invisible to users but detectable by institutions.
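The "subtle linguistic markers" Gomez alluded to resemble published token-level watermarking schemes (such as green-list watermarking), though he did not name a specific method. As a purely illustrative sketch, assuming a toy integer vocabulary and a hash-seeded "green" subset of tokens at each step, a detector might look like:

```python
import hashlib
import math

VOCAB = list(range(1000))  # toy vocabulary of token ids (illustrative)
GREEN_FRACTION = 0.5       # fraction of the vocab marked "green" at each step

def green_list(prev_token: int) -> set:
    """Derive a pseudo-random 'green' subset of the vocabulary, seeded by
    the previous token. A watermarking generator would bias its sampling
    toward this subset."""
    green = set()
    for tok in VOCAB:
        digest = hashlib.sha256(f"{prev_token}:{tok}".encode()).digest()
        if digest[0] < 256 * GREEN_FRACTION:
            green.add(tok)
    return green

def detect(tokens: list) -> float:
    """Return a z-score for how over-represented green tokens are,
    relative to unwatermarked text (expected fraction GREEN_FRACTION)."""
    n = len(tokens) - 1
    hits = sum(1 for prev, tok in zip(tokens, tokens[1:])
               if tok in green_list(prev))
    expected = n * GREEN_FRACTION
    std = math.sqrt(n * GREEN_FRACTION * (1 - GREEN_FRACTION))
    return (hits - expected) / std
```

Text generated with the green-list bias scores a high z-value while ordinary text scores near zero: the marker is invisible to a reader but detectable by anyone holding the hashing key, which is the property Gomez described.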


The conversation also highlighted a concerning generational gap: senior professionals can evaluate AI outputs based on their experience, while junior employees may lack the expertise to assess AI-generated work critically. This creates a potential training gap where the next generation of leaders may not develop the critical thinking skills necessary to manage AI systems effectively.


Trust, Authority, and the Problem of Explainability

A recurring theme was the erosion of traditional sources of authority and the challenge of maintaining trust in an AI-mediated knowledge environment. Hugo Sarazen highlighted that most large language models don’t explain where their answers come from, creating a fundamental problem for learning and decision-making. He described current models as essentially “huge matrices” with “weights assigned to different things” – statistical models rather than logical reasoning systems.


Aidan Gomez offered a more optimistic perspective, describing recent evolution toward “reasoning models” that engage in internal monologue before responding. However, he characterised these as still “primitive” and “a year old,” acknowledging significant limitations. He also mentioned techniques like retrieval-augmented generation (RAG) that allow models to cite specific sources, providing some degree of auditability.
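Retrieval-augmented generation, which Gomez cited as one route to auditability, grounds the model's answer in retrieved passages that carry source identifiers. A minimal sketch, using naive word-overlap scoring as a stand-in for the embedding search a production system would use (all names here are illustrative, not Cohere's implementation):

```python
def retrieve(query: str, docs: dict, k: int = 2) -> list:
    """Rank documents by word overlap with the query; a real system would
    use vector embeddings, but the citation mechanics are the same."""
    q = set(query.lower().split())
    scored = sorted(docs.items(),
                    key=lambda kv: len(q & set(kv[1].lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(query: str, docs: dict, k: int = 2) -> str:
    """Assemble a prompt whose context passages carry source ids, so the
    model can cite [id] and its answer can be audited against originals."""
    context = "\n".join(f"[{doc_id}] {text}"
                        for doc_id, text in retrieve(query, docs, k))
    return (f"Answer using only the sources below, citing them as [id].\n"
            f"{context}\nQuestion: {query}")
```

Because every passage in the context is labelled, a citation like "[syllabus]" in the model's output can be traced back to a specific verified document, which is the degree of auditability described above.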


Both speakers acknowledged that substantial investment in explainability research remains necessary, particularly for educational applications where understanding the reasoning process is often as important as the final answer.


The Future of Universities and Educational Institutions

The discussion touched on fundamental questions about the future of traditional educational institutions. Hugo Sarazen offered what he acknowledged as a “maybe controversial” analysis of higher education as a “convenient bundle” that combines learning, accreditation, rites of passage, and research functions. He questioned whether all these components need to remain packaged together given changing economics and AI capabilities.


Debbie Prentice defended the continued value of universities, arguing that they serve “so much more than provide knowledge.” She emphasised that universities teach fundamentally important skills like critical thinking at a crucial developmental moment in students’ lives, functions that may not be easily replicated by AI-powered alternatives. She noted the different pressures between for-profit and not-for-profit educational sectors in adapting to these changes.


The tension between these perspectives reflects broader questions about educational credentialing in an AI world. When AI can provide instant access to information and sophisticated analysis, the value proposition of traditional degrees becomes less clear, yet the social, developmental, and critical thinking functions of higher education may become more rather than less important.


Practical Implementation and Emerging Models

The panellists discussed various practical applications and emerging educational models. Hugo Sarazen mentioned the Alpha School as a “fascinating” example of new educational approaches, though he didn’t elaborate extensively on its specific features.


The conversation also addressed questions about AI in physical classrooms and drew parallels to Australia’s recent social media ban for under-16s, exploring how society might regulate AI access for educational purposes.


Audience members raised concerns about creating “applied knowledge delivered in the right way to the right people at the right time” for employability across all job categories, reflecting broader concerns about technological unemployment and the need for continuous reskilling as AI capabilities expand.


Unresolved Tensions and Future Directions

The discussion concluded with several unresolved tensions that reflect broader societal challenges in adapting to AI. The question of whether to test students with or without AI tools remains contentious, with valid arguments on both sides. Similarly, the appropriate balance between personalised AI tutoring and human instruction continues to evolve.


The conversation highlighted the need for continued research and development in several areas: better benchmarks for evaluating AI teaching effectiveness, improved explainability in AI systems, and more sophisticated methods for measuring learning outcomes. The development of specialised, trusted AI models trained on verified expertise rather than general internet data represents another important direction.


Perhaps most fundamentally, the discussion raised questions about maintaining human agency and critical thinking capabilities as AI systems become more sophisticated and ubiquitous. The risk that society might lose the ability to question and validate AI outputs represents a challenge that extends far beyond education to democratic governance and human autonomy.


The panellists agreed that rather than viewing AI as a replacement for human teachers, the future likely involves an augmentation model where AI enhances human capabilities in storytelling, motivation, and personalised instruction. However, significant work remains in developing the frameworks, policies, and practices necessary to realise this vision while preserving the essential human elements of education and learning.


Session transcript

Debbie Prentice

Good afternoon, everyone, and thank you for joining this town hall discussion where we will be talking about a topic that university and education leaders are all buzzing about, which is namely dilemmas around knowledge. This has been a topic for us since schools were first invented, libraries were first invented, and it’s still with us today. It’s extremely relevant today in an age in which AI is changing, making knowledge available broadly to everybody all the time. But it doesn’t mean that there aren’t still dilemmas around knowledge, and we’re going to probe these today. I’m Professor Debbie Prentice, and I’m the Vice Chancellor of the University of Cambridge. I’m very pleased to introduce you to our panelists for this session.

So we have Aidan Gomez, who is the co-founder and chief executive officer of Cohere, an enterprise AI company developing advanced language models for use by business. And we also welcome Hugo Sarazen, who is president and chief executive officer of Udemy, which provides a wide range of business and leadership development courses, including AI courses, to businesses and organizations around the world in fields such as financial services, higher education, government, manufacturing, and technology. We have some fascinating questions to discuss this afternoon around knowledge, misinformation, AI, attention spans, and even the nature of expertise. And we're going to bring the audience in early and often, so I hope that you'll all participate with us. We, as panelists, come from very different perspectives.

Aidan and Hugo run very successful businesses selling a product. They are from the for-profit educational technology sector, and I'm from the not-for-profit sector. So there are different pressures, different opportunities, different challenges that we face in this space. Before we get started with our panel discussion, I'd like to remind the online audience that if you are sharing with us through your social channels, you should use the hashtag #WEF26. And whether you're joining online today or here in person (and it's great to see so many of you here, thank you so much for coming), please feel free to get involved in the session by reacting to the questions we discuss in our conversation and also by submitting questions to panelists via the Slido app.

Okay? Okay, so our first question is, in a world of instant answers and AI assistance, what is becoming the scarcest resource? Okay, the answers are from a list of options. Is it sustained human attention, independent judgment and critical thinking, deep understanding and mastery, motivation to learn in the first place, or trust in what we know and who to believe? And actually I said or. That could be and. You can choose as many of these as you want. Okay? So you can see on the screen, actually, as people are responding via the Slido app, but I want to ask our panelists, what would you say? So you can see the answers on the screen.

What would you say, Hugo?

Hugo Sarazen

Well, I think it’s a complicated question, and I think there’s a lot of all of the above. If you take a historical perspective, knowledge was scarce. That was a source of power. Our countries fought for that. And we also had experts that built knowledge over time, but very few polymath. Very few. Those ones that were were very, very, very important. Now today, you have LLMs that can learn everything, and they can learn across different domains, and they can become the polymath. So every data center, every time we say there’s a new infrastructure that’s being added, we’re adding millions and millions, millions of polymath. And that becomes a democratization. of that knowledge. The problem is, and there’s an amazing quote from Herbert Simon, when you have a wealth of information, you have a poverty of attention.

And I think that’s what’s happening for a lot of learners, and that’s why traditional methods need to change. And we’re going to come up and talk, I’m sure, about how learning needs to evolve, what the process, what’s the role of traditional institution in changing, what’s the role corporation need to, and what individual needs to do. So I think attention is one big component. The second is a lot of, when you go to LLM and AI and you ask for a question, it will give you an answer. It will feel very comfortable with that answer. It doesn’t explain. Explainability in AI is a whole field, a whole domain, and most of these LLMs don’t give you that.

So if you have a society that begins to rely on products that give you an answer but don't tell you where that answer came from, how do you learn, and what do you have in terms of trust? So I think the trust piece is also equally important. So I'll stop at that; we can go well further.

Aidan Gomez

Yeah, I was looking at the poll up there, and for whatever reason the first one that came to me was deep mastery, which seems to be the most unpopular choice. So I think, you know, when you exist in a world where it's so fast and easy to get answers to whatever question you might have, or to get a very surface-level answer to even a complex question, like how does quantum mechanics work, it'll give you a four-paragraph response. But that's not deep understanding of the subject matter. And so I think LLMs, chatbots, they can fool you into thinking that you understand something when you don't, and I view that as a core risk as we integrate these LLMs into an education environment: this false sense of mastery or understanding.

We can discuss the different solutions to that. I think that testing is essential to it. The idea that you need to take away the tool and see what the human alone understands and has retained. To assess depth, you have to take away those tools. I think that is, from my perspective, what's most at risk.

Debbie Prentice

That’s interesting. My answer is a variant on yours. I, of course, wanted to reject all five. But I think it’s because of where I come from, coming from the university sector. I wanted to say self -knowledge for the learner. It’s part of what you’re saying. You don’t know if you’re mastered and you don’t know if you’re interested in it. You don’t know if you get it. It comes to you. So much of what you learn, so much of what you learn, what you learn comes from what is difficult and what is compelling. So for those cues to no longer be actually useful cues for self -understanding means how will you even know, but that’s my answer anyway.

So we can see what the... whoops, it went away. I think critical thinking was the one that won out at the end. It looked like critical thinking was actually what the audience preferred. We can keep coming back to this, but I want to use this as a jumping... oh, there we go. Okay, yeah, critical thinking and then sustained attention. They were neck and neck for most of the time, yeah, and then trust and then deep mastery, right. That's interesting. So I want to talk a little bit about each of what you do. So we can start with you, Hugo. Tell us about Udemy.

Hugo Sarazen

So Udemy is a 15-year-old company that, at the time, did a pretty cool thing around introducing online learning. It was a great innovation to change accessibility and the cost of reaching out to millions and millions of people, and it created a creator economy around that. So we now have 250,000 courses, 80 million learners on a regular basis. We serve 17,000 large enterprises. We have 85,000 instructors that kind of come to this marketplace to offer their wares. They're very deeply committed. They know stuff and they want to share it with the world. And about 40% of our revenues are in the U.S. The rest is around the world.

So we’re in tons of languages, 46 plus. And the funny story, I’ve only been in the world for less than a year. When I came in my first on -haul and the people who may be listening online who were working on this, they were like, oh, I’m going to do this. I’m going to do this. I’m going to do this. I’m going to do this. I’m going to do this. I’m going to do this. I’m going to do this. town hall, I came in and I said, we’re going to exit online learning. That is a wonderful innovation. It did a bunch of great things, but it doesn’t solve the problem of today. And with AI, we can do so many different things.

So I want to make a hard pivot of the business toward becoming an AI platform to reskill the workforce of the future. And we can talk about that. And I don't want to take too much time, but there's a lot of ways you can use AI to do some of the things you were suggesting: to kind of help build the mastery, how to do assessment using AI, how to use AI role play to immerse people. And it also does the thing that I think is so, so important. Traditional online learning, and actually traditional learning: you're an instructor and you teach to the average, right? You create your curriculum and you think you're going to hit, like, most of the people.

You can’t get for the super fast. You can’t get for the super slow. You’re on online learning. And then different people have different starting points, and we don’t have an easy way to accommodate that. Now with AI, you can do a quick assessment. You can break apart the class. You can have feedback loop and reinforce that in a very, very powerful way. And I think that’s one of the things that’s going to emerge of using AI to kind of re -skill the workforce. It’s going to build on that previous generation of online learning to do something pretty remarkable and quite different moving forward.

Debbie Prentice

Thank you. Aidan?

Aidan Gomez

Yes, so Cohere builds large language models. So we're one of the developers of this core piece of technology that powers things like ChatGPT and all these different applications. We're focused purely on the enterprise side of the house, and so we work with businesses to put those models to work inside the organization. We give them access to internal data and systems that the humans have access to. And then we teach, or we work with our customer to teach, the workforce to shift their role from being the ones individually doing the work to managing a team of these models or agents to carry out that work. Our big differentiator is on the security side. So there's no data exiting our customer's perimeter.

Instead, we send all of our models and software to them, and they keep it self-contained. Yeah.

Debbie Prentice

So you have certain customers who will only subscribe to you, right?

Aidan Gomez

Yeah. Certainly critical industries, financial services, telco, healthcare, and then, of course, government applications as well. Anything that’s a national security concern, and arguably education is within that remit, that’s a place that we do extremely well.

Debbie Prentice

That’s interesting. So, Hugo, what can we learn from the arc of progress from MOOCs and… online education to now AI -driven?

Hugo Sarazen

I think a few things. The first one is, you know, if you look at the traditional learning processes and methods that we had, there was a void. And that's why online learning took off, and that's why there's a whole industry. And it addressed a bunch of problems around, you know, getting to skills, specific skills, and also getting to certification and then helping organizations reskill. So that was a very, very powerful thing. What is now becoming a lot more of a priority, and in the last six months I spent an enormous amount of time on this: I spoke to 400 CHROs and heads of learning and development in large enterprises. So the pattern that I saw is they had an enormous proliferation of tools and things that were bought during the pandemic.

During the COVID era, very few could explain the ROI. How do you measure the ROI of learning? It's a really good question. And everybody kind of defaulted to: did they take the class? Did they complete the class? Hours of learning. And as a business leader, that's not particularly helpful. And it gets even worse. When they get certification in Google Cloud or AWS or cyber-something, to know that you certified yourself two years ago... I'm a business leader. I want to know, are you current? Are you relevant today? So I think the arc now is moving in the enterprise to an ability to do in-the-flow-of-work learning, do it at bite size, do it in an adaptive way (and we can come back to what adaptive means), and with an ROI: an ability to measure what skills people are deploying in real time.

So you’re now beginning to create a workforce management tool that is powered by an operating learning system.

Debbie Prentice

So, Aidan, you said that you were not as worried about sustained human attention as you were about some of the others. How does Cohere solve the attention problem?

Aidan Gomez

Well, I mean, I don't know if Cohere solves the attention problem. I think it's definitely a concern. There are lots of pressures on our attention span. I think social media, short-form content, is driving a lot of that. I'm certainly on the receiving end of that: you know, after 30 seconds, because of TikTok, my attention span ends and I need to talk about something else. And also just the way that we do business now, in these short 30-minute meetings where you completely swap context. So I think those are difficult challenges, not related to AI, that are still applying pressure on the human attention span. But it has a pretty strong consequence on how people learn, and how students can learn when they're constantly being distracted, when they need to sit with material over time.

I think AI can perhaps assist in resolving that by its ability to personalize the experience to the individual and engage them more effectively. And so if you have a generic education offering, which, you know, bores some part of the population and excites the other, you're missing, you're underserving that population that gets bored. But if we can have a very targeted, scalable approach for each individual, giving them something that's engaging, exciting, if they are auditory learners or visual learners, we can tailor it to them and hopefully keep their attention better than we otherwise would. So AI might be part of the solution as opposed to the source of the problem.

Debbie Prentice

Hugo, does your vision of AI comport with that?

Hugo Sarazen

It completely matches. And I think, you know, there is a well-known piece of research from the 80s by a University of Chicago professor, Benjamin Bloom. It’s the Bloom two-sigma problem. They did research looking at the ability to learn with one-on-one coaching: it was two sigma higher than the classroom. But the economics of doing that were not there. That’s why we have these big classrooms, and that’s why there are bigger classrooms for first years. It doesn’t deliver the same learning experience. Now, to Aidan’s point, with AI, you can personalize the experience. You can adapt it, and you can create feedback loops that a professor cannot today. You’ve got 40 students.

You cannot easily pick up who’s not following. Some teachers are amazing, and they have the ability to do incredible things. But now you have the ability to have that feedback. So I think we’re going to see a lot of AI expert tutors and coaches that will have context and that will have been trained on a body of knowledge that is hopefully trusted, hopefully accurate, and will help in the way that you like to learn. So if you’re an auditory learner, we’re going to give it to you that way. And if you’re a visual learner, we’ll give it to you that way. I think that’s a really exciting and promising world we’re entering from that point of view.

Debbie Prentice

So we’re going to go to questions from the audience in just a second, so start thinking about your question. I’m just going to ask one more question of our panelists myself, which is: where do humans fit in, in this brave new world of AI-based education? I think all of us who are educators know that at some point we need human intervention in the process, even with the most fabulous technology. Where do you think they need to come in?

Aidan Gomez

I think they’re the customer. So they’re the ones that we’re serving with this technology. And so we need to be able to serve them. We need to create the best possible product for them. If we just do surface-level education that’s very confirmatory, oh yeah, you’ve got it, great, you know, a bit sycophantic, then they won’t be effective in the real world when they actually enter the job market. And so there’s a burden on us as product creators to create the most effective product to teach people skills and give them knowledge. And I think that AI is actually an incredibly effective tool towards that. But I do still believe that it’s a tool. It’s like a calculator.

It’s something that you can lean on to give you faster, more thorough answers. But we still need to ground ourselves in the human without the tool. And so testing, which has always been important, of course, becomes absolutely critical now, because you can fake your way through an education system much more easily. And so having very strict testing regimens is going to be essential.

Hugo Sarazen

I have a variation on this. I do think the teachers, the instructors, are partly the customers, but I do think they need to be in the loop. They’re amazing storytellers. They have a way: if I ask anybody in this room, who was your favorite teacher in high school, and I pause for five seconds, there’s somebody in your mind right now. What was special about that person? You cannot replicate that, but you can augment it. You can make that person now be able to teach you something that they were not teaching. My favorite teacher in high school was a physics teacher. I loved the way he presented, I loved the way he engaged, and it was so motivating. My chemistry teacher was not that. But now I can augment with AI and have the voice, not just the voice but the way he thought, the way he presented the information, applied to a different topic.

And I think that gets pretty exciting as well. You may finally understand chemistry. I may finally understand chemistry. I stayed away from chemistry because of that. But physics I love.

Debbie Prentice

Okay, I want to open it up to questions from the audience. So I will call on you the old-fashioned way, if you raise your hand. Oh, you have to, sorry, you have to speak into my ear.

Audience

Anna Van Niels, director of the Livium Trust. I guess learning is a bit like working out: it’s got to hurt to be effective. How do you think AI-enabled tech of various kinds can help with that motivation issue? You’ve talked about the teacher being the one that motivates, but in a lot of the systems we’re talking about in the workplace, et cetera, you’re not going to have that human in the loop. So can we do things with AI and tech that could prompt that?

Hugo Sarazen

Yeah, I’m going to offer a few suggestions. And this is not the future; this exists today. So you can do AI role-playing in a way that makes you go through the learning process. And I’m going to use a business example. If you’re a new salesperson and you have a new product that you need to sell, you can load the specs of that product into an AI role play and practice selling to a person. And there will be a rubric against which we’re going to score you, and we’re going to discover whether or not you are competent at selling this product that you’re responsible for. So that’s a business example.

I can do the same thing in a call center. You know, we have one of the largest call center outsourcers: there are 20,000 call center agents they need to onboard every month. That is incredibly complicated. But now you can load up the most common error causes, the most common tickets, the product specs, and instead of taking three weeks to onboard somebody, you can accelerate that learning through experimenting and doing a lot of practice. It’s simulation. So that’s one powerful example. I think the other one is that AI can give you feedback and monitor the progress you’re making in a way that brings you back to that point in the gym where you’re struggling with whatever exercise you’re doing.

We’re going to make you do that exercise more and more, and get that repetition in, in a way that closes the gap that you have.
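The rubric-scoring idea described above can be sketched in toy form. This is an illustrative assumption, not Udemy’s implementation: the rubric criteria and the keyword matching are made up for the example, and a real system would use an LLM judge on the role-play transcript rather than cue-phrase spotting.

```python
# Toy sketch: score a sales role-play transcript against a rubric.
# Criteria and cue phrases are hypothetical; a production system would
# have an LLM evaluate the transcript against each rubric item.

RUBRIC = {
    "mentions_price": ["price", "cost", "pricing"],
    "handles_objection": ["understand", "concern", "however"],
    "closes": ["next step", "follow up", "schedule"],
}

def score_roleplay(transcript: str) -> dict[str, bool]:
    """Mark each rubric criterion as met if any of its cue phrases appears."""
    text = transcript.lower()
    return {criterion: any(cue in text for cue in cues)
            for criterion, cues in RUBRIC.items()}

def is_competent(transcript: str, passing: int = 2) -> bool:
    """Competent if at least `passing` rubric criteria are met."""
    return sum(score_roleplay(transcript).values()) >= passing
```

The point of the sketch is the feedback loop: each practice run produces a per-criterion score, so the learner can be sent back to exactly the skill they are missing.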

Audience

Hi, I’m Nathaniel. I run an education company in Australia. Now, as a region, Australia has an interesting relationship with technology. As many of you may know, we’ve just recently had a social media ban for young people under 16. And in a similar vein, we don’t really have a good consensus around the role of AI. So my question is, what do you believe the role is for AI in physical classrooms? And what would you say to people who might be on the side of banning versus not banning it?

Aidan Gomez

Yeah, I think I’m interested to hear your answer. But from my side, I think it’s a tool like a calculator. I think also a duty of the education system now is to teach people how to use this AI, how to engage with it, how to most effectively use that tool. And so it certainly should exist as part of the classroom and as part of schooling. But like I said, it can become a crutch and it can be used to cheat. And so we have to come up with ways to ensure that students aren’t misusing it or using it in the ways that are unproductive to their learning. I’m excited to hear your answer.

Hugo Sarazen

I’ve got a two-part answer. The first one: in any business process or any endeavor, you have the problem statement, asking the right question; you have the solving; and then you have the quality assurance at the back. It’s a feedback loop, a circle that you go through all the time. And education is no different. What AI does well is that middle part. It doesn’t do a whole lot on the front end and the back end. So what we need to teach young students and adults is how to ask the right question. The critical thinking, I love that it came out at the very top, is super, super important. But, as you said, the calculator is a calculator.

The fact that I can’t do multiplication tables all the way to 100 is not that relevant for my day-to-day job. But the fact that I can be critical in my thinking, that I can summarize, that I can contextualize, I think those are the skills you want. Second part, for those who are curious: I have no relationship with them, I am just fascinated. There’s a school in the U.S. called Alpha School, and they’ve got a really powerful model. They are using AI, they are encouraging students to use AI, and they are demonstrating, I’m going to get all the stats wrong, but they get two to three times the learning in half the time. And then in the afternoon the kids go learn how to be a civic leader, or a leader in all sorts of other contexts, instead of spending all their time the way you historically would have, learning various dates. It’s not that relevant to know the dates of specific things, but it is relevant to understand the context of those events, and I think that’s where we can focus a lot of the effort.

Audience

Thank you. Terrific topic to be discussed at Davos. I’m Pranjal Sharma, I’m from India, I’m an author and analyst. We’re looking at a lot of the micro pieces, but I’d like to focus on the macro. We have a situation today where we’re all skilled up but with nowhere to go, right? Last year, I think the ILO says, 7 million fewer jobs were created, not to mention the existing jobs that disappeared. So there is a cry from the industry. Firstly, they don’t know who to hire, and why to hire, and what to hire for, and they don’t even know what credentials to test. The second part is there’s a huge disconnect between what they want and what academia is offering.

Plus, the concept of a degree shouldn’t exist, and even continuous learning in terms of applied knowledge is missing. So I think the core phrase to be used here is applied knowledge. How do you create information for a person to be able to earn a livelihood, irrespective of white, gray, blue collar? And I think that’s the gap of applied knowledge delivered in the right way to the right people at the right time.

Aidan Gomez

From a labor market perspective, I think there’s a good case to be concerned about the impact of AI and what might happen, and reskilling is going to be an essential component of that. The mismatch in the market between what education institutions are offering and what the market is demanding, I think that is a major issue that we need to figure out how to solve. I think AI can be a part of speeding up the delivery of new programs and courses and keeping up with changes in demand much faster than we have in the past. The process of scaling up educational infrastructure to meet a shift in market demand has historically been extremely slow and laborious.

But with AI, we’re able to create programs much faster. The models are infinitely scalable. They’re always awake, 24/7. They never get annoyed at the student. So we have these incredibly compelling tutors to deploy at scale against the problem of teaching the population the skills that we need. But I think the issue might be in identifying the skills that we need, and that’s still going to have to come first, from us: the humans, the business leaders, the policymakers. So that might be the core constraint. We need a direction to be set before we can start building the solution.

Debbie Prentice

I think, too, what I would say is that universities aren’t necessarily teaching to what businesses need. We’re teaching things that we believe are fundamentally important, and I would defend that. We’re teaching critical thinking, we’re teaching deep mastery, and we’re teaching them to people at a critical moment in their lives, most of them, when they actually really need to have a go at learning these skills. They may need additional skills when they go out into the workplace, and that, as far as I’m concerned, is what the kinds of products that you’re talking about are for.

Audience

Good, thank you. Let’s go back to the critical thinking, because now in the university the students widely use AI assistants and get instant answers.

In that case, how can we teach them to increase their capability for critical thinking: to apply factual checks, logical checks, scientific checks, ethical checks to the instant answer they got from the models?

Hugo Sarazen

The middle part is a foregone conclusion: the AI will outdo the human. So where we can be competitively differentiated versus the AI is at the front end and the back end. So we need to adapt the curriculum to make sure that people are asking the right questions with the right context. And it is critical thinking. It is critical thinking, but we need to expand it, and we need a better way to evaluate the level of critical thinking these students have when they hit the workforce, so that you can evaluate. And then the same on assessing. I mean, AI is marvelous right now. It generates code like there’s no tomorrow, but it’s mostly garbage. We have bottlenecks in quality assurance at the back end.

So how do you create the new tools, and how do you teach people to have the critical thinking to see: is this using the right library? Is it using the right pattern? Is it using the right data? I think that’s one of the core changes that academic institutions, organizations like mine, and individuals need to make. As you do your self-development, you need to really lean into this ability to ask the right question. Because in the middle part, you don’t have a competitive advantage. You will be outgunned. And the thing that is even more crazy: historically, people did PhDs. I have a PhD. I went super deep on one little topic, and I got buried somewhere in a sinkhole.

And it took my entire body of effort to get there. And to be a polymath is very hard. I know nothing about chemistry. I know nothing about biology. Psychology, my dad did that, so maybe something rubbed off on me. But AI is a polymath by design. It has the data set across all of that. So the middle part is a foregone conclusion, folks. You need to get good at the front end and the back end.

Aidan Gomez

Yeah, I was going to say another thing, which is that teaching is a skill, in the same way coding is a skill or doing math is a skill. And so it’s a core capability that we as model developers need to invest in. And it’s not something that is easily benchmarked, and it’s not something that is accurately tracked at the moment. But the more this rolls out, and it’s already in the hands of every student on the face of the planet, the more imperative it’s going to become that we’re able to track the performance of models on teaching tasks, to ensure that they’re actually effective, and improve that over time. At a technical level, that’s just not done presently.

I don’t know of a teaching benchmark, but I can point to probably 30 code ones, 50 math ones, you know, biology, et cetera.

Audience

All right, it happens from time to time; I think the psychology is rubbing off. When you say AI is a polymath by design, it’s a brilliant thought, and you articulated it very well. Which also means that, by definition, humans cannot compete. So we basically have to end the session and say that doom is nigh.

Hugo Sarazen

Well, I don’t think so. I mean, I’m more optimistic. So the polymath thing is real. If you take a historical perspective, he who had Leonardo da Vinci on his team had an advantage to build a war machine or a better court or whatever. Now there’s going to be a similar debate: whoever assembles these polymath AI things has an advantage. That is a foregone conclusion; that’s why there are all these battles. But I think we cannot, as the human race, give up that ability to influence. I think we made a point, I think you did at the very beginning: these models typically are not designed, though some of them can be designed, to explain their reasoning.

So if, as a society, we begin to rely on this thing that is super facile, that gives us an answer, and we don’t have the questioning, and we don’t do the checking and the validating, we lose agency over important decisions. And I think that is one of the things that we need to focus on deeply as a society. It also leads to the guardrails, the ethical things, and all that other stuff. We need to go there, because in the middle, it’s going to come up with answers that will be amazing in biology, and will solve things in biology; I got trained in English language, so I don’t know. But it’s going to be pretty wild, and we cannot lose agency around this polymath.

I mean, every data center is going to have hundreds of millions of polymaths in there.

Audience

Yeah, I just want to share a thought. I believe there’s a kind of paradox within companies about this critical thinking. Let me say it this way: we senior professionals, we know how to judge what the AI is doing. So one day I asked the AI to model whatever, and I could judge it. My juniors were not able to judge, because they don’t have the experience. So, to some extent, I could fire them, because I don’t need them anymore because of these AI technologies. But maybe there will be a gap. At some point in time, AI can greatly enhance what I do, but if you don’t train the new generation, the juniors, who in the future will be able to do this critical thinking on what AI is doing? I don’t have the answers. Obviously, companies need to seek efficiency, and we need to do our best to reduce costs and so on. But I think it’s something we as a society will have to think a lot about.

Debbie Prentice

That’s fair, thank you. We’ve got one here. You wanted, you were up, right? Yeah, I didn’t just call on you.

Audience

Hi, thank you for your insights. I’m Kian, the CEO of an AI company called Workera. I really like what you said on testing the human. In the world of testing right now there are almost two camps: one that says you can test them with the calculator, and one that says we can test them without the calculator. And overlaid on top of that are the risks of proctoring, and understanding who’s cheating, who’s not cheating, and what you can tell about it. So how are you thinking about that idea of testing with or without the calculator?

Aidan Gomez

Yeah, the cheating question: can you tell whether a piece of text was written by AI? It’s really tough. A lot of the detectors out there are total scams. They’ll say 100% AI even when it’s not used at all. So they’re extremely overconfident, with a very high error rate on both sides, false positive and false negative. But the answer to that question is: you can. You can insert into language models subtle cues to indicate to the reader that this was written by an AI. Instead of sampling from natural language, the language that I’m drawing from right now, you can sample from a slightly shifted distribution and use certain words much more than any human would use them.

And then, as soon as those words appear, you have a good piece of evidence that this was written by a language model. And so we language modeling companies do that. We shift the distribution of the language model so that when its text gets read, we have some ability to assign a likelihood that it was generated by my model. So you can detect that to some extent, but many of the tools are scams. And so I think we need to make better tools and put them in the hands of educators more readily. On testing with and without the calculator, I have a pretty strong focus on without the calculator.

I think everything needs to be ripped away, and you, standing alone as yourself, need to prove your knowledge. That is the gold standard test of what you have learned and retained. But of course, like I was saying earlier, using the language model is a skill itself, and we should have space to test that, in which case, of course, you’re going to need the LLM in the loop.
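The distribution-shift detection Gomez describes can be sketched in miniature. This is a toy illustration of published green-list watermarking schemes, not Cohere’s actual mechanism: the hash, the green fraction, and the threshold are all assumptions, and real watermarks operate on model tokens at generation time, with a proper statistical test at detection time.

```python
import hashlib

GREEN_FRACTION = 0.5  # fraction of the vocabulary the watermark favors

def is_green(token: str) -> bool:
    """Deterministically assign each token to the 'green' (favored) half of
    the vocabulary via a hash. A watermarking model boosts the probability
    of green tokens while generating, shifting its output distribution."""
    digest = hashlib.sha256(token.encode()).digest()
    return digest[0] < 256 * GREEN_FRACTION

def green_rate(text: str) -> float:
    """Fraction of tokens in the text that fall in the green list."""
    tokens = text.split()
    if not tokens:
        return 0.0
    return sum(is_green(t) for t in tokens) / len(tokens)

def looks_watermarked(text: str, threshold: float = 0.7) -> bool:
    """Human text should hover near GREEN_FRACTION; text sampled from the
    shifted distribution should sit well above it."""
    return green_rate(text) > threshold
```

The key property is the one Gomez points to: the detector needs only the text and the (secret) green-list rule, so the provider can assign a likelihood that a passage came from its model without storing every output.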

Debbie Prentice

Let me seize the chair’s prerogative here to ask, because I’m curious what you would both say to this question. What happens, in this brave new world of polymaths and not showing your work and not explaining your answer, to expertise or authority? At Cambridge, we have library after library of big books that tell you the truth, or that was always the idea, right? You would go look it up somewhere. What do you do in a world in which looking it up is no longer possible, in which there’s not a dictionary, there’s not a truth?

Hugo Sarazen

I’ll start. I think most technologies go back and forth; there’s a pendulum. We’re in the swing where bigger is better. We’re throwing everything under the sun in; every Reddit quote is now part of training every large language model. And that is good: it’s going to give you an average answer to an average problem. Now, over time, I think we’re going to come back and say you do need something specialized and trusted, and we need to have confidence that we used the right source. And I think there will be a space for that. At least I want to hope that will be the case: that we’re going to come back to these specialized models that will not only be RAG, but are going to be defined from scratch with the right intent.

And they don’t need a zillion trillion parameters or whatever; they just need to be trained on the expertise. And then you do need to be able to trust it; that’s going to be incredibly important. I think we also need a lot of research on explainability. Yoshua Bengio at the University of Montreal, one of the people who won the Turing Award, has been very vocal about this. We need to go back and explain a lot more. These are statistical models; that’s all this is. These are huge matrices, with weights assigned to different things. So this is not a piece of software where you say if this, then that.

This is just statistics. So it, on average, gives good answers, but it depends on the data. And you need to come back and put a bunch of tools in place to build explainability into the model. There are ways to do it; it’s not yet super advanced, and I think we need to invest in that so that we have the confidence and build the trust. And I do think it’s part of the learning question you have. Because if the models are a black box, you lose the ability to learn from their deduction process, which doesn’t really exist; it’s just a statistical model, there’s no deduction. So anyway, those are my two ideas.

Aidan Gomez

Yeah, over the course of the last year, there was a paradigm shift in the type of model that gets used now. We don’t just use input-output, direct-response models like you were alluding to. Every model now is a reasoning model. And so before it actually responds, it has an internal monologue where it thinks through the problem, tries to reason about it, and then delivers a response. It is primitive, it’s a year old, but it’s getting much better. And so I think exposing that to the user and showing these chains of thought, this reasoning, is an important solution. And then, like you say, RAG, which is retrieval-augmented generation, where the model isn’t just drawing on its own knowledge but is actually making direct and specific reference to external knowledge.

So we can plug it into the Cambridge library. I went to Oxford, so the Bodleian. And it can cite directly back from those sources. Both the reasoning and the RAG provide some degree of auditability, so you can have a little bit more confidence in the response, because you can check its work.
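The retrieve-then-cite loop described above can be sketched in a few lines. This is a minimal sketch, not Cohere’s implementation: the two-document corpus, the word-overlap scoring, and the prompt format are illustrative assumptions, and a real system would use dense embeddings and a live model call for the answer step.

```python
# Minimal retrieval-augmented generation (RAG) sketch: fetch the most
# relevant source, then build a prompt that asks the model to cite it,
# so the reader can audit the answer against its sources.

CORPUS = {
    "doc1": "The Bodleian Library is the main research library of Oxford.",
    "doc2": "Cambridge University Library holds over eight million items.",
}

def retrieve(query: str, k: int = 1) -> list[tuple[str, str]]:
    """Rank documents by naive word overlap with the query; real systems
    would rank by embedding similarity instead."""
    q = set(query.lower().split())
    scored = sorted(
        CORPUS.items(),
        key=lambda kv: len(q & set(kv[1].lower().split())),
        reverse=True,
    )
    return scored[:k]

def answer_with_citations(query: str) -> str:
    """Compose a grounded prompt whose context carries [doc_id] tags, so
    the model's reply can cite the exact source it drew from."""
    hits = retrieve(query)
    context = "\n".join(f"[{doc_id}] {text}" for doc_id, text in hits)
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer (cite [doc_id]):"
```

The auditability Gomez mentions comes from the `[doc_id]` tags: because the cited passages travel with the answer, a reader can check the model’s work instead of trusting its parametric memory.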

Debbie Prentice

Just out of curiosity, what’s driving that? What’s driving the need for reasoning?

Aidan Gomez

Because the models were brittle. They would very confidently answer with the wrong solution. And it turns out humans don’t put the same amount of energy into answering every question, but that was the prior expectation on these models. You would ask them, what’s 1 plus 1? And it would immediately respond. And you could ask it to prove some unsolved Erdős problem, and it would put the same amount of effort into that as into 1 plus 1. That was obviously wrong. There are some problems that we should spend days, weeks, months, years, decades putting effort into solving, and there are others that can be responded to instantly.

It’s just a better, more robust intelligence.

Debbie Prentice

That’s fascinating. We have time for one more question. Anything pressing in there?

Audience

Thank you. Yeah, I’m very interested to ask a question circling back to the beginning, where we noted we have a public-sector university as well as a tech platform in the same room. The question I have on my mind is that right now, in the U.S. especially, education costs are so astronomically high and prohibitive. The narrative goes that there’s no point going to university anymore. And in that world, a lot of attention would turn to online education. I think we’re all very familiar with Udemy. What are the gaps between an online education and an accredited college or an elite college?

Has there ever been customer or market demand for online education to move toward a model that imitates a traditional college experience? Has that ever surfaced as a need? I’m just comparing the gaps there.

Hugo Sarazen

I’m going to say something maybe controversial, but it’s fun. The university degree is a bundle. It’s a convenient bundle that, as a society, we chose to create. So you learn something, you get an accreditation, you get a degree, and you have a rite of passage: these kids are at a moment where they leave home and go. And we bundle that with research, because the same people can pass on their knowledge to others. It is a convenient bundle as a society, and it has worked well for a long time. Oxford and Cambridge are examples of long-standing institutions that had a version of this bundle. It changes over time.

Is it time to revisit whether all of these components need to fit together, because of the economics and what AI can do to change the economics of delivery? Maybe. I think the second…

Debbie Prentice

Think it quickly.

Hugo Sarazen

Yeah, quickly. And the second piece is just adaptability. If the labor market moves so fast, you’re going to begin to put more weight on addressing a specific need for a specific skill. So I think that is a reality, in addition to that potential unbundling of the whole experience.

Debbie Prentice

You have a good word for the university? I’m actually interested to hear the university’s perspective. Then I’ll just end by saying I think that they are currently serving very different functions. Right now, a university does so much more than provide knowledge that it is still worth its weight in gold, and it is gold. But we’ll see how the space develops. With that, I’m getting all kinds of signals from the producers, so we’ve got to end it. But thank you very much. Thank you for your questions, and thank you to our panelists.


Hugo Sarazen

Speech speed

170 words per minute

Speech length

3459 words

Speech time

1214 seconds

Attention bottleneck and trust erosion

Explanation

Hugo warns that the flood of information in an AI‑rich environment creates a scarcity of attention, which in turn undermines users’ trust in AI‑generated answers.


Evidence

“I think we also need a lot of research on explainability.” [35]. “And with AI, we can do so many different things.” [5].


Major discussion point

Scarce cognitive resources in an AI‑rich knowledge environment


Topics

Artificial intelligence | Building confidence and security in the use of ICTs


Udemy’s AI‑driven adaptive platform

Explanation

Hugo describes Udemy’s shift toward an AI platform that can instantly assess learners, segment classes, and deliver bite‑size, adaptive content at scale.


Evidence

“Now, to Aidan’s point, with AI, you can personalize the experience.” [2]. “And with AI, you can do a quick assessment.” [15]. “So I think the arc now is moving in the enterprise to an ability to do in the flow of work learning, do it at bite size, do it in an adaptive way, and then we can come back to what adaptive means, and with an ROI, an ability to measure what skills people are deploying in real time.” [37].


Major discussion point

AI as a catalyst for personalized, adaptive learning


Topics

Artificial intelligence | Capacity development


Teachers as irreplaceable storytellers

Explanation

Hugo emphasizes that teachers bring a unique storytelling ability that AI cannot replicate, though AI can augment their teaching.


Evidence

“I do think the teachers, the instructors are partly the customers but I do think they need to be in the loop they’re amazing storytellers they have a way if I ask anybody in this room who was your favorite teacher in high school and I pause for 5 seconds, there’s somebody in your mind right now what was special about that person and you cannot replicate that but you can augment that you can make that person now be able to maybe teach you on something that they were not like my favorite teacher in high school was a physics teacher I loved the way he presented, I loved the way he engaged and it was so motivating my chemistry teacher was not that but now I can augment with AI and have the voice, not just the voice but the way he thought, the way he presented the information apply to a different topic.” [12]


Major discussion point

The continuing role of humans and teachers in AI‑driven education


Topics

Social and economic development | Capacity development


Degree as a societal bundle under pressure from AI

Explanation

Hugo argues that the traditional university degree bundles credential, rite of passage, and research community, but AI’s low‑cost delivery may challenge its relevance.


Evidence

“The AI will outdo the human.” [27]. “Traditional online learning and actually traditional learning.” [57].


Major discussion point

Future of traditional university degrees versus online/AI education


Topics

Social and economic development | Artificial intelligence


Need for explainability research and trusted models

Explanation

Hugo calls for investment in explainability and the creation of specialized, trustworthy models to restore confidence in AI outputs.


Evidence

“I think we also need a lot of research on explainability.” [35]. “At least I want to hope that that will be the case, that we’re going to come back and we’re going to have these specialized model that will not only be rag, but they are going to be defined from scratch with the right intent.” [36].


Major discussion point

Trust, explainability, and reasoning in AI models


Topics

Artificial intelligence | Building confidence and security in the use of ICTs


AI‑enabled online education complements elite institutions

Explanation

Hugo notes that while AI‑driven platforms can scale knowledge, they can only complement—not replace—the broader research, community, and deep inquiry functions of elite universities.


Evidence

“Traditional online learning and actually traditional learning.” [57]. “And we’re going to come up and talk, I’m sure, about how learning needs to evolve, what the process, what’s the role of traditional institution in changing, what’s the role corporation need to, and what individual needs to do.” [59].


Major discussion point

Future of traditional university degrees versus online/AI education


Topics

Social and economic development | Artificial intelligence


A

Aidan Gomez

Speech speed

166 words per minute

Speech length

1973 words

Speech time

710 seconds

Deep mastery eroding; surface‑level AI answers

Explanation

Aidan warns that LLMs can give shallow answers that create a false sense of mastery, making it essential to test knowledge without AI assistance.


Evidence

“…when you exist in a world where it’s so fast and easy to get answers to whatever question you might have, or to get a very surface-level answer to even a complex question, like how does quantum mechanics work, it’ll give you a four-paragraph response. But that’s not deep understanding of the subject matter. And so I think LLMs, chatbots, they can fool you into thinking that you understand something when you don’t, and I view that as a core risk as we integrate these LLMs into an education environment: this false sense of mastery or understanding.” [48]. “That is like the gold standard test of what you have learned and retained.” [16]. “I think that testing is essential to it.” [17].


Major discussion point

Scarce cognitive resources in an AI‑rich knowledge environment


Topics

Artificial intelligence | Capacity development


Personalized, adaptive learning for different styles

Explanation

Aidan highlights that AI can tailor content to auditory, visual, or other learner preferences, improving engagement and attention.


Evidence

“But if we can have a very targeted, scalable approach for each individual, giving them something that’s engaging, exciting, if they are auditory learners or visual learners, we can tailor it to them and hopefully keep their attention better than we otherwise would.” [1]. “I think AI can perhaps assist in resolving that by its ability to personalize the experience to the individual and engage them more effectively.” [4].


Major discussion point

AI as a catalyst for personalized, adaptive learning


Topics

Artificial intelligence | Capacity development


Cohere’s secure, on‑premise LLMs shift work to AI‑agent management

Explanation

Aidan explains that Cohere’s enterprise LLMs let organizations embed AI securely, moving workers from doing tasks to overseeing AI agents.


Evidence

“I think AI can be a part of speeding up delivery of new programs and courses and keeping up with changes in demand much faster than we have in the past.” [6].


Major discussion point

AI as a catalyst for personalized, adaptive learning


Topics

Artificial intelligence | The enabling environment for digital development


Humans remain the customers; educators must ensure tool‑free competence

Explanation

Aidan stresses that AI should be a tool, but educators must guarantee learners can succeed without it and maintain rigorous testing.


Evidence

“I think also a duty of the education system now is to teach people how to use this AI, how to engage with it, how to most effectively use that tool.” [22]. “And so having very strict testing regimens is going to be essential.” [23]. “But we still need to ground ourselves in the human without the tool.” [20].


Major discussion point

The continuing role of humans and teachers in AI‑driven education


Topics

Capacity development | Artificial intelligence


Testing without AI is the gold standard; separate AI‑proficiency assessments needed

Explanation

Aidan argues that the benchmark for learning should be testing without AI, while distinct assessments should evaluate skillful AI usage.


Evidence

“That is like the gold standard test of what you have learned and retained.” [16]. “I think that testing is essential to it.” [17]. “On testing with and without the calculator, I have a pretty strong focus on without the calculator.” [19]. “But we still need to ground ourselves in the human without the tool.” [20].


Major discussion point

Assessment, testing, and evaluation in the age of AI


Topics

Capacity development | Artificial intelligence


Emerging reasoning models with internal monologue and RAG improve auditability

Explanation

Aidan describes new LLMs that generate internal reasoning steps and use Retrieval‑Augmented Generation to cite sources, enhancing transparency and trust.


Evidence

“And then like you say, RAG, which is retrieval augmented generation, where the model isn’t just drawing on its own knowledge, but it’s actually making direct and specific reference to external knowledge.” [28]. “And that provides some degree of both reasoning and RAG provides some degree of auditability.” [29]. “Every model now is a reasoning model.” [30]. “And so before it actually responds, it has an internal monologue where it thinks through the problem, tries to reason about it, and then delivers a response.” [31]. “And so I think exposing that to the user and showing these chains of thought, this reasoning is an important solution.” [32]. “And it can cite directly back from those sources.” [33].


Major discussion point

Trust, explainability, and reasoning in AI models


Topics

Artificial intelligence | Building confidence and security in the use of ICTs


AI‑driven tutors can be deployed at scale to teach needed skills

Explanation

Aidan envisions large‑scale AI tutors that provide contextual, skill‑focused instruction to meet workforce demands.


Evidence

“So we have these incredibly compelling tutors to deploy at scale against the problem of teaching the population the skills that we need.” [46].


Major discussion point

AI as a catalyst for personalized, adaptive learning


Topics

Artificial intelligence | The digital economy


D

Debbie Prentice

Speech speed

124 words per minute

Speech length

1283 words

Speech time

616 seconds

Critical thinking and deep mastery are the most valued skills

Explanation

Debbie asserts that learners prioritize critical thinking, sustained attention, and deep mastery, which universities must continue to cultivate.


Evidence

“I mean, we’re teaching critical thinking, and we’re teaching deep mastery, and we’re teaching them to people at a critical moment in their lives, most of them, where they actually really need to have a go and learn these skills.” [40]. “Is it sustained human attention, independent judgment and critical thinking, deep understanding and mastery, motivation to learn in the first place, or trust in what we know and who to believe?” [42]. “And it is critical thinking.” [39].


Major discussion point

Scarce cognitive resources in an AI‑rich knowledge environment


Topics

Capacity development | Social and economic development


Universities defend fundamentals despite market pressure

Explanation

Debbie defends the university mission of teaching fundamentals like critical thinking and deep mastery, even as industry demands shift.


Evidence

“We’re teaching things that we believe are fundamentally important, and I would defend that.” [41]. “I think too, I mean, what I would say is I think that, you know, universities aren’t teaching to what businesses need necessarily.” [44]. “Right now, university does so much more than provide knowledge that it still is worth its weight in gold, and it is gold.” [24].


Major discussion point

Aligning education with labor‑market demands and applied knowledge


Topics

Social and economic development | Capacity development


Traditional degree as a societal bundle

Explanation

Debbie describes the university degree as a combination of a credential, a rite of passage, and a research community, whose relevance may be questioned as AI lowers delivery costs.


Evidence

“Right now, university does so much more than provide knowledge that it still is worth its weight in gold, and it is gold.” [24]. “I’m Professor Debbie Prentice, and I’m the Vice Chancellor of the University of Cambridge.” [47].


Major discussion point

Future of traditional university degrees versus online/AI education


Topics

Social and economic development | Artificial intelligence


Universities provide broader functions beyond knowledge delivery

Explanation

Debbie emphasizes that universities offer research, community, and deep inquiry functions that online platforms cannot fully replace.


Evidence

“Right now, university does so much more than provide knowledge that it still is worth its weight in gold, and it is gold.” [24]. “We have some fascinating questions to discuss this afternoon around knowledge, misinformation, AI, attention spans, and even the nature of expertise.” [26].


Major discussion point

Future of traditional university degrees versus online/AI education


Topics

Social and economic development | Capacity development


A

Audience

Speech speed

164 words per minute

Speech length

886 words

Speech time

323 seconds

Market demand for university‑style online education

Explanation

Audience members question whether customers and the market are seeking online learning models that replicate the traditional college experience, signalling a stakeholder interest in hybrid or fully digital degree pathways.


Evidence

“Has there ever been customer or market demand for online education to move towards a model or imitate a traditional college experience?” [6].


Major discussion point

Aligning education with labor‑market demands and applied knowledge


Topics

Artificial intelligence | Social and economic development


Applied knowledge delivery gap

Explanation

The audience highlights a gap in delivering applied, job‑relevant knowledge at the right moment to the right learners, underscoring the need for more responsive, skill‑focused platforms.


Evidence

“And I think that’s the gap of applied knowledge delivered in the right way to the right people at the right time.” [8]. “So I think the core phrase to be used here is applied knowledge.” [12].


Major discussion point

AI as a catalyst for personalized, adaptive learning


Topics

Capacity development | Artificial intelligence


Disconnect between learner expectations and academic offerings

Explanation

Audience participants point out a substantial mismatch between what learners want and what universities currently provide, suggesting that current curricula may be out of sync with market needs.


Evidence

“The second part is there’s a huge disconnect between what they want and what academia is offering.” [13].


Major discussion point

Aligning education with labor‑market demands and applied knowledge


Topics

Social and economic development | Capacity development


Shift of attention toward online education

Explanation

One audience member observes that in a world where AI provides instant answers, attention is increasingly being directed toward online learning solutions.


Evidence

“I would see in that world, there would be a lot of attention turned to online education.” [10].


Major discussion point

Scarce cognitive resources in an AI‑rich knowledge environment


Topics

Building confidence and security in the use of ICTs | Social and economic development


Debate over AI regulation (ban vs. not ban)

Explanation

The audience raises a policy‑level question about whether AI technologies should be restricted or allowed, reflecting broader concerns about governance and ethical implications.


Evidence

“And what would you say to people who might be on the side of banning versus not banning it?” [14].


Major discussion point

Human rights and the ethical dimensions of the information society


Topics

Artificial intelligence | Human rights and the ethical dimensions of the information society


Perceived redundancy of traditional universities

Explanation

Audience sentiment suggests a growing narrative that university degrees may no longer be necessary, highlighting a challenge for higher‑education institutions to demonstrate unique value.


Evidence

“Lots of people are saying the narrative goes as like there’s no point going to university anymore.” [15].


Major discussion point

Future of traditional university degrees versus online/AI education


Topics

Social and economic development | Artificial intelligence


Agreements

Agreement points

AI should be treated as a tool that requires human oversight and critical evaluation

Speakers

– Hugo Sarazen
– Aidan Gomez

Arguments

Humans must focus on asking the right questions and quality assurance since AI will dominate the middle execution phase


AI should be treated as a tool like a calculator, but testing without the tool is essential to assess true human understanding


Summary

Both speakers agree that AI is a powerful tool but emphasize the need for human oversight, critical thinking, and the ability to evaluate AI outputs. They stress that humans must maintain agency in asking questions and assessing quality.


Topics

Artificial intelligence | Capacity development | Human rights and the ethical dimensions of the information society


Personalized learning through AI can address the limitations of traditional one-size-fits-all education

Speakers

– Hugo Sarazen
– Aidan Gomez

Arguments

AI can personalize learning experiences and create feedback loops that traditional teaching cannot achieve at scale


The shift moves from generic education offerings to highly targeted, scalable approaches for individuals


Summary

Both speakers recognize that AI enables personalized education that can adapt to individual learning styles and needs, overcoming the historical limitation of teaching to the average student in traditional classroom settings.


Topics

Artificial intelligence | Capacity development | Social and economic development


Critical thinking and questioning skills are essential in an AI-dominated world

Speakers

– Hugo Sarazen
– Debbie Prentice

Arguments

Humans must focus on asking the right questions and quality assurance since AI will dominate the middle execution phase


Universities must adapt curricula to emphasize critical thinking and evaluation skills over information retention


Summary

Both speakers emphasize that as AI handles information processing and answers, humans must develop stronger critical thinking abilities to ask the right questions and evaluate AI outputs effectively.


Topics

Capacity development | Human rights and the ethical dimensions of the information society | Artificial intelligence


There is a significant skills gap between what educational institutions provide and what the job market needs

Speakers

– Hugo Sarazen
– Aidan Gomez
– Audience

Arguments

Traditional online learning addressed accessibility but failed to solve personalization and ROI measurement problems


There’s a major mismatch between what educational institutions offer and what the job market demands


Applied knowledge delivered at the right time to the right people is crucial for employability across all job types


Summary

All speakers acknowledge a disconnect between traditional education offerings and actual workforce needs, with emphasis on the need for more practical, applied knowledge and better measurement of learning outcomes.


Topics

Social and economic development | Capacity development | The digital economy


Trust and explainability in AI systems are crucial concerns that need addressing

Speakers

– Hugo Sarazen
– Aidan Gomez

Arguments

Trust is equally important since LLMs give answers without explaining their sources


Reasoning models and retrieval-augmented generation provide auditability by showing chains of thought and citing sources


Summary

Both speakers recognize the critical importance of making AI systems more transparent and trustworthy through better explainability and source attribution, though they approach solutions differently.


Topics

Artificial intelligence | Building confidence and security in the use of ICTs | Data governance


Similar viewpoints

Both speakers see AI as enabling new forms of practical learning through simulation and role-playing, while simultaneously recognizing the need for rigorous testing to ensure genuine learning has occurred.

Speakers

– Hugo Sarazen
– Aidan Gomez

Arguments

AI role-playing and simulation can accelerate learning through practice and immediate feedback


Testing and assessment become absolutely critical as students can more easily fake their way through education


Topics

Artificial intelligence | Capacity development | Social and economic development


Both speakers express concern about humans losing important cognitive and evaluative abilities when AI makes processes too easy, emphasizing the need to maintain human agency and self-awareness in learning and decision-making.

Speakers

– Hugo Sarazen
– Debbie Prentice

Arguments

Society risks losing agency over important decisions if we rely on AI without questioning and validation


Self-knowledge for learners is critical as they lose cues about what is difficult and compelling


Topics

Human rights and the ethical dimensions of the information society | Capacity development | Artificial intelligence


Both speakers worry about the false sense of understanding that AI can create, where learners lose the ability to accurately assess their own knowledge and competence.

Speakers

– Aidan Gomez
– Debbie Prentice

Arguments

Deep understanding and mastery are at risk as LLMs can fool people into thinking they understand when they don’t


Self-knowledge for learners is critical as they lose cues about what is difficult and compelling


Topics

Capacity development | Human rights and the ethical dimensions of the information society | Artificial intelligence


Unexpected consensus

The need for specialized, trusted AI models over general-purpose ones

Speakers

– Hugo Sarazen
– Aidan Gomez

Arguments

There will be a pendulum swing back toward specialized, trusted models trained on verified expertise rather than general internet data


Reasoning models and retrieval-augmented generation provide auditability by showing chains of thought and citing sources


Explanation

Despite representing different business models, both speakers converged on the idea that the future lies not in bigger, more general AI models, but in more specialized, trustworthy systems that can explain their reasoning and cite reliable sources.


Topics

Artificial intelligence | Data governance | Building confidence and security in the use of ICTs


Universities still serve essential functions despite AI advances

Speakers

– Debbie Prentice
– Hugo Sarazen

Arguments

Universities serve broader functions beyond knowledge transfer and remain valuable despite AI advances


AI enables adaptive, bite-sized, in-the-flow-of-work learning with measurable skill deployment


Explanation

Unexpectedly, the for-profit education technology leader acknowledged the continued value of traditional universities, while the university leader recognized the legitimate role of AI-powered workplace learning, suggesting complementary rather than competitive roles.


Topics

Social and economic development | Capacity development | Human rights and the ethical dimensions of the information society


Overall assessment

Summary

The speakers demonstrated remarkable consensus on key issues despite representing different sectors (for-profit tech vs. non-profit academia). Main areas of agreement included: the need for human oversight of AI, the importance of personalized learning, the critical role of questioning and evaluation skills, the skills gap between education and employment, and the necessity of trustworthy AI systems.


Consensus level

High level of consensus with complementary perspectives rather than fundamental disagreements. This suggests a mature understanding of AI’s role in education across different stakeholders, with implications for collaborative approaches to addressing educational challenges in the AI era.


Differences

Different viewpoints

What is the scarcest resource in an AI-driven world

Speakers

– Hugo Sarazen
– Aidan Gomez
– Debbie Prentice

Arguments

Sustained human attention is becoming scarce due to information wealth creating poverty of attention


Deep understanding and mastery are at risk as LLMs can fool people into thinking they understand when they don’t


Self-knowledge for learners is critical as they lose cues about what is difficult and compelling


Summary

The speakers identified different primary concerns: Hugo focused on attention scarcity due to information overload, Aidan emphasized the risk of false mastery, and Debbie highlighted the loss of self-awareness in learning.


Topics

Artificial intelligence | Capacity development | Human rights and the ethical dimensions of the information society


Testing approach with or without AI tools

Speakers

– Aidan Gomez
– Hugo Sarazen

Arguments

AI should be treated as a tool like a calculator, but testing without the tool is essential to assess true human understanding


AI role-playing and simulation can accelerate learning through practice and immediate feedback


Summary

Aidan strongly advocates for testing without AI tools to assess true human knowledge retention, while Hugo emphasizes using AI for enhanced learning through simulation and role-playing during the learning process


Topics

Artificial intelligence | Capacity development | Social and economic development


University education versus market-driven skills training

Speakers

– Debbie Prentice
– Hugo Sarazen
– Audience

Arguments

Universities teach fundamentally important skills like critical thinking rather than just business-specific needs


Traditional online learning addressed accessibility but failed to solve personalization and ROI measurement problems


Applied knowledge delivered at the right time to the right people is crucial for employability across all job types


Summary

Debbie defends universities’ focus on fundamental skills like critical thinking, while Hugo and audience members emphasize the need for practical, market-relevant skills with measurable ROI.


Topics

Social and economic development | Capacity development | The digital economy


Unexpected differences

The role of human teachers and instructors in AI-powered education

Speakers

– Aidan Gomez
– Hugo Sarazen

Arguments

AI should be treated as a tool like a calculator, but testing without the tool is essential to assess true human understanding


AI can personalize learning experiences and create feedback loops that traditional teaching cannot achieve at scale


Explanation

While both speakers are from the AI/tech industry, they had different views on human involvement: Aidan sees humans primarily as customers to be served by AI tools, while Hugo sees teachers as essential partners who can be augmented by AI to extend their unique storytelling and engagement abilities.


Topics

Artificial intelligence | Capacity development | Social and economic development


The future of traditional educational institutions

Speakers

– Debbie Prentice
– Hugo Sarazen

Arguments

Universities serve broader functions beyond knowledge transfer and remain valuable despite AI advances


Traditional online learning addressed accessibility but failed to solve personalization and ROI measurement problems


Explanation

Despite coming from different sectors (academic vs. business), there was less conflict than expected, with Hugo actually acknowledging the value of the university ‘bundle’ while suggesting it might need unbundling, and Debbie recognizing that additional workplace skills can come from other platforms.


Topics

Social and economic development | Capacity development | The digital economy


Overall assessment

Summary

The main disagreements centered on educational priorities (fundamental vs. applied skills), assessment methods (with vs. without AI tools), and the primary risks of AI in education (attention, mastery, or self-knowledge). Despite different sectoral backgrounds, speakers showed surprising alignment on the need for critical thinking and human agency.


Disagreement level

Moderate disagreement with significant areas of convergence. The disagreements reflect different priorities and perspectives rather than fundamental incompatibilities, suggesting potential for collaborative solutions that address multiple concerns simultaneously.


Partial agreements

All speakers agree that critical thinking and human judgment are essential, but they disagree on implementation: Hugo focuses on front-end questioning and back-end quality assurance, Aidan emphasizes strict testing without AI tools, and Debbie advocates for curriculum adaptation in universities.

Speakers

– Hugo Sarazen
– Aidan Gomez
– Debbie Prentice

Arguments

Humans must focus on asking the right questions and quality assurance since AI will dominate the middle execution phase


Testing and assessment become absolutely critical as students can more easily fake their way through education


Universities must adapt curricula to emphasize critical thinking and evaluation skills over information retention


Topics

Artificial intelligence | Capacity development | Human rights and the ethical dimensions of the information society


Both agree on AI’s potential for personalized education, but Hugo emphasizes practical business applications and ROI measurement, while Aidan focuses on maintaining educational integrity through proper assessment.

Speakers

– Hugo Sarazen
– Aidan Gomez

Arguments

AI can personalize learning experiences and create feedback loops that traditional teaching cannot achieve at scale


The shift moves from generic education offerings to highly targeted, scalable approaches for individuals


Topics

Artificial intelligence | Capacity development | Social and economic development


Both recognize the need for more trustworthy and explainable AI systems, but Hugo emphasizes the need for specialized models trained on trusted sources, while Aidan focuses on technical solutions like reasoning models and RAG for auditability.

Speakers

– Hugo Sarazen
– Aidan Gomez

Arguments

There will be a pendulum swing back toward specialized, trusted models trained on verified expertise rather than general internet data


Reasoning models and retrieval-augmented generation provide auditability by showing chains of thought and citing sources


Topics

Artificial intelligence | Data governance | Building confidence and security in the use of ICTs




Takeaways

Key takeaways

In an AI-driven world, critical thinking and sustained human attention are becoming the scarcest resources, with deep mastery at risk as AI can create false confidence in understanding


AI should be treated as a tool like a calculator, but testing without AI assistance is essential to assess true human learning and retention


The future of education lies in AI-powered personalization that can provide one-on-one tutoring experiences at scale, addressing the Bloom two-sigma problem economically


Humans must focus on the ‘front-end’ (asking right questions, critical thinking) and ‘back-end’ (quality assurance, validation) of problem-solving, as AI will dominate the middle execution phase


There’s a fundamental mismatch between what educational institutions offer and what the job market demands, requiring faster adaptation and applied knowledge delivery


AI models are evolving from simple input-output systems to reasoning models that show their thought processes, improving auditability and trust


Traditional university education serves broader functions beyond knowledge transfer (socialization, critical thinking development, research) that remain valuable despite AI advances


Society risks losing agency over important decisions if we become overly reliant on AI without maintaining questioning and validation capabilities


Resolutions and action items

Educational institutions need to adapt curricula to emphasize critical thinking, question-asking, and evaluation skills over information retention


Better tools for detecting AI-generated content need to be developed and made available to educators


Investment in AI explainability research is needed to build trust and enable learning from AI reasoning processes


Testing regimens must become more rigorous and include both with-AI and without-AI assessments


Development of specialized, trusted AI models trained on verified expertise rather than general internet data


Creation of better benchmarks for measuring AI teaching effectiveness, similar to existing coding and math benchmarks


Unresolved issues

How to measure return on investment (ROI) for learning and skill development beyond completion rates and hours spent


The generational gap where senior professionals can judge AI output while juniors cannot, potentially creating a future shortage of critical thinkers


Whether the traditional university ‘bundle’ (learning, accreditation, socialization, research) should be unbundled given economic pressures and AI capabilities


How to maintain human expertise and authority in a world where traditional sources of truth (libraries, dictionaries) are being replaced by AI systems


The challenge of keeping educational content current when skills and market demands change rapidly


How to ensure AI systems provide trustworthy, explainable answers rather than statistically average responses


Suggested compromises

Treating AI as an augmentation tool for teachers rather than a replacement, allowing favorite teaching styles to be applied across different subjects


Implementing a hybrid approach where AI handles personalization and feedback while humans provide motivation, storytelling, and critical oversight


Using AI for specialized skill training and simulation while maintaining traditional education for fundamental critical thinking and socialization


Developing a pendulum approach where the current ‘bigger is better’ AI model trend will swing back toward specialized, trusted models for specific domains


Creating educational systems that teach both how to use AI effectively and how to function without it through rigorous testing protocols


Thought provoking comments

When you have a wealth of information, you have a poverty of attention… Now today, you have LLMs that can learn everything, and they can learn across different domains, and they can become the polymath. So every data center, every time we say there’s a new infrastructure that’s being added, we’re adding millions and millions of polymaths.

Speaker

Hugo Sarazen


Reason

This comment reframes AI not just as a tool but as creating millions of polymaths – historically rare individuals with expertise across multiple domains. It introduces the profound concept that we’re democratizing what was once extremely scarce (polymathic knowledge) while creating scarcity in what was once abundant (human attention).


Impact

This fundamentally shifted the discussion from viewing AI as simply another educational tool to understanding it as a paradigm shift that changes the entire knowledge economy. It established the framework for later discussions about human competitive advantage and the need to focus on skills AI cannot replicate.


I think LLMs, chatbots, can fool you into thinking that you understand something when you don’t, and I view that as a core risk as we integrate these LLMs into an education environment: this false sense of mastery or understanding.

Speaker

Aidan Gomez


Reason

This identifies a critical psychological risk of AI in education – the Dunning-Kruger effect amplified by technology. It’s particularly insightful because it comes from someone building these systems, acknowledging their fundamental limitation in creating genuine understanding.


Impact

This comment elevated the discussion beyond efficiency gains to examine the deeper epistemological risks of AI in education. It led to sustained focus on the importance of testing without AI assistance and the need to distinguish between surface-level answers and deep mastery.


The middle part, you don’t have a competitive advantage. You will be outgunned… AI is a polymath by design. It has the data set across all of that. So the middle part is a foregone conclusion, folks. You need to get good at the front and the back end.

Speaker

Hugo Sarazen


Reason

This provides a strategic framework for human adaptation in the AI age by dividing cognitive work into three parts: problem formulation (front), solution generation (middle), and quality assessment (back). It’s a practical roadmap for where humans should focus their development.


Impact

This comment became a central organizing principle for the remainder of the discussion. Multiple subsequent questions and answers referenced this framework, and it provided a concrete answer to the existential question of human relevance in an AI-dominated world.


The university degree is a bundle. It’s a convenient bundle that as a society we chose to create. So you learn something, you get an accreditation, you get a degree, you have a rite of passage… Is it time to revisit whether all of these components need to fit together because of the economics and what AI can do to change the economics of delivery?

Speaker

Hugo Sarazen


Reason

This deconstructs higher education into its component parts, challenging the assumption that learning, credentialing, and social development must be packaged together. It’s provocative because it questions fundamental assumptions about educational institutions.


Impact

This comment introduced a disruptive perspective that challenged the university representative’s worldview directly. It shifted the conversation from ‘how do we improve education’ to ‘should we fundamentally restructure how education is delivered and validated,’ forcing a more radical examination of educational futures.


Every model now is a reasoning model. And so before it actually responds, it has an internal monologue where it thinks through the problem, tries to reason about it, and then delivers a response… You would ask them, what’s 1 plus 1? And it would immediately respond and put the same amount of effort into answering that question. And you would ask it to prove some unsolved Erdős problem or something. And it would put the same amount of effort as 1 plus 1 into that.

Speaker

Aidan Gomez


Reason

This reveals a fundamental shift in AI architecture toward more human-like reasoning processes, addressing earlier concerns about explainability and trust. The insight about effort allocation shows sophisticated understanding of how intelligence should work.


Impact

This technical insight provided hope that some of the trust and explainability issues raised earlier in the discussion might be solvable. It demonstrated that the AI industry is actively working on the problems identified by educators, potentially bridging the gap between technological capability and educational needs.


Any business process or any endeavor, you have the problem statement asking the right question, you have the solving, and then you have the quality assurance in the back. It’s a feedback loop… What AI does well is that middle part. It doesn’t do a whole lot in the front end and the back end.

Speaker

Hugo Sarazen


Reason

This provides a universal framework for understanding AI’s role across all domains, not just education. It’s insightful because it clearly delineates where humans remain essential while acknowledging AI’s strengths.


Impact

This comment provided practical guidance for educators and students about where to focus their development efforts. It answered the audience’s concerns about human relevance and gave concrete direction for curriculum development and skill building.


Overall assessment

These key comments fundamentally transformed the discussion from a surface-level exploration of AI in education to a deep examination of how artificial intelligence is reshaping the nature of knowledge, expertise, and human value. Hugo Sarazen’s polymath concept established the macro framework, while Aidan Gomez’s insights about false mastery and reasoning models provided technical depth and nuance. Together, they created a narrative arc that moved from identifying the problem (attention scarcity, false understanding) to providing strategic solutions (focus on front-end and back-end skills) to addressing systemic questions (should we unbundle education?). The discussion evolved from asking ‘how do we use AI in education’ to ‘how do we fundamentally restructure human development in an age of artificial polymaths.’ These comments elevated what could have been a typical ed-tech panel into a profound examination of human adaptation and institutional evolution.


Follow-up questions

How do you measure the ROI of learning in enterprise settings?

Speaker

Hugo Sarazen


Explanation

Hugo mentioned that very few CHROs could explain the ROI of learning, with most defaulting to completion rates and hours of learning, which isn’t particularly helpful for business leaders. This represents a critical gap in understanding learning effectiveness.


How can we develop better teaching benchmarks for AI models?

Speaker

Aidan Gomez


Explanation

Aidan noted that teaching is a skill that needs to be invested in by model developers, but there are no teaching benchmarks currently available, unlike the many benchmarks for coding, math, and other subjects.


How do we advance explainability in AI for learning contexts?

Speaker

Hugo Sarazen


Explanation

Hugo emphasized that explainability in AI is a whole field that needs more research investment, particularly because current LLMs are statistical models that don’t provide reasoning for their answers, which is crucial for learning.


How can we create better AI detection tools for educational settings?

Speaker

Aidan Gomez


Explanation

Aidan pointed out that many current AI detection tools are ‘total scams’ with high error rates, and there’s a need for better tools to be developed and put in the hands of educators.


How do we address the skills gap between what education institutions offer and what the market demands?

Speaker

Pranjal Sharma (audience member)


Explanation

This represents a fundamental disconnect in the education-to-employment pipeline that needs systematic addressing, particularly around applied knowledge delivery.


How do we train junior professionals to develop critical thinking skills when AI can perform many of their traditional learning tasks?

Speaker

Audience member


Explanation

This addresses a critical paradox where senior professionals can judge AI output due to experience, but juniors may not develop this capability if they rely too heavily on AI from the start.


Should the traditional university bundle (learning, accreditation, rite of passage, research) be unbundled given new AI capabilities and economic pressures?

Speaker

Hugo Sarazen


Explanation

Hugo suggested this as a potentially controversial but important question about whether all components of traditional higher education need to remain bundled together given changing economics and AI capabilities.


How can we develop specialized, trusted AI models trained on expert knowledge rather than general internet data?

Speaker

Hugo Sarazen


Explanation

Hugo suggested the need to move away from models trained on everything (including Reddit quotes) toward specialized models trained on trusted, expert sources for specific domains.


How do we maintain human agency in decision-making as AI becomes more capable?

Speaker

Hugo Sarazen


Explanation

Hugo emphasized the critical importance of ensuring humans don’t lose the ability to influence important decisions as AI systems become more sophisticated and widely adopted.


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.