Driving Enterprise Impact Through Scalable AI Adoption

20 Feb 2026 12:00h - 13:00h


Session at a glance: summary, keypoints, and speakers overview

Summary

The town-hall convened to examine how the abundance of AI-generated knowledge creates new dilemmas for learners and educators [1][3]. Panelists argued that, while AI makes information instantly available, the real scarcity is now human attention and the ability to judge trustworthiness [40][43][46-47]. Hugo highlighted Herbert Simon’s “poverty of attention” and warned that large language models often provide answers without explaining their sources, eroding trust [40][46-47]. Aidan warned that easy access to surface-level answers fosters a false sense of deep mastery, making rigorous testing essential to verify what learners truly understand [48][50-53]. Debbie reported that the audience’s votes favored critical thinking and sustained attention as the most needed skills in this environment [65-68].


Hugo described Udemy’s evolution from a massive catalogue of 250,000 courses and 80 million learners to an AI-driven reskilling platform that can assess individuals and personalize feedback [73-78][91-107]. Cohere, explained Aidan, builds enterprise-grade LLMs that stay within a client’s security perimeter and helps organisations shift workers from performing tasks to orchestrating AI agents [110-118][119-121]. Both speakers agreed that AI can augment but not replace teachers, citing Bloom’s two-sigma finding that one-on-one coaching dramatically outperforms large classes and that AI could scale such personalised tutoring [149-157][184-188].


They also stressed the need for explainability, noting that future models must provide reasoning traces or retrieval-augmented citations so users can audit answers [346-353][374-382]. Hugo warned that reliance on black-box models could diminish human agency and ethical guardrails, urging societies to retain the ability to question and validate AI output [346-353][369-370]. Aidan added that while reasoning-enabled models are emerging, they remain brittle, so exposing their chain-of-thought is crucial for trust [374-376][386-390].


The panel concluded that education must adapt by emphasizing front-end skills such as asking the right questions and back-end skills like critical evaluation, while leveraging AI for personalization and scalable assessment [274-277]. Overall, the discussion underscored that AI will reshape knowledge delivery, but preserving critical thinking, explainability, and human oversight is essential for effective learning [65-68][311-313].


Keypoints


Major discussion points


Attention, critical thinking and deep mastery are becoming scarce resources in an AI-driven world.


Hugo notes Herbert Simon’s “wealth of information, poverty of attention” and highlights attention and trust as key challenges [40-44][46-47]. Aidan warns that LLMs can give a false sense of deep mastery, making genuine understanding the most at-risk skill [48-53]. Debbie reports that the audience ultimately favored critical thinking and sustained attention as the most valuable traits [65-70].


AI can personalize and scale learning, enabling rapid reskilling and adaptive education.


Hugo describes Udemy’s pivot to an AI platform that uses rapid assessment, role-play simulations and feedback loops to tailor learning to each individual [93-107]. He also cites the “Bloom two-sigma” research and the shift toward bite-size, in-the-flow learning for enterprises [124-141]. Aidan adds that Cohere’s enterprise LLMs focus on secure, on-premise deployment, giving businesses the tools to embed AI into their workforce [110-118].


Rigorous testing and assessment are essential to preserve human judgment and avoid superficial competence.


Aidan stresses that testing without AI tools is the “gold standard” for measuring true understanding [48-53][321-334]. Hugo argues that human teachers remain indispensable as storytellers and mentors, and that AI-driven tutors must augment, not replace, this human element [156-166]. Both panelists agree that without strong assessment, learners can “fake” their way through education.


A tension exists between for-profit ed-tech models and traditional universities, raising questions about the future of degrees and possible unbundling.


Debbie frames the panel as representing “for-profit educational technology” versus the “not-for-profit” university sector [13-15]. Hugo later calls the university degree a “convenient bundle” that may need to be re-examined in light of AI-enabled delivery [408-416] and discusses the need for more adaptable, skill-focused credentials [419-421]. Audience members ask directly about gaps between online platforms like Udemy and accredited colleges [403-406].


Explainability, trust, and agency are major concerns when AI provides answers without transparent reasoning.


Hugo points out that most LLMs do not explain how an answer was derived, threatening trust [44-47]. He later calls for research on explainability and specialized, trusted models [340-368]. Aidan describes emerging “reasoning” models that generate internal monologues and Retrieval-Augmented Generation (RAG) to cite sources, aiming to improve auditability and user confidence [372-383][386-393].


Overall purpose / goal of the discussion


The town-hall was convened to surface and interrogate the “dilemmas around knowledge” that arise as AI makes information instantly accessible. Participants examined how AI reshapes the scarcity of attention, critical thinking, and mastery; explored ways AI can enhance personalized, scalable learning; debated the need for robust assessment and human oversight; and considered the shifting relationship between traditional universities and for-profit ed-tech providers. The ultimate aim was to identify challenges and opportunities for educators, businesses, and policymakers in an AI-infused knowledge ecosystem.


Tone of the discussion


The conversation begins with a formal, inquisitive tone as Debbie introduces the panel and the poll question. As the dialogue progresses, Hugo and Aidan adopt an optimistic, solution-oriented tone, highlighting AI’s potential for personalization and reskilling. Mid-session, the tone shifts to a more cautionary and reflective stance, emphasizing the risks of attention loss, superficial mastery, and loss of agency [40-47][48-53][340-368]. Toward the end, the tone becomes balanced, acknowledging both the transformative promise of AI and the need for rigorous testing, explainability, and thoughtful redesign of educational structures. Throughout, the discussion remains professional and collaborative, with occasional moments of urgency when addressing trust and ethical concerns.


Speakers

Hugo Sarazen – President, Chairperson and Chief Executive Officer of Udemy; expertise in online learning platforms, corporate training, and AI-driven education. [S1]


Debbie Prentice – Professor and Vice Chancellor of the University of Cambridge; expertise in higher-education leadership and the not-for-profit education sector. [S2]


Audience – Various participants representing industry and academia (e.g., Anna Van Niels, Director of the Livium Trust; Nathaniel, founder of an education company in Australia; Pranjal Sharma, author and analyst; Kian, CEO of Workera). Roles/titles as noted. [S3][S4][S5]


Aidan Gomez – Co-founder and Chief Executive Officer of Cohere, an enterprise AI company; expertise in large language models, AI product development, and enterprise AI deployment. [S6][S7][S8]


Additional speakers:


– None


Full session report: comprehensive analysis and detailed insights

The World Economic Forum town-hall opened with Professor Debbie Prentice, Vice-Chancellor of Cambridge, welcoming participants and framing the session as an exploration of “dilemmas around knowledge” that have persisted since the invention of schools and libraries but are now amplified by AI-driven instant access to information [1-4]. She introduced the panel: Aidan Gomez, co-founder and CEO of Cohere, an enterprise AI firm building large language models (LLMs), and Hugo Sarazen, President, Chairperson and CEO of Udemy, a global online-learning platform [5-10]. The moderator highlighted the diversity of perspectives, for-profit ed-tech versus not-for-profit university, and invited the audience to engage via the Slido app and the hashtag #WEF26 [12-22].


Poll question & live results


The first poll asked participants which resource is becoming scarcest in a world of instant AI answers, offering the options: sustained human attention, independent judgment and critical thinking, deep understanding and mastery, motivation to learn in the first place, and trust in what we know and who to believe [22-28]. The live results showed critical thinking receiving the most votes, with sustained attention a close second [65-70].


Panel responses


Hugo Sarazen argued that attention is the most pressing shortage, invoking Herbert Simon’s insight that “when you have a wealth of information, you have a poverty of attention” and warning that LLMs often provide answers without explaining their provenance, thereby undermining trust [40-44][46-47].


Aidan Gomez countered that the greatest risk is a false sense of deep mastery: learners can obtain surface-level responses that feel comprehensive, so rigorous testing that removes the tool is essential to verify what the human actually knows [48-53][321-334].


Debbie Prentice rejected all five poll options, noting that without cues about difficulty or interest students cannot gauge their own understanding [54-61]; she then pointed to the audience vote, which favored critical thinking (and sustained attention) as the leading choice [65-70].


Udemy’s evolution (deep-dive)


Hugo described how Udemy has moved from a catalogue of 250,000 courses and 80 million learners to an AI-driven reskilling platform. By assessing each learner quickly, breaking courses into adaptive pathways, and providing real-time feedback loops, including role-play simulations (e.g., sales-pitch practice) and automated scoring rubrics, Udemy can keep users engaged longer than generic courses [73-78][91-107][199-207][144-146][215-217]. He also referenced Bloom’s two-sigma problem, noting that one-on-one tutoring yields a two-sigma improvement over classroom instruction but has been economically infeasible to scale, a gap AI can now begin to fill [149-157][184-188].


Cohere’s approach (deep-dive)


Aidan explained that Cohere supplies enterprise-grade LLMs that run inside a client’s security perimeter, ensuring no data leaves the organisation while enabling workers to shift from performing tasks to orchestrating AI agents [110-118][119-121]. He highlighted recent advances: an “internal monologue” or chain-of-thought reasoning that structures problem-solving before output, and Retrieval-Augmented Generation (RAG) that cites external sources (e.g., the Cambridge library) to improve auditability and user confidence [372-383][386-394]. Both panelists agreed that explainability is crucial; Hugo called for specialised, trusted models and research into transparent reasoning, while Aidan stressed exposing chain-of-thought and source citations as a technical route [340-353][374-383].
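The RAG loop Aidan describes can be sketched in a few lines. The toy Python example below is not Cohere's implementation; the keyword-overlap retriever, the corpus, and all names are invented for illustration. It only shows the core pattern: retrieve supporting passages, then answer while citing the sources used, so the output can be audited.

```python
# Minimal RAG sketch (illustrative only, not any vendor's actual system).
# A toy keyword-overlap retriever stands in for a real search index, and a
# string template stands in for the LLM generation step.

def retrieve(query: str, corpus: dict[str, str], k: int = 1) -> list[str]:
    """Rank documents by shared words with the query; return the top-k ids."""
    q = set(query.lower().split())
    ranked = sorted(corpus, key=lambda doc_id: -len(q & set(corpus[doc_id].lower().split())))
    return ranked[:k]

def answer_with_citations(query: str, corpus: dict[str, str]) -> str:
    """Ground the (stand-in) answer in retrieved text and cite its sources."""
    sources = retrieve(query, corpus)
    context = " ".join(corpus[s] for s in sources)
    # A real system would pass `context` to an LLM; here we simply echo it.
    return f"{context} [sources: {', '.join(sources)}]"

corpus = {
    "doc1": "Bloom found one-on-one tutoring outperforms classroom teaching by two sigma.",
    "doc2": "Retrieval-augmented generation cites external documents to improve auditability.",
}
print(answer_with_citations("Why does tutoring outperform the classroom?", corpus))
```

Because every answer carries the ids of the passages it was built from, a reader (or an auditor) can check the claim against the cited source, which is the auditability property the panel emphasized.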


Discussion on attention, personalization, assessment, and detection


The conversation returned to attention scarcity, with Hugo emphasizing AI-driven personalization (quick learner assessment, adaptive pathways, and instant feedback) as a way to mitigate the deficit [93-107][144-146][215-217]. Aidan reiterated that the “gold standard” remains testing without AI to gauge true retention, but also recognised that proficiency with AI tools is itself a skill that should be evaluated with the tool in the loop [171-182][321-334]. He warned that current AI-text detectors are unreliable, and described a technique of embedding subtle cues in model outputs to enable more robust detection; both panelists agreed that more reliable detection mechanisms are needed [321-330].
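The detection idea Aidan alludes to, embedding subtle cues in model outputs, resembles published "green-list" watermarking schemes. The sketch below is a hypothetical, heavily simplified illustration (the vocabulary, hashing scheme, and 0.8 threshold are all invented, and no vendor's actual method is shown): a generator biases each next word toward a pseudo-random "green" subset keyed on the previous word, and a detector flags text whose green fraction is improbably high, since unwatermarked text lands in the green set only about half the time.

```python
# Toy green-list watermark sketch (hypothetical; parameters are invented).
import hashlib
import random

def is_green(prev_word: str, word: str) -> bool:
    """Deterministically mark roughly half of all (prev, next) word pairs green."""
    digest = hashlib.sha256(f"{prev_word}|{word}".encode()).digest()
    return digest[0] % 2 == 0

def watermarked_sample(vocab: list[str], length: int = 30, seed: int = 0) -> str:
    """Generate text, preferring a green continuation whenever one exists."""
    rng = random.Random(seed)
    words = [rng.choice(vocab)]
    for _ in range(length - 1):
        green = [w for w in vocab if is_green(words[-1], w)]
        words.append(rng.choice(green or vocab))
    return " ".join(words)

def green_fraction(text: str) -> float:
    """Fraction of word transitions that land in the green set (~0.5 for normal text)."""
    words = text.split()
    hits = sum(is_green(a, b) for a, b in zip(words, words[1:]))
    return hits / max(len(words) - 1, 1)

def looks_watermarked(text: str, threshold: float = 0.8) -> bool:
    return green_fraction(text) >= threshold

vocab = ["learn", "teach", "model", "agent", "skill", "test", "trust", "tutor"]
sample = watermarked_sample(vocab)
print(round(green_fraction(sample), 2), looks_watermarked(sample))
```

The cue is statistical rather than visible, which is why such schemes can survive light paraphrasing better than surface-level classifiers, though, as Aidan notes of current detectors generally, they remain far from foolproof.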


An audience member raised a paradox in companies: senior professionals can judge AI output while junior staff cannot, creating concerns about future job security and underscoring the need to train the next generation in critical evaluation of AI [460-470].


Audience Q&A


Motivation: Anna Van Niels asked how AI can sustain motivation without a human teacher; Hugo answered with AI-driven role-play and feedback loops that mimic gym-style repetition to keep learners engaged [194-207][215-217].


Physical classrooms: Nathaniel from Australia queried AI’s role amid a social-media ban for under-16s; Aidan argued AI should be taught as a calculator-like tool with safeguards, while Hugo stressed teaching students to ask the right questions and develop critical judgment [217-250].


Applied knowledge: Pranjal Sharma highlighted the gap between academic credentials and applied knowledge; Aidan noted AI can accelerate programme creation but that skill mapping must remain human-led [247-266][254-266].


Degree bundle: Hugo described the university degree as a “convenient bundle” of credential, rite of passage, and research, suggesting AI-enabled delivery may prompt a re-evaluation or unbundling of these components [408-416]; Debbie defended the broader mission of universities (fostering critical thinking, deep mastery, and research) while acknowledging graduates may need additional AI-supported skill development for the workplace [425-426][267-272].


Closing remarks


The panel concluded that AI will irrevocably reshape knowledge delivery, but effective education in the AI era will require:


– preserving and cultivating human attention, critical thinking, and self-knowledge;


– deploying secure, enterprise-grade LLMs that can be personalised and audited;


– maintaining human teachers as mentors and storytellers; and


– establishing robust, dual-track assessment regimes that combine tool-free testing with AI-enhanced simulations.


Consensus highlights


– Attention and critical thinking are the most endangered cognitive resources.


– AI-driven personalization can help alleviate attention scarcity.


– Explainability and trust are non-negotiable for widespread adoption.


– Human educators remain indispensable, with AI serving as an augmentative tool.


These points reflect the transcript’s emphasis on trust, explainability, and human agency as central pillars for responsible AI integration in education [40-44][65-70][340-353][184-188][169-176].


Session transcript: complete transcript of the session
Debbie Prentice

Good afternoon, everyone, and thank you for joining this town hall discussion where we will be talking about a topic that university and education leaders are all buzzing about, which is namely dilemmas around knowledge. This has been a topic for us since schools were first invented, libraries were first invented, and it’s still with us today. It’s extremely relevant today in an age in which AI is changing, making knowledge available broadly to everybody all the time. But it doesn’t mean that there aren’t still dilemmas around knowledge, and we’re going to probe these today. I’m Professor Debbie Prentice, and I’m the Vice Chancellor of the University of Cambridge. I’m very pleased to introduce you to our panelists for this session.

So we have Aidan Gomez, who is the co-founder and chief executive officer of Cohere, an enterprise AI company developing advanced language models for use by business. And we also welcome Hugo Sarazen, who is president, chairperson, and chief executive officer of Udemy, which provides a wide range of business and leadership development courses, including AI courses, to businesses and organizations around the world in fields such as financial services, higher education, government, manufacturing, and technology. We have some fascinating questions to discuss this afternoon around knowledge, misinformation, AI, attention spans, and even the nature of expertise. And we’re going to bring the audience in early and often, so I hope that you’ll all participate with us. We, as panelists, come from very different perspectives.

Aidan and Hugo run very successful businesses selling a product. They are from the for-profit educational technology sector, and I’m from the not-for-profit sector. So there are different pressures, different opportunities, different challenges that we face in this space. Before we get started with our panel discussion, I’d like to remind the online audience that if you are sharing with us through your social channels, you should use the hashtag #WEF26. And whether you’re joining online today or here in person, and it’s great to see so many of you here, thank you so much for coming. Please feel free to get involved in the session by reacting to the questions we discuss in our conversation and also by submitting questions to panelists via the Slido app.

Okay? Okay, so our first question is, in a world of instant answers and AI assistance, what is becoming the scarcest resource? Okay, the answers are from a list of options. Is it sustained human attention, independent judgment and critical thinking, deep understanding and mastery, motivation to learn in the first place, or trust in what we know and who to believe? And actually I said or. That could be and. You can choose as many of these as you want. Okay? So you can see on the screen, actually, as people are responding via the Slido app, but I want to ask our panelists, what would you say? So you can see the answers on the screen.

What would you say, Hugo?

Hugo Sarazen

Well, I think it’s a complicated question, and I think there’s a lot of all of the above. If you take a historical perspective, knowledge was scarce. That was a source of power. Our countries fought for that. And we also had experts that built knowledge over time, but very few polymaths. Very few. Those ones that were, were very, very, very important. Now today, you have LLMs that can learn everything, and they can learn across different domains, and they can become the polymath. So every data center, every time we say there’s a new infrastructure that’s being added, we’re adding millions and millions of polymaths. And that becomes a democratization of that knowledge. The problem is, and there’s an amazing quote from Herbert Simon: when you have a wealth of information, you have a poverty of attention.

And I think that’s what’s happening for a lot of learners, and that’s why traditional methods need to change. And we’re going to come up and talk, I’m sure, about how learning needs to evolve, what the process is, what’s the role of traditional institutions in changing, what corporations need to do, and what individuals need to do. So I think attention is one big component. The second is, when you go to an LLM and AI and you ask a question, it will give you an answer. It will feel very comfortable with that answer. It doesn’t explain. Explainability in AI is a whole field, a whole domain, and most of these LLMs don’t give you that.

So if you have a society that begins to rely on products that give you an answer but don’t tell you where that answer came from, how do you learn, and what do you have in terms of trust? So I think the trust piece is also equally important. So I’ll stop at that; we can go well further, but…

Aidan Gomez

Yeah, I was looking at the poll up there, and for whatever reason the first one that came to me was deep mastery, which seems to be the most unpopular choice. So I think, you know, when you exist in a world where it’s so fast and easy to get answers to whatever question you might have, or to get a very surface-level answer to even a complex question, like, how does quantum mechanics work? It’ll give you a four-paragraph response. But that’s not deep understanding of the subject matter. And so I think LLMs and chatbots can fool you into thinking that you understand something when you don’t, and I view that as a core risk as we integrate these LLMs into an education environment: this false sense of mastery or understanding.

We can discuss the different solutions to that. I think that testing is essential to it. The idea that you need to take away the tool and see what the human alone understands and has retained. To assess depth, you have to take away those tools. I think that is, from my perspective, what’s most at risk.

Debbie Prentice

That’s interesting. My answer is a variant on yours. I, of course, wanted to reject all five. But I think it’s because of where I come from, coming from the university sector. I wanted to say self-knowledge for the learner. It’s part of what you’re saying. You don’t know if you’ve mastered it and you don’t know if you’re interested in it. You don’t know if you get it. It comes to you. So much of what you learn comes from what is difficult and what is compelling. So for those cues to no longer be actually useful cues for self-understanding means, how will you even know? But that’s my answer anyway.

So we can see what the – whoops, it went away. I think critical thinking was the one that won out at the end. It looked like critical thinking was actually what the audience preferred. We can keep coming back to this, but I want to use this as a jumping-off point – oh, there we go. Okay, yeah, critical thinking and then sustained attention. They were neck and neck for most of the time, yeah, and then trust and then deep mastery, right. That’s interesting. So I want to talk a little bit about each of what you do. So we can start with you, Hugo. Tell us about Udemy.

Hugo Sarazen

So Udemy is a 15-year-old company that, at the time, did a pretty cool thing around introducing online learning. It was a great innovation to change accessibility and the cost of reaching out to millions and millions of people, and it created a creator economy around that. So we now have 250,000 courses, 80 million learners on a regular basis. We serve 17,000 large enterprises. We have 85,000 instructors that kind of come to this marketplace to offer their wares. They’re very deeply committed. They know stuff and they want to share it with the world. And about 40% of our revenues are in the U.S. The rest is around the world. So we’re in tons of languages, 46 plus. And the funny story: I’ve only been in the role for less than a year.

When I came in, at my first town hall (and the people who may be listening online who were working on this will remember), I said, we’re going to exit online learning. That is a wonderful innovation. It did a bunch of great things, but it doesn’t solve the problem of today. And with AI, we can do so many different things. So I want to make a hard pivot of the business toward becoming an AI platform to reskill the workforce of the future.

And we can talk about that. And I don’t want to take too much time, but there’s a lot of ways you can use AI to do some of the things you were suggesting: to kind of help build the mastery, how to do assessment using AI, how to use AI role play to immerse people. And it also does the thing that I think is so, so important. Traditional online learning, and actually traditional learning: you’re an instructor and you teach to the average, right? You create your curriculum and you think you’re going to hit, like, most of the people. You can’t cater for the super fast. You can’t cater for the super slow. It’s the same with online learning.

And then different people have different starting points, and we don’t have an easy way to accommodate that. Now with AI, you can do a quick assessment. You can break apart the class. You can have feedback loops and reinforce that in a very, very powerful way. And I think that’s one of the things that’s going to emerge of using AI to kind of re-skill the workforce. It’s going to build on that previous generation of online learning to do something pretty remarkable and quite different moving forward.

Debbie Prentice

Thank you. Aidan?

Aidan Gomez

Yes, so Cohere builds large language models. So we’re one of the developers of this core piece of technology that powers things like ChatGPT and all these different applications. We’re focused purely on the enterprise side of the house, and so we work with businesses to put those models to work inside the organization. We give them access to internal data and systems that the humans have access to. And then we teach, or we work with our customer to teach, the workforce to shift their role from being the ones individually doing the work to managing a team of these models or agents to carry out that work. Our big differentiator is on the security side. So there’s no data exiting our customer’s perimeter.

Instead, we send all of our models and software to them, and they keep it self-contained. Yeah.

Debbie Prentice

So you have certain customers who will only subscribe to you, right?

Aidan Gomez

Yeah. Certainly critical industries, financial services, telco, healthcare, and then, of course, government applications as well. Anything that’s a national security concern, and arguably education is within that remit, that’s a place that we do extremely well.

Debbie Prentice

That’s interesting. So, Hugo, what can we learn from the arc of progress from MOOCs and… online education to now AI-driven?

Hugo Sarazen

I think a few things. The first one is, you know, if you look at the traditional learning processes and methods that we had, there was a void. And that’s why online learning took off and that’s why there’s a whole industry. And it addressed a bunch of problems around, you know, getting to skills, specific skills, and also getting to certification, and then helping organizations reskill. So that was a very, very, very powerful thing. What is now becoming a lot more of a priority, and in the last six months I spent an enormous amount of time, I spoke to 400 CHROs and heads of learning and development in large enterprises. So the pattern that I saw is they had an enormous proliferation of tools and things that were bought during the pandemic.

During the COVID era, very few could explain the ROI. How do you measure the ROI of learning? It’s a really good question. And everybody kind of defaulted to: did they take the class? Did they complete the class? Hours of learning. And as a business leader, it’s not particularly helpful. And it gets even worse. When they get certification in Google Cloud or AWS or Cyber-something, to know that you certified yourself two years ago: I’m a business leader. I want to know, are you current? Are you relevant today? So I think the arc now is moving in the enterprise to an ability to do in-the-flow-of-work learning, do it at bite size, do it in an adaptive way, and then we can come back to what adaptive means, and with an ROI, an ability to measure what skills people are deploying in real time.

So you’re now beginning to create a workforce management tool that is powered by an operating learning system.

Debbie Prentice

So, Aidan, you said that you were not as worried about sustained human attention as you were about some of the others. How does Cohere solve the attention problem?

Aidan Gomez

Well, I mean, I don’t know if Cohere solves the attention problem. I think it’s definitely a concern. There are lots of pressures on our attention span. I think social media short-form content is driving a lot of that. I’m certainly on the receiving end of that; you know, after 30 seconds, because of TikTok, my attention span ends and I need to talk about something else. And also just the way that we do business now, in these short 30-minute meetings where you completely swap context. So I think those are difficult challenges, not related to AI, that are still applying pressure on human attention span. But it has a pretty strong consequence on how people learn, and how students can learn when they’re constantly being distracted, when they need to sit with material over time.

I think AI can perhaps assist in resolving that by its ability to personalize the experience to the individual and engage them more effectively. And so if you have a generic education offering, which, you know, bores some part of the population and excites the other, you’re missing, you’re underserving that population that gets bored. But if we can have a very targeted, scalable approach for each individual, giving them something that’s engaging, exciting (if they are auditory learners or visual learners, we can tailor it to them), then hopefully we keep their attention better than we otherwise would. So AI might be part of the solution as opposed to the source of the problem.

Debbie Prentice

Hugo, does your vision of AI comport with that?

Hugo Sarazen

It completely matches. And I think, you know, there is a well-known piece of research from the 80s from a university, a Chicago professor. It’s the Bloom two-sigma problem. And they did some research where they looked at the ability to learn with one-on-one coaching. It was two sigma higher than the classroom. But the economics of doing that was not there. That’s why we have these big classrooms, and that’s why there are bigger classrooms for first years. It doesn’t deliver the same learning experience. Now, to Aidan’s point, with AI, you can personalize the experience. You can adapt it, and you can create feedback loops that a professor cannot today. You’ve got 40 students. You cannot easily pick up who’s not following.

Some teachers are amazing, and they have the ability to do incredible things. But now you have the ability to have that feedback. So I think we’re going to see a lot of AI expert tutors and coaches that will have context and that will have been trained on a body of knowledge that is hopefully trusted, hopefully accurate, and will help in the way that you like to learn. So if you’re an auditory learner, we’re going to give it to you that way. And if you’re a visual learner, we’ll give it to you that way. I think that’s a really exciting and promising world we’re entering from that point of view. So we’re going to go to questions from the audience in just a second.

So start thinking about your question. I’m just going to ask one more question of our panelists myself, which is where do humans fit in in this brave new world of AI -based education? I think all of us who are educators know that at some point we need human intervention in the process, even with the most fabulous technology. Where do you think they need to come in?

Aidan Gomez

I think they’re the customer. So they’re the ones that we’re serving with this technology. And so we need to be able to serve them. We need to create the best possible product for them. If we just do surface-level education that’s very confirmatory (oh yeah, you’ve got it, great, you know, a bit sycophantic), then they won’t be effective in the real world when they actually enter the job market. And so there’s a burden on us as product creators to create the most effective product to teach people skills and give them knowledge. And I think that AI is actually an incredibly effective tool towards that. But I do still believe that it’s a tool. It’s like a calculator.

It’s something that you can lean on to give you faster answers, more thorough answers. But we still need to ground ourselves in the human without the tool. And so testing becomes, it’s always been important, of course, but I think it becomes absolutely critical now because you can fake your way through an education system much more easily. And so having very strict testing regimens is going to be essential.

Hugo Sarazen

I have a variation on this. I do think the teachers, the instructors, are partly the customers, but I do think they need to be in the loop. They’re amazing storytellers. They have a way… if I ask anybody in this room, who was your favorite teacher in high school, and I pause for five seconds, there’s somebody in your mind right now. What was special about that person? You cannot replicate that, but you can augment that. You can make that person now be able to maybe teach you on something that they were not. Like, my favorite teacher in high school was a physics teacher. I loved the way he presented, I loved the way he engaged, and it was so motivating. My chemistry teacher was not that. But now I can augment with AI and have the voice, not just the voice but the way he thought, the way he presented the information, applied to a different topic.

And I think that gets pretty exciting as well. You may finally understand chemistry. I may finally understand chemistry. I stayed away from chemistry because of that. But physics I love.

Debbie Prentice

Okay, I want to open up to questions from the audience. I will call on you the old-fashioned way, if you raise your hand. Oh, sorry, you have to speak into my ear.

Audience

Anna Van Niels, director of the Livium Trust. I guess learning is a bit like working out: it's got to hurt to be effective. How do you think AI-enabled tech of various kinds can help with that motivation issue? You've talked about the teacher being the one that absolutely motivates, but in a lot of the systems we're talking about, in the workplace, et cetera, you're not going to have that human in the loop. So can we do things with AI and tech that could prompt that?

Hugo Sarrazin

Yeah, I'm going to offer a few suggestions. And this is not, like, future; this exists today. So you can do AI role-playing, and you can do it in a way that makes you go through the learning process. And I'm going to use a business example. So if you're a new salesperson and you have a new product that you need to sell, you can load up the specs of that product into an AI role play and practice selling to a person. And there will be a rubric against which we're going to score you. And we're going to discover whether or not you are competent at selling this product that you're responsible for. So that's a business example.

I can do the same thing in a call center. You know, we have one of the largest call center outsourcers. There are 20,000 call center agents they need to onboard every month. That is incredibly complicated. But now you can load, you know, the most common error causes, the most common tickets, the product specs, and instead of taking three weeks to onboard somebody through the process of learning and experimenting, you can do a role play and accelerate that learning with a lot of practice. So it's simulation. That's one powerful example. I think the other one is AI can give you feedback and monitor the progress you're making, in a way that brings you back to that point in the gym where you're struggling with whatever exercise you're doing.

We're going to make you do that exercise more and more and get that repetition in, in a way that addresses the gap that you have.

Audience

Hi, I’m Nathaniel. I run an education company in Australia. Now, as a region, Australia has an interesting relationship with technology. As many of you may know, we’ve just recently had a social media ban for young people under 16. And in a similar vein, we don’t really have a good consensus around the role of AI. So my question is, what do you believe the role is for AI in physical classrooms? And what would you say to people who might be on the side of banning versus not banning it?

Aidan Gomez

Yeah, I'm interested to hear your answer. But from my side, I think it's a tool, like a calculator. I think a duty of the education system now is also to teach people how to use this AI, how to engage with it, how to most effectively use that tool. And so it certainly should exist as part of the classroom and as part of schooling. But like I said, it can become a crutch and it can be used to cheat. And so we have to come up with ways to ensure that students aren't misusing it or using it in ways that are unproductive to their learning. I'm excited to hear your answer.

Hugo Sarrazin

I've got a two-part answer. The first one is: in any business process or any endeavor, you have the problem statement, asking the right question; you have the solving; and then you have the quality assurance at the back. It's a feedback loop that you go through, a circle, all the time. And education is no different. What AI does well is that middle part. It doesn't do a whole lot on the front end and the back end. So what we need to teach young students and adults is how to ask the right question. The critical thinking. I love that it came out at the very top. Super, super important. But, as you said, the calculator is a calculator.

The fact that I can't do the multiplication table all the way to 100 is not that relevant for my day-to-day job. But the fact that I can be critical in my thinking, I can summarize, I can contextualize: I think those are the skills you want. Second part: for those who are curious, I have no relationship, but I am just fascinated. There's a school in the U.S. called Alpha School, and they've got a really powerful model. They are using AI, they are encouraging students to use AI, and they are demonstrating, I'm going to get all the stats wrong, but they get two to three times the learning in half the time. And then in the afternoon the kids go learn how to be a civic leader, or a leader in all sorts of other contexts, instead of spending all their time where historically you would have learned, you know, various dates. It's not that relevant to know the dates of specific things, but it is relevant to understand the context of those events, and I think that's where we can focus a lot of the effort.

Audience

Thank you. Terrific topic to be discussed at Davos. I'm Pranjal Sharma, I'm from India, an author and analyst. We're looking at a lot of the micro pieces, but I'd like to focus on the macro. We have a situation today where we're all skilled up but with nowhere to go, right? Last year, I think the ILO says, 7 million fewer jobs were created, not to mention the existing jobs that disappeared. So there is a cry from the industry. Firstly, they don't know who to hire and why to hire and what to hire, and they don't even know what credentials to test on. The second part is there's a huge disconnect between what they want and what academia is offering.

Plus, the concept of a degree shouldn’t exist, and even continuous learning in terms of applied knowledge is missing. So I think the core phrase to be used here is applied knowledge. How do you create information for a person to be able to earn a livelihood, irrespective of white, gray, blue collar? And I think that’s the gap of applied knowledge delivered in the right way to the right people at the right time.

Aidan Gomez

From a labor market perspective, I think there's a good case to be concerned about the impact of AI and what might happen, and reskilling is going to be an essential component of that. The mismatch in the market between what education institutions are offering and what the market is demanding, I think that is a major issue that we need to figure out how to solve. I think AI can be a part of speeding up delivery of new programs and courses and keeping up with changes in demand much faster than we have in the past. The process of scaling up educational infrastructure to meet a shift in market demand has been historically extremely slow and laborious.

But with AI, we're able to create programs much faster. The models are infinitely scalable. They're always awake, 24/7. They never get annoyed at the student. So we have these incredibly compelling tutors to deploy at scale against the problem of teaching the population the skills that we need. But I think the issue might be in identifying the skills that we need, and that's still going to have to come first. From us: the humans, the business leaders, the policymakers. So that might be the core constraint. We need a direction to be set against to start building the solution.

Debbie Prentice

I think too, I mean, what I would say is I think that, you know, universities aren't necessarily teaching to what businesses need. We're teaching things that we believe are fundamentally important, and I would defend that. I mean, we're teaching critical thinking, and we're teaching deep mastery, and we're teaching them to people at a critical moment in their lives, most of them, where they actually really need to have a go and learn these skills. They may need additional skills when they go out into the workplace, and that, as far as I'm concerned, is what the kinds of products that you're talking about are for.

Audience

Good, thank you. Let's go back to critical thinking, because now, in the university, students widely use AI assistants and get instant answers. In that case, how can we teach them to increase their capability for critical thinking, to make factual checks, logical checks, scientific checks, and ethical checks of the instant answers they get from the models?

Hugo Sarrazin

The middle part is a foregone conclusion: the AI will outdo the human. So where we can be competitively differentiated versus the AI is on the front end and the back end. So we need to adapt the curriculum to make sure that people are asking the right questions with the right context. And it is critical thinking, but we need to expand it, and we need a better way to evaluate the level of critical thinking these students have when they hit the workforce. And then the same on assessing. I mean, AI is marvelous right now. It generates code like there's no tomorrow, but it's mostly garbage. We have bottlenecks in quality assurance on the back end.

So how do you create the new tools, and how do you teach people to have the critical thinking to see if this is using the right library? Is it using the right pattern? Is it using the right data? I think that's one of the core changes that academic institutions, organizations like mine, and individuals need to make. As you do your self-development, you need to really lean into this ability to ask the right question, because in the middle part you don't have a competitive advantage. You will be outgunned. And the thing that is even more crazy: historically, people did PhDs. I have a PhD. I went super deep on one little topic and got buried somewhere in the sinkhole.

And it took my entire body of effort to get there. And to be a polymath is very hard. To be able to understand... I know nothing about chemistry. I know nothing about biology. Psychology, my dad did that, so maybe something rubbed off on me. But AI is a polymath by design. It has the data set across all of that. So the middle part is a foregone conclusion, folks. You need to get good at the front and the back end.

Aidan Gomez

Yeah, I was going to say another thing, which is that teaching is a skill, in the same way coding is a skill or doing math is a skill. And so it's a core capability that we as model developers need to invest in. It's not something that is easily benchmarked, and it's not something that is accurately tracked at the moment. But the more this rolls out, I mean, it's already in the hands of every student on the face of the planet, it's going to become imperative that we're able to track the performance of models on teaching tasks, to ensure that they're actually effective, and improve that over time. At a technical level, that's just not done presently.

I don’t know of a teaching benchmark, but I can point to probably 30 code ones, 50 math ones, you know, biology, et cetera.

Audience

All right, it happens from time to time. I think the psychology is rubbing off well. When you say AI is a polymath by design, it's a brilliant thought, and you articulated it very well. Which also means that, by definition, humans cannot compete. So we basically have to end the session and say that doom is nigh.

Hugo Sarrazin

Well, I don't think so. I mean, I'm more optimistic. The polymath thing is real. I mean, if you take the historical perspective, he who had Leonardo da Vinci on his team had an advantage to build a war machine or a better court or whatever. Now there's going to be a similar debate: whoever assembles these polymath AI thingies has an advantage. That is a foregone conclusion; that's why there are all these battles for... But I think we cannot, as the human race, give up that ability to influence. I think we made a point, I think you did at the very beginning: these models typically are not designed, though some of them can be designed, to explain their reasoning.

So if, as a society, we begin to rely on this thing that is super facile, that gives us an answer, and we don't have the questioning, and we don't do the checking and the validating, we lose agency on important decisions. And I think that is one of the things that we need to focus on deeply as a society. It also leads to the guardrails, the ethical things, and all that other stuff. We need to go there, because in the middle it's going to come up with answers that will be amazing in biology, and will solve things in biology that, because I got trained in the English language, I don't know. But it's going to be pretty wild, and we cannot lose agency around this polymath.

I mean, every data center is going to have hundreds of millions of polymaths in there.

Audience

Yeah, I just want to share a thought. I believe there's a type of paradox within companies about this critical thinking. Let me say it this way: we senior professionals, we know how to judge what the AI is doing. So I asked the AI one day to model whatever, and I could judge it. My juniors were not able to judge, because they don't have the experience. So, to some extent, I could fire them, because I don't need them anymore because of these AI technologies. But maybe there will be a gap. At some point in time AI can enhance a lot of what I do, but if you don't train, let's say, the new generation, the juniors, who in the future will be able to do this critical thinking on what the AI is doing? I don't have the answers. Obviously companies need to seek efficiency, and we need to do our best to reduce costs and whatever, but I think it's something we as a society will have to think a lot about.

Debbie Prentice

That's fair, thank you. We've got one here. You were up, right? Yeah, I didn't just call on you.

Audience

Hi, thank you for your insights. I'm Kian, the CEO of an AI company called Workera. I really like what you said about testing the human, and I think in the world of testing right now there are almost two camps: one says you can test them with the calculator, and one says you can test them without the calculator. And overlaid on top of that are the risks of proctoring and understanding who's cheating, who's not cheating, and what you can tell about it. So how are you thinking about that idea of testing with or without the calculator?

Aidan Gomez

Yeah, the cheating question: can you tell whether a piece of text was written by AI? It's really tough. A lot of the detectors out there are total scams. They'll say 100% AI even when it's not used at all. So they're extremely overconfident, with a very high error rate on both sides, false positives and false negatives. But the answer to that question is: you can. You can insert into language models subtle cues to indicate to the reader that this was written by an AI. Instead of sampling from natural language, the language that I'm drawing from right now, you can sample from a slightly shifted distribution and use certain words much more than any human would.

And then as soon as those words appear, you have a good piece of evidence that this was written by a language model. And so we language-modeling companies do that. We shift the distribution of the language model so that when its text gets read, we have some ability to say, you know, I can assign a likelihood that this was generated by my model. So you can detect that to some extent, but many of the tools are scams, and I think we need to make better tools and put them in the hands of educators more readily. On testing with and without the calculator, I have a pretty strong focus on without the calculator.

I think everything needs to be ripped away, and you, standing alone as yourself, need to prove your knowledge. That is the gold standard test of what you have learned and retained. But of course, as I was saying earlier, using the language model is a skill itself, and we should have space to test that, in which case, of course, you're going to need the LLM in the loop.
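The distribution shift Aidan describes can be sketched in toy form: deterministically split the vocabulary into a favoured "green" half keyed on the previous token, have the generator prefer green tokens, and let a detector score how improbably often a text lands in that half. The hashing scheme, split fraction, and threshold below are illustrative assumptions in the spirit of published red/green-list watermarking work, not any vendor's actual method.

```python
import hashlib
import math

def green_list(prev_token: str, vocab: list[str], fraction: float = 0.5) -> set[str]:
    # Deterministically split the vocabulary, seeded on the previous token,
    # so the generator and the detector agree without sharing the text.
    ranked = sorted(vocab, key=lambda w: hashlib.sha256((prev_token + w).encode()).hexdigest())
    return set(ranked[: int(len(ranked) * fraction)])

def watermark_z_score(tokens: list[str], vocab: list[str], fraction: float = 0.5) -> float:
    # Count how often each token falls in the favoured half keyed on its predecessor.
    hits = sum(1 for prev, cur in zip(tokens, tokens[1:]) if cur in green_list(prev, vocab, fraction))
    n = len(tokens) - 1
    # Under the null hypothesis (human text), hits ~ Binomial(n, fraction);
    # a large z-score is evidence the text came from the shifted distribution.
    return (hits - fraction * n) / math.sqrt(n * fraction * (1 - fraction))
```

A watermarked generator would bias its sampling toward `green_list` tokens; text that scores, say, above z = 3 is then very unlikely to be human, while ordinary prose hovers near zero.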

Debbie Prentice

Let me seize the chair's prerogative here to ask, because I'm curious what you would both say to this question. What happens, in this brave new world of polymaths and not showing your work and not explaining your answer, to expertise or authority? You know, at Cambridge we have library after library of big books that tell you the truth, or that was always the idea, right? You would go look it up somewhere. What do you do in a world in which looking it up is no longer... there's not a dictionary, there's not a truth?

Hugo Sarrazin

I'll start. I think most technologies go back and forth; there's a pendulum. We're in the swing where bigger is better. We're throwing everything under the sun in; every Reddit quote is now part of training every large language model. And that is good: it's going to give you an average answer for an average problem. Now, over time, I think we're going to come back and say you do need something specialized and trusted, and we need to have confidence that we used the right source. And I think there will be a space for that. At least I want to hope that will be the case: that we're going to come back and have these specialized models that will not only use RAG but will be defined from scratch with the right intent.

And they don't need a zillion trillion function points or whatever; they just need to be trained on the expertise. And then you do need to trust it; that is going to be incredibly important. I think we also need a lot of research on explainability. Yoshua Bengio at the Université de Montréal, one of the people who got the Turing Award, has been very vocal about this. We need to go back and explain a lot more. These are statistical models; this is all this is. These are huge matrices with weights assigned to different things. So this is not a piece of software where you say if this, then that.

This is just statistics. So it, on average, gives good answers, but it depends on the data. And you need to come back and put a bunch of tools in place to build explainability into the model. And there are ways to do it; it's not yet super advanced. I think we need to invest in that so that we have the confidence and build the trust. And I do think it's part of the learning question you have, because if the models are black boxes, you lose the ability to learn from their deduction process (which doesn't exist; it's just a statistical model, there's no deduction). So anyway, those are my two ideas.
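Hugo's point that these are "just statistics", huge matrices of weights, is also what makes basic explainability tooling possible. A minimal sketch on a hypothetical one-neuron linear model: the input-times-gradient attribution decomposes the prediction exactly across input features, so you can see which inputs drove an answer. Real explainability research targets far larger, non-linear models; this is only the idea in miniature.

```python
def predict(weights: list[float], x: list[float]) -> float:
    # A one-neuron "model": a weighted sum of inputs, i.e. pure statistics.
    return sum(w * xi for w, xi in zip(weights, x))

def attribute(weights: list[float], x: list[float]) -> list[float]:
    # For a linear model the gradient w.r.t. each input is its weight, so
    # input-times-gradient splits the prediction exactly across features.
    return [w * xi for w, xi in zip(weights, x)]
```

For a non-linear network the same product is only a local approximation, which is part of why explainability remains an open research area rather than a solved tooling problem.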

Aidan Gomez

Yeah, over the course of the last year, there was a paradigm shift in the type of model that gets used now. We don't just use direct input-output response models like you were alluding to. Every model now is a reasoning model: before it actually responds, it has an internal monologue where it thinks through the problem, tries to reason about it, and then delivers a response. It is primitive, it's a year old, but it's getting much better. And so I think exposing that to the user and showing these chains of thought, this reasoning, is an important solution. And then, like you say, RAG, which is retrieval-augmented generation, where the model isn't just drawing on its own knowledge but is actually making direct and specific reference to external knowledge.

So we can plug it into the Cambridge library. I went to Oxford, so the Bodleian. And it can cite directly back from those sources. Both reasoning and RAG provide some degree of auditability, so you can have a little bit more confidence in the response because you can check its work.
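The retrieval-augmented generation Aidan mentions can be sketched at its simplest: rank stored passages against the query, then ground the answer in the best match and cite it. The term-overlap scoring and the tiny corpus here are stand-ins for the embedding search and LLM generation a real RAG pipeline would use.

```python
from collections import Counter

def retrieve(query: str, corpus: dict[str, str], k: int = 1) -> list[tuple[str, str]]:
    # Rank passages by word overlap with the query (a crude stand-in for
    # the dense-vector similarity search a real RAG system would run).
    q = Counter(query.lower().split())
    def score(passage: str) -> int:
        return sum(min(q[w], c) for w, c in Counter(passage.lower().split()).items())
    return sorted(corpus.items(), key=lambda kv: -score(kv[1]))[:k]

def answer_with_citation(query: str, corpus: dict[str, str]) -> str:
    # A real system would hand the retrieved passage to the LLM as context;
    # quoting it alongside its source id is what keeps the answer auditable.
    source_id, passage = retrieve(query, corpus)[0]
    return f"{passage} [{source_id}]"
```

Because the citation points back to a concrete passage, a reader can check the model's work rather than trusting a black box.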

Debbie Prentice

Just out of curiosity, what’s driving that? What’s driving the need for reasoning?

Aidan Gomez

Because the models were brittle. They would very confidently answer with the wrong solution. And it turns out humans don't put the same amount of energy into answering every question, but that was the prior expectation on these models. You would ask them, what's 1 plus 1, and they would immediately respond. And you would ask them to prove some unsolved Erdos problem or something, and they would put the same amount of effort into that as into 1 plus 1. That was obviously wrong. You know, there are some problems that we should spend days, weeks, months, years, decades putting effort into solving, and there are others that can be responded to instantly.

It’s just a better, more robust intelligence.

Debbie Prentice

That’s fascinating. We have time for one more question. Anything pressing in there?

Audience

Thank you. Yeah, I'm very interested to ask a question circling back to the beginning, where we said we have a public-sector university as well as a tech platform in the same room. The question I have on my mind is that right now, in the U.S. especially, the cost of education is so astronomically high and prohibitive. Lots of people are saying, the narrative goes, that there's no point going to university anymore, and in that world a lot of attention would turn to online education. I think we're all very familiar with Udemy. What are the gaps between an online education and an accredited college or an elite college?

Has there ever been customer or market demand for online education to move towards, or imitate, the model of a traditional college experience? Has that ever surfaced as a need? Just comparing the gaps there.

Hugo Sarrazin

I'm going to say something maybe controversial, but it's fun. The university degree is a bundle. It's a convenient bundle that, as a society, we chose to create. So you learn something, you get an accreditation, you get a degree, you have a rite of passage. You know, these kids are at a moment where they leave home, they go, and that bundle is convenient. And we bundle that with research, because the same people can now pass on their knowledge to others. It is a convenient bundle as a society. It has worked well for a long time. Oxford and Cambridge are examples of long-standing institutions that had a version of this bundle. It changes over time.

Is it time to revisit whether all of these components need to fit together, because of the economics and what AI can do to change the economics of delivery? Maybe. I think the second...

Debbie Prentice

Think it quickly.

Hugo Sarrazin

Yeah, quickly. And the second piece is just adaptability. If you have a labor market that moves so fast, you're now going to begin to put more weight on addressing a specific need for a specific skill. So I think that is a reality, in addition to that potential unbundling of the whole experience.

Debbie Prentice

As for a good word for the university, I'll give the university's perspective. I'll just end by saying I think that they are currently serving very different functions. Right now, a university does so much more than provide knowledge that it still is worth its weight in gold, and it is gold. But we'll see how the space develops. With that, I'm getting all kinds of signals from the producers, so we've got to end it. But thank you very much. Thank you for your questions, and thank you to our panelists.

Related Resources — Knowledge base sources related to the discussion topics (20)
Factual Notes — Claims verified against the Diplo knowledge base (6)
Confirmed (high)

“Professor Debbie Prentice is the Vice‑Chancellor of the University of Cambridge”

The knowledge base identifies her as “Professor Debbie Prentice, Vice Chancellor of the University of Cambridge” confirming her role [S1] and also refers to her as “Deborah Prentice, Vice-Chancellor of the University of Cambridge” [S17].

Confirmed (high)

“Aidan Gómez is co‑founder and CEO of Cohere”

Both sources list him as the CEO (and co-founder) of Cohere, confirming his position [S1] and [S17].

Confirmed (high)

“Hugo Sarazen is President, Chairperson and CEO of Udemy”

The panelist is identified in the knowledge base as Hugo Sarrazin, President and CEO of Udemy, confirming the organisational role though the surname spelling differs [S1] and [S17].

Confirmed (medium)

“The moderator highlighted the contrast between for‑profit ed‑tech companies and a not‑for‑profit university”

A source explicitly notes the for-profit nature of Aidan and Hugo’s businesses versus the not-for-profit sector of the academic speaker, matching the report’s description [S25].

Correction (high)

“The report misspells Hugo Sarazen’s surname; the correct spelling is Sarrazin”

Both knowledge-base entries list the Udemy executive as Hugo Sarrazin, indicating the report’s spelling “Sarazen” is inaccurate [S1] and [S17].

Additional Context (low)

“The report refers to the panelist as “Debbie Prentice” while the knowledge base uses “Deborah Prentice””

The knowledge base records her full name as Deborah Prentice; “Debbie” is a common diminutive, providing additional naming context but not a factual error [S1] and [S17].

External Sources (89)
S1
Driving Enterprise Impact Through Scalable AI Adoption — – Hugo Sarazen- Aidan Gomez – Hugo Sarazen- Debbie Prentice
S2
Driving Enterprise Impact Through Scalable AI Adoption — -Debbie Prentice- Professor and Vice Chancellor of the University of Cambridge, representing the not-for-profit educatio…
S3
WS #280 the DNS Trust Horizon Safeguarding Digital Identity — – **Audience** – Individual from Senegal named Yuv (role/title not specified)
S4
Building the Workforce_ AI for Viksit Bharat 2047 — -Audience- Role/Title: Professor Charu from Indian Institute of Public Administration (one identified audience member), …
S5
Nri Collaborative Session Navigating Global Cyber Threats Via Local Practices — – **Audience** – Dr. Nazar (specific role/title not clearly mentioned)
S6
Media Briefing: Unlocking the North Star for AI Adoption, Scaling and Global Impact / DAVOS 2025 — – Aidan Gomez: CEO of Cohere Aidan Gomez: And there were definitely indications that it was a promising architecture f…
S7
Lift-off for Tech Interdependence? / DAVOS 2025 — – Aidan Gomez: CEO at Cohere Aidan Gomez: I’ll be quick. So I think, from our perspective, Cohere is focused on prod…
S8
AI expert Aidan Gomez joins Rivian board — Aidan Gomez, co‑founder and chief executive of AI specialist Cohere, has been appointed to theboard of electric‑vehicle …
S9
Pre 6: Countering Disinformation and Harmful Content Online — Valentyn Koval: Democracy is by their very nature. open societies, war of censorship, and bound by bureaucratic inertia …
S10
Education, Inclusion, Literacy: Musts for Positive AI Future | IGF 2023 Launch / Award Event #27 — Eve Gaumond:Thank you very much. I would like to thank you for inviting me to comment . I would like to build upon three…
S11
IGF 2024 Global Youth Summit — AI technology has the capability to create virtual classroom environments and interactions. This can offer educational e…
S12
Rethinking learning: Hope, solutions, and wisdom with AI in the classroom — But teachers need support. They need professional development around AI literacy, reasonable class sizes that allow for …
S13
Can AI replace the transmission of wisdom? — However, in all these cases, we must keep the role of AI as a supportive tool, not as a teacher. This is because technol…
S14
AI (and) education: Convergences between Chinese and European pedagogical practices — **Norman Sze** (former Chair of Deloitte China) provided industry perspective on AI’s impact on professional work, notin…
S15
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Lt Gen Vipul Shinghal — “The black box of data must become a glass box.”[11]. “the commander taking a decision based on an AI -enabled system bu…
S16
How Small AI Solutions Are Creating Big Social Change — Artificial intelligence | Building confidence and security in the use of ICTs Reliability, Safety & Verifiability
S17
Knowledge in the Age of AI: World Economic Forum Town Hall Discussion — This is a critical business challenge as organizations struggle to demonstrate the value and impact of their learning in…
S18
From the Parthenon to Patterns: Ancient Greek philosophy for the AI Era — However, some new possibilities emerge as well. For example, AI platforms such as Chat GPT could simulate dialogue aroun…
S19
Keynote-Bejul Somaia — This is psychologically and strategically insightful because it identifies the mental model that has shaped Indian entre…
S20
Keynote-N Chandrasekaran — “It is the age of abundant intelligence where the scarce resources are trust, stewardship, and human capability.”[39]. “…
S21
WS #208 Democratising Access to AI with Open Source LLMs — Abraham Fifi Selby: All right, thank you very much for the session, and I’m very happy to join this panel. I’m from th…
S22
Large Language Models on the Web: Anticipating the challenge | IGF 2023 WS #217 — By making models publicly accessible, flaws and issues can be identified and fixed by a diverse range of researchers, im…
S23
Keynote-Rishi Sunak — “And for just a few dollars a month, their rate of learning has doubled.”[40]. “These children are being provided with p…
S24
Education meets AI — Artificial intelligence has the potential to revolutionize education by offering personalized learning experiences to ev…
S25
https://dig.watch/event/india-ai-impact-summit-2026/driving-enterprise-impact-through-scalable-ai-adoption — And I think that’s what’s happening for a lot of learners, and that’s why traditional methods need to change. And we’re …
S26
Generative AI in Education — In conclusion, the summary underscores the need for a balanced integration of GAI in education, advocating for its use a…
S27
Inclusive AI For A Better World, Through Cross-Cultural And Multi-Generational Dialogue — Factors such as restricted access to computing resources and data further impede policy efficacy. Nevertheless, the cont…
S28
Enhancing rather than replacing humanity with AI — People’s judgment remains crucial, particularly for decisions that involve values, context, or individual circumstances.
S29
Main Session on Cybersecurity, Trust & Safety Online | IGF 2023 — Another argument put forth is the crucial involvement of stakeholders outside of government in cybercrime discussions. T…
S30
A Guide for Practitioners — – What are the current macroeconomic, political and social environments, and how do they relate to health? A thoro…
S31
BOOK LAUNCH: The law and politics of Global Competition — In regards to developing and least developed economies, the speakers raise a question regarding the approach these econo…
S32
The Gig Economy: Positioning Higher Education at the Center of the Future of Work (USAID Higher Education Learning Network) — A challenge faced by universities is the disconnect between the skills and knowledge they provide and the skills demande…
S33
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Panel Discussion Moderator Sidharth Madaan — Traditional education system faces challenges as students question value of expensive degrees
S34
Artificial intelligence (AI) – UN Security Council — Across different sessions, participants expressed concerns about the lack of transparency in AI algorithms, which can le…
S35
What is it about AI that we need to regulate? — The lack of transparency in AI systems was identified as a fundamental issue requiring regulation.Abel Pires da Silva fr…
S36
WS #189 AI Regulation Unveiled: Global Pioneering for a Safer World — Juliana Sakai: Hi everyone, thank you. So we have like right now the policy question three with the theme enhancing en…
S37
How Trust and Safety Drive Innovation and Sustainable Growth — Alexandra Reeve Givens: This insight identifies a critical gap in current regulatory approaches – that AI creates an ‘en…
S38
Tech Transformed Cybersecurity: AI’s Role in Securing the Future — The role of universities and educational institutions is also emphasized. It is noted that many universities still utili…
S39
Driving Enterprise Impact Through Scalable AI Adoption — Audience sentiment suggests a growing narrative that university degrees may no longer be necessary, highlighting a chall…
S40
INTRODUCTION — Given the increasing needs of the workforce for personnel with advanced digital competencies and the current gap in…
S41
Secure Finance Risk-Based AI Policy for the Banking Sector — Trust is built when systems are predictable, explainable, and accountable. Trust deepens when innovation aligns with pub…
S42
Artificial intelligence (AI) – UN Security Council — Algorithmic transparency is a critical topic discussed in various sessions, notably in the 9821st meeting of the AI Securi…
S43
Revisiting 10 AI and digital forecasts for 2025: Predictions and Reality — Workflow woes: Even if an AI model performs well in a lab, integrating it into a real-world radiology workflow is a whol…
S44
High-Level Session 3: Exploring Transparency and Explainability in AI: An Ethical Imperative — Amal El Fallah Seghrouchni: Thank you very much for the question. Yes, Morocco is Arabic-African. We have we are close t…
S45
The State of the — Survey respondents felt that the overall quality of aid workers in the field seemed to have improved overall, but not in…
S46
Policies and platforms in support of learning: towards more coherence, coordination and convergence — 35. The first and most meaningful observation that should be highlighted is that, despite general agreement on the princ…
S47
Young voices from Africa – Harnessing digital tools for sustainable trade — The lack of comprehensive understanding and data collection on the informal sector is identified as a major hindrance to…
S48
AI growth faces data shortage — The surge in AI, particularly with systems like ChatGPT, is facing a potential slowdown due to the impending depletion of …
S49
Keynote-N Chandrasekaran — “It is the age of abundant intelligence where the scarce resources are trust, stewardship, and human capability.”[39]. “…
S50
Upskilling for the AI era: Education’s next revolution — Doreen Bogdan Martin: Good afternoon, ladies and gentlemen. Yesterday morning on this very stage I spoke about skills. I…
S51
Inclusive AI For A Better World, Through Cross-Cultural And Multi-Generational Dialogue — Factors such as restricted access to computing resources and data further impede policy efficacy. Nevertheless, the cont…
S52
Open Forum #27 Make Your AI Greener a Workshop on Sustainable AI Solutions — Legal and regulatory | Sustainable development | Development — Reports consistently identify governance of artificial int…
S53
AI (and) education: Convergences between Chinese and European pedagogical practices — The irreplaceable importance of human emotional intelligence and mentorship. 1. Universities and teachers remain ess…
S54
The National Education Association approves AI policy to guide educators — The US National Education Association (NEA) Representative Assembly (RA) delegates have approved the NEA’s first policy st…
S55
Education meets AI — Artificial intelligence has the potential to revolutionize education by offering personalized learning experiences to ev…
S56
Empowering India & the Global South Through AI Literacy — Okay. So if you want to analyze the transformative bets, the major transformation that AI can bring into the classroom, …
S57
IGF 2024 Global Youth Summit — AI has the potential to tailor education to each student’s specific requirements. This personalization can enhance the l…
S58
AI & Child Rights: Implementing UNICEF Policy Guidance | IGF 2023 WS #469 — Steven: Thanks, Vicky. And good afternoon, everyone. Good morning to those online. It’s a pleasure to be here. So I’m a d…
S59
AI-generated ads face new disclosure rules in South Korea — South Korea will require advertisers to label AI-generated or AI-assisted advertising from early 2026, marking a shift in …
S60
Revitalising trust with AI: Boosting governance and public services — AI is reshaping public governance, offering innovative ways to enhance services and restore trust in institutions. The d…
S61
Day 0 Event #261 Navigating Ethical Dilemmas in AI-Generated Content — Human rights | Legal and regulatory | Sociocultural — Information Integrity and Human Rights Framework: There must be dis…
S62
Democratizing AI Building Trustworthy Systems for Everyone — I think that’s a fantastic question. I’m going to start with a very broad context and then narrow it down to that specif…
S63
Knowledge in the Age of AI: World Economic Forum Town Hall Discussion — The central question explored what becomes the scarcest resource when AI can provide instant answers to virtually any qu…
S64
Driving Enterprise Impact Through Scalable AI Adoption — Deep understanding and mastery are at risk as LLMs can fool people into thinking they understand when they don’t. Sustai…
S65
Defying Cognitive Atrophy in the Age of AI: A World Economic Forum Stakeholder Dialogue — But even those skills can be eroded without regular practice and engagement. Core cognitive capabilities, such as judgme…
S66
Keynote-N Chandrasekaran — “It is the age of abundant intelligence where the scarce resources are trust, stewardship, and human capability.”[39]. “…
S67
Education meets AI — Additionally, the speakers emphasized the need for personalized learning and adaptive teaching methods. They discussed t…
S68
IGF 2024 Global Youth Summit — AI has the potential to tailor education to each student’s specific requirements. This personalization can enhance the l…
S69
Why apprenticeship and storytelling are the future of learning in the AI Era — AI, through approaches such as apprenticeship models and storytelling, can help swing the ‘learning pendulum’ back. It c…
S70
Education, Inclusion, Literacy: Musts for Positive AI Future | IGF 2023 Launch / Award Event #27 — The use of Artificial Intelligence (AI) in education has both positive and negative impacts. On one hand, AI has the pot…
S71
Enhancing rather than replacing humanity with AI — People’s judgment remains crucial, particularly for decisions that involve values, context, or individual circumstances.
S72
THIRD SECTION — considers that what is essential for the protection of individuals’ rights in the context of the regime …
S73
One-Person Enterprise — Human oversight is still needed for important decisions
S74
Artificial intelligence and machine learning in armed conflict: A human-centred approach — Human control and judgement will be particularly important for tasks and decisions that can lead to injury or loss of li…
S75
UNSC meeting: Artificial intelligence, peace and security — Brazil: Thank you, Mr. President, Mr. President, dear colleagues. I thank the Secretary General for his briefing today an…
S76
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Panel Discussion Moderator Sidharth Madaan — Traditional education system faces challenges as students question value of expensive degrees
S77
The Gig Economy: Positioning Higher Education at the Center of the Future of Work (USAID Higher Education Learning Network) — Jennifer DeBoer has herself gone through traditional university education and holds multiple formal degrees. The readine…
S78
Artificial intelligence (AI) – UN Security Council — Across different sessions, participants expressed concerns about the lack of transparency in AI algorithms, which can le…
S79
Can we test for trust? The verification challenge in AI — Anja Kaspersen: Massively so. So let me, I’m just gonna rewind a little bit to our title of this session if you allow me…
S80
Artificial Intelligence & Emerging Tech — If a computer cannot explain its behavior, people will not trust it. Issues mentioned include transparency, explainabili…
S81
Workshop 6: Perception of AI Tools in Business Operations: Building Trustworthy and Rights-Respecting Technologies — Domenico Zipoli: Thank you very much. It’s always fascinating to be in a room with both stakeholders coming from compani…
S82
High-Level Session 3: Exploring Transparency and Explainability in AI: An Ethical Imperative — 2. Data privacy: Gong Ke highlighted data privacy concerns as a challenge for transparency. Amal El Fallah Seghrouchni:…
S83
WS #136 Leveraging Technology for Healthy Online Information Spaces — Nighat Dad: Yeah, no, thank you so much. Julia, I would, so I’ll talk a little bit about the UN Secretary General, HLAB,…
S84
WSIS Action Lines for Advancing the Achievement of SDGs | IGF 2023 Open Forum #5 — Interfaces exist between different diverging opinions. The diversity of perspectives in defining feminism was also ackno…
S85
Ethical principles for the use of AI in cybersecurity | IGF 2023 WS #33 — The panelists and online audience agreed on the equal importance of these ethical principles and called for further disc…
S86
World in Numbers: Risks / DAVOS 2025 — The speakers encouraged engagement with the report using the hashtag #WEF25 and mentioned that Channel 2 was available f…
S87
https://dig.watch/event/india-ai-impact-summit-2026/keynote-n-chandrasekaran — Finally, in conclusion, I just want to say that we are standing here at a very defining moment. It is the age of abundan…
S88
WS #231 Address Digital Funding Gaps in the Developing World — Online moderator: Yeah, sure. Thank you, Neeti. So we have an insight. One of the participants, Maarten, says that in ou…
S89
Filtered data not enough, LLMs can still learn unsafe behaviours — Large language models (LLMs) can inherit behavioural traits from other models, even when trained on seemingly unrelated d…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
H
Hugo Sarazen
12 arguments · 170 words per minute · 3459 words · 1214 seconds
Argument 1
Attention scarcity (Hugo) – The abundance of information creates a “poverty of attention,” making sustained human focus the most limited resource.
EXPLANATION
Hugo argues that the sheer volume of information available today overwhelms individuals, leading to a scarcity of sustained attention. This scarcity, he suggests, is a critical bottleneck for learning in the AI era.
EVIDENCE
He cites Herbert Simon’s observation that “when you have a wealth of information, you have a poverty of attention” and emphasizes that attention is a major component for learners today [40-43].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The discussion notes that sustained human attention is becoming scarce due to information wealth creating a poverty of attention [S1] and references scarcity thinking in the AI era [S19].
MAJOR DISCUSSION POINT
Attention scarcity
DISAGREED WITH
Aidan Gomez, Debbie Prentice
Argument 2
Adaptive AI tutoring (Hugo) – AI can deliver individualized, multimodal learning experiences, role‑play simulations, and real‑time feedback to keep learners engaged.
EXPLANATION
Hugo describes how AI can personalize learning pathways, adapt content to different learner modalities, and provide immediate feedback through simulations. This approach aims to maintain engagement and improve skill acquisition.
EVIDENCE
He explains Udemy’s pivot toward an AI platform for workforce reskilling, highlighting quick assessments, class segmentation, and feedback loops that enhance learning outcomes [94-107]; later he gives concrete role-play examples for sales and call-center training with scoring rubrics [199-207].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Personalized learning experiences enabled by AI are highlighted as a way to engage learners and adapt to modalities [S24]; examples of AI-driven virtual classrooms support role-play and interactive feedback [S11]; rapid personalization that doubles learning rates is also discussed [S23].
MAJOR DISCUSSION POINT
Adaptive AI tutoring
DISAGREED WITH
Aidan Gomez
Argument 3
Human storytelling & augmentation (Hugo) – Teachers remain irreplaceable storytellers; AI can augment their style and expertise but not replace the human connection.
EXPLANATION
Hugo emphasizes the unique role of teachers as storytellers who inspire learners, asserting that AI can only augment—not replace—their personal teaching style. This augmentation could extend a teacher’s influence to new subjects.
EVIDENCE
He recounts how teachers are memorable storytellers and suggests AI could replicate a favorite teacher’s voice and presentation style for different topics, enhancing learning without supplanting the teacher [184-188].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need to keep teachers as central, supportive figures while using AI as a tool is emphasized in discussions about AI not replacing the transmission of wisdom [S13].
MAJOR DISCUSSION POINT
Human storytelling & augmentation
Argument 4
Need for explainable AI (Hugo) – Reliance on black‑box answers erodes trust; specialized, transparent models and explainability research are required.
EXPLANATION
Hugo points out that AI systems often provide answers without revealing their reasoning, which undermines user trust. He calls for development of explainable models and dedicated research to restore confidence.
EVIDENCE
He notes that many LLMs give answers without indicating sources, creating a trust issue, and later stresses the necessity for explainable AI, specialized trusted models, and research on model transparency [46-47][340-353].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Calls for transparent, “glass-box” AI systems that reveal data sources and training provenance are made in trusted-AI discussions [S15]; reliability, safety, and verifiability of AI outputs are also stressed [S16].
MAJOR DISCUSSION POINT
Explainable AI
DISAGREED WITH
Aidan Gomez
Argument 5
ROI ambiguity and adaptive learning (Hugo) – Companies struggle to measure learning ROI; AI enables bite‑size, in‑flow, skill‑tracking solutions that align with business outcomes.
EXPLANATION
Hugo reports that enterprises find it difficult to quantify the return on investment of learning initiatives. He proposes AI-driven, bite‑size, adaptive learning that can be measured in real time to better align with business goals.
EVIDENCE
He references conversations with 400 CHROs revealing a proliferation of tools with unclear ROI, and describes AI’s ability to deliver bite-size, adaptive, in-flow learning with real-time skill tracking [128-140].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The difficulty of demonstrating learning ROI and the shift toward adaptive, bite-size learning measured in real time are highlighted [S1]; Bloom’s two-sigma problem and its relevance to ROI are discussed [S17]; AI-driven personalized learning that improves outcomes is noted [S23].
MAJOR DISCUSSION POINT
Learning ROI and adaptive solutions
Argument 6
Degree as a societal bundle (Hugo) – Traditional degrees combine credential, rite of passage, and research; AI‑driven economics may prompt a re‑evaluation and possible unbundling of these components.
EXPLANATION
Hugo characterizes university degrees as a societal bundle that includes certification, a rite of passage, and research output. He suggests that AI’s impact on education economics could lead to reconsidering or unbundling these elements.
EVIDENCE
He explains that a degree bundles credential, rite of passage, and research, noting that AI may force a re-evaluation of this structure and potentially lead to unbundling [408-416].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The degree is described as a “convenient bundle” of credential, rite of passage, and research, with AI prompting reconsideration of this structure [S1]; further analysis of the bundle appears in the Knowledge in the Age of AI discussion [S17].
MAJOR DISCUSSION POINT
Future of degree bundles
DISAGREED WITH
Debbie Prentice
Argument 7
Historical knowledge scarcity conferred power, and AI is reshaping that dynamic.
EXPLANATION
Hugo notes that in earlier eras knowledge was scarce and a source of geopolitical power, with nations fighting over it. He contrasts this with today’s AI, which can make vast amounts of information widely accessible, altering the traditional power structures tied to knowledge.
EVIDENCE
He references a historical perspective where knowledge was scarce, served as a source of power, and few polymaths existed, highlighting the shift brought by LLMs that can learn everything and democratize knowledge [31-36][37-40].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The shift from scarce knowledge as a source of power to democratized AI-mediated knowledge is discussed in the panel on enterprise impact [S1] and in the context of scarcity thinking [S19].
MAJOR DISCUSSION POINT
Historical knowledge scarcity vs AI democratization
Argument 8
Large language models act as digital polymaths, democratizing access to knowledge across domains.
EXPLANATION
Hugo describes how modern LLMs can acquire expertise in multiple fields simultaneously, effectively becoming polymaths that anyone can query. This widespread availability reduces the exclusivity of expertise.
EVIDENCE
He states that LLMs can learn everything, become polymaths, and each new data center adds millions of such polymaths, leading to a democratization of knowledge [37-40].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Open-source LLMs are presented as a way to democratize AI access globally, especially for the Global South [S21]; the broader impact of publicly accessible models on knowledge democratization is explored [S22].
MAJOR DISCUSSION POINT
AI as a democratizing polymath
Argument 9
AI can deliver one‑on‑one tutoring comparable to Bloom’s two‑sigma effect, overcoming economic barriers.
EXPLANATION
Citing Bloom’s research on the superior outcomes of individualized coaching, Hugo argues that AI can provide similar personalized instruction at scale, offering the benefits of one‑on‑one tutoring without the prohibitive costs.
EVIDENCE
He mentions the Bloom two-sigma problem, noting that one-on-one coaching yields two-sigma higher learning, and explains that AI can replicate this personalized feedback loop at scale [149-156].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Bloom’s two-sigma problem is cited as evidence of the power of individualized tutoring, and AI is positioned as a scalable way to achieve similar gains [S17]; personalized lessons that double learning rates further support this claim [S23].
MAJOR DISCUSSION POINT
AI‑enabled personalized tutoring
Argument 10
AI can help learners formulate the right questions, a prerequisite for effective learning.
EXPLANATION
Hugo emphasizes that teaching students how to ask the correct question is essential, and AI tools can support this skill by prompting, providing feedback, and guiding inquiry, thereby strengthening critical thinking.
EVIDENCE
He asserts that education must teach how to ask the right question and that AI can aid in this process, highlighting the importance of question formulation for learning outcomes [236-239].
MAJOR DISCUSSION POINT
Question‑formulation support by AI
Argument 11
AI‑driven platforms can halve learning time by focusing on contextual understanding rather than rote memorization.
EXPLANATION
Referencing the Alpha School example, Hugo explains that AI can accelerate learning by emphasizing the context of information, allowing students to achieve mastery in a fraction of the traditional time.
EVIDENCE
He describes Alpha School’s use of AI to achieve learning in half the time by prioritizing contextual learning over isolated facts, illustrating a practical outcome of AI-enhanced education [242-250].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Evidence that AI can double learning speed by emphasizing context over memorization is provided in discussions of personalized, contextual learning [S23] and adaptive learning systems [S24].
MAJOR DISCUSSION POINT
Accelerated learning through contextual AI
Argument 12
Rapid labor‑market changes demand adaptable, bite‑size learning focused on specific skills rather than broad curricula.
EXPLANATION
Hugo argues that because skills become obsolete quickly, education must be flexible and deliver targeted, bite‑size modules that align with immediate business needs, moving away from one‑size‑fits‑all programs.
EVIDENCE
He notes the necessity for adaptability and skill-specific focus due to fast-moving labor markets, emphasizing the shift toward modular, real-time skill development [419-421].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need for modular, skill-specific learning to keep pace with fast-moving labor markets is highlighted in analyses of adaptive learning and the future of education [S24]; calls for rethinking traditional curricula appear in broader education reform commentary [S25].
MAJOR DISCUSSION POINT
Adaptability and skill‑specific learning
A
Aidan Gomez
9 arguments · 166 words per minute · 1973 words · 710 seconds
Argument 1
Deep‑mastery erosion (Aidan) – Rapid, surface‑level answers from LLMs foster a false sense of mastery, threatening deep understanding.
EXPLANATION
Aidan warns that LLMs provide quick, superficial answers that can give learners an illusion of mastery without true comprehension. This false confidence jeopardizes deep learning.
EVIDENCE
He describes how LLMs deliver brief responses that appear to answer complex questions, creating a false sense of mastery, and argues that testing without tools is essential to assess true depth of understanding [48-53].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Balanced integration of AI that safeguards deep learning and critical thinking is advocated in discussions of responsible AI use in education [S26].
MAJOR DISCUSSION POINT
Erosion of deep mastery
DISAGREED WITH
Hugo Sarazen, Debbie Prentice
Argument 2
Enterprise LLM deployment (Aidan) – Cohere equips organizations with secure, on‑premise models that let employees manage AI agents, shifting work from manual execution to AI‑augmented decision‑making.
EXPLANATION
Aidan outlines Cohere’s strategy of providing large language models that run within a client’s own infrastructure, ensuring data security while enabling employees to orchestrate AI agents for tasks, thereby augmenting human work.
EVIDENCE
He details Cohere’s development of core LLM technology, its focus on enterprise customers, and the security model that keeps data on-premise without leaving the customer’s perimeter [110-118].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The enterprise-centric AI model, emphasizing on-premise deployment for security, is described in the panel on scalable AI adoption [S1]; industry perspectives on AI reshaping professional work support this view [S14].
MAJOR DISCUSSION POINT
Secure enterprise LLMs
Argument 3
Human grounding & testing (Aidan) – Humans are the ultimate customers; AI is a tool that must be complemented by rigorous, tool‑free testing to ensure genuine competence.
EXPLANATION
Aidan stresses that while AI serves as a powerful assistant, humans remain the end‑users who must be evaluated without reliance on AI to verify true competence. Rigorous testing without AI is therefore crucial.
EVIDENCE
He states that humans are the customers, emphasizes the need for testing without AI tools to gauge real knowledge, and calls testing critical especially as AI can enable cheating [171-182].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The importance of testing without AI tools to assess true competence is emphasized in recommendations for assessment practices [S26]; the role of educators in validating AI-augmented learning is also noted [S13].
MAJOR DISCUSSION POINT
Testing without AI
DISAGREED WITH
Hugo Sarazen
Argument 4
Reasoning & citation mechanisms (Aidan) – New “reasoning” models with internal monologues and retrieval‑augmented generation can expose chains of thought and cite sources, improving auditability.
EXPLANATION
Aidan introduces advanced LLMs that generate internal reasoning steps before answering and can retrieve and cite external documents, making their outputs more transparent and verifiable.
EVIDENCE
He explains that modern models perform internal monologues, reason through problems, and use retrieval-augmented generation to cite sources such as Cambridge or Oxford libraries, enhancing auditability [374-383].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Calls for transparent, auditable AI outputs that reveal reasoning and sources are made in trusted-AI discussions [S15]; safety and verifiability concerns reinforce the need for such mechanisms [S16].
MAJOR DISCUSSION POINT
Reasoning and source citation
DISAGREED WITH
Hugo Sarazen
Argument 5
Curriculum‑market mismatch (Aidan) – Academic offerings often lag behind labor‑market needs; AI can accelerate creation of new programs but skill identification must come from humans.
EXPLANATION
Aidan highlights the gap between university curricula and the rapidly evolving skill demands of the labor market. He suggests AI can speed up program development, yet the identification of needed skills must be driven by human stakeholders.
EVIDENCE
He points out the mismatch between education and market demand, proposes AI to quickly create new courses, and notes that humans must first define the required skills [254-266].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The gap between university curricula and labor-market demand is highlighted, with AI proposed as a tool to speed program creation while human expertise defines skill needs [S24]; broader calls for curriculum reform appear in education change commentary [S25].
MAJOR DISCUSSION POINT
Education‑industry alignment
Argument 6
Calculator‑free assessment (Aidan) – The gold standard is testing without AI to gauge true retention, while also recognizing AI‑usage as a distinct skill to be evaluated.
EXPLANATION
Aidan argues that the most reliable assessment removes AI tools to measure what learners truly retain, but also acknowledges that proficiency in using AI is itself a skill that should be assessed separately.
EVIDENCE
He critiques existing AI-detectors, then asserts that testing without AI is the gold standard for measuring retention, while also noting that using AI effectively is a skill worth evaluating [321-334].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Recommendations for tool-free assessments as the gold standard, alongside evaluation of AI-usage competence, are discussed in responsible AI assessment guidelines [S26].
MAJOR DISCUSSION POINT
Assessment without AI tools
DISAGREED WITH
Hugo Sarazen
Argument 7
Enterprise focus over formal credentials (Aidan) – Cohere’s enterprise model emphasizes skill development within organizations rather than formal academic certification.
EXPLANATION
Aidan explains that Cohere’s business model targets corporate clients and critical industries, focusing on building employee capabilities directly rather than providing traditional academic degrees.
EVIDENCE
He mentions that Cohere serves critical sectors such as finance, telecom, healthcare, and education, emphasizing enterprise-centric skill development over formal credentials [120-121] (building on the earlier description of Cohere’s enterprise focus [110-118]).
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The shift toward enterprise-centric skill development over traditional degrees is noted in industry analyses of AI’s impact on professional training [S14]; personalized learning that doubles outcomes supports this enterprise focus [S23].
MAJOR DISCUSSION POINT
Enterprise‑centric skill building
Argument 8
Robust benchmarks are needed to evaluate the teaching effectiveness of AI models.
EXPLANATION
Aidan points out that while teaching with AI is a skill, there are currently no standardized metrics to assess model performance across subjects, calling for the creation of comprehensive benchmarks.
EVIDENCE
He states that teaching is a skill requiring benchmarks, notes the lack of existing teaching benchmarks, and mentions possible benchmark domains such as code, math, and biology [300-307].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The lack of standardized benchmarks for AI teaching effectiveness is identified, with calls for comprehensive evaluation frameworks in AI education research [S26].
MAJOR DISCUSSION POINT
Need for AI teaching benchmarks
Argument 9
The brittleness of early LLMs, which answered all queries with equal confidence, necessitates reasoning and internal monologue mechanisms.
EXPLANATION
Aidan explains that prior models were brittle, providing confident but sometimes incorrect answers regardless of question complexity, prompting the development of reasoning models that incorporate internal deliberation before responding.
EVIDENCE
He describes how models responded with the same effort to simple arithmetic and complex unsolved problems, revealing brittleness, and then introduces reasoning models with internal monologues to improve reliability [386-394].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Early model brittleness and the need for reasoning, internal monologue, and source citation to improve reliability are discussed in trusted-AI and safety literature [S15][S16].
MAJOR DISCUSSION POINT
Brittleness driving reasoning models
D
Debbie Prentice
7 arguments · 124 words per minute · 1283 words · 616 seconds
Argument 1
Critical‑thinking priority (Debbie) – Audience poll shows critical thinking is most valued; self‑knowledge and the ability to judge one’s own learning are essential.
EXPLANATION
Debbie notes that the live poll indicated critical thinking as the top choice among participants, underscoring its importance for self‑assessment and autonomous learning.
EVIDENCE
She reports that the audience’s preferred option was critical thinking, with sustained attention close behind, based on the poll results displayed during the session [65-69].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Critical thinking and literacy as essential competencies for a positive AI future are emphasized in youth summit and inclusion discussions [S10][S26].
MAJOR DISCUSSION POINT
Critical thinking as priority
DISAGREED WITH
Hugo Sarazen, Aidan Gomez
Argument 2
Attention‑boosting personalization (Debbie) – AI‑driven personalization can counter short‑attention spans by tailoring content to auditory, visual, or other learner preferences.
EXPLANATION
Debbie suggests that AI can mitigate declining attention spans by customizing learning material to match individual learner modalities, thereby keeping learners engaged longer.
EVIDENCE
She argues that AI can personalize experiences for auditory, visual, or other learners, making content more engaging and helping sustain attention [144-146].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
AI-enabled personalization that adapts to learner modalities to sustain attention is highlighted in adaptive learning research [S24] and virtual classroom innovations [S11].
MAJOR DISCUSSION POINT
Personalized attention
Argument 3
Human intervention necessity (Debbie) – Even with sophisticated technology, educators must intervene to guide, validate, and contextualize learning.
EXPLANATION
Debbie emphasizes that technology cannot replace the role of educators, who need to provide guidance, validation, and context to ensure meaningful learning outcomes.
EVIDENCE
She states that despite advanced tools, educators still need to step in to guide, validate, and contextualize learning experiences [169-170].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need for teacher support, professional development, and human guidance alongside AI tools is discussed in educator-centric AI integration studies [S12][S13].
MAJOR DISCUSSION POINT
Need for educator intervention
Argument 4
Expertise without visible work (Debbie) – Raises concern that future AI may provide answers without showing reasoning, challenging traditional notions of authority and verification.
EXPLANATION
Debbie questions how society will handle AI outputs that lack transparent reasoning or citations, which could undermine established methods of verifying expertise and authority.
EVIDENCE
She asks what to do in a world where AI provides answers without showing the work or citing sources, challenging the traditional reliance on libraries and expert verification [335-339].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Concerns about AI outputs lacking transparent reasoning and the call for “glass-box” AI to maintain authority are raised in trusted-AI discussions [S13][S15].
MAJOR DISCUSSION POINT
Opaque AI outputs
Argument 5
University’s broader mission (Debbie) – Universities prioritize critical thinking and deep mastery, offering value beyond immediate job skills despite industry pressure for applied knowledge.
EXPLANATION
Debbie defends the university role in fostering critical thinking and deep mastery, arguing that higher education provides broader societal value beyond immediate vocational training.
EVIDENCE
She argues that universities teach critical thinking and deep mastery, providing essential skills even if employers seek more applied knowledge, and positions this as a core university contribution [267-272].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The university’s role in fostering critical thinking and deep mastery, distinct from vocational training, is defended in higher-education commentary [S13][S25].
MAJOR DISCUSSION POINT
University mission vs. job skills
DISAGREED WITH
Hugo Sarazen
Argument 6
Detecting AI‑generated work (Debbie) – Current AI‑detectors are unreliable; better tools are needed to differentiate human from machine output in assessments.
EXPLANATION
Debbie highlights the inadequacy of existing AI‑detection tools and calls for improved mechanisms to reliably identify AI‑generated content in educational assessments.
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The unreliability of existing AI-detectors and the need for improved verification mechanisms are highlighted in trusted-AI and safety literature [S15][S16].
MAJOR DISCUSSION POINT
AI detection reliability
Argument 7
Universities’ enduring value (Debbie) – Despite cost pressures, universities deliver research, critical inquiry, and cultural functions that remain “gold” in society.
EXPLANATION
Debbie asserts that universities continue to provide essential research, critical inquiry, and cultural contributions, making them invaluable despite rising tuition costs.
EVIDENCE
She describes universities as delivering research, critical inquiry, and cultural functions, referring to them as “gold” in society [425-426].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The enduring societal value of universities in research, critical inquiry, and cultural contributions is emphasized in discussions of higher-education purpose [S13][S25].
MAJOR DISCUSSION POINT
Enduring value of universities
Audience
6 arguments · 164 words per minute · 886 words · 323 seconds
Argument 1
AI‑enabled technologies can boost learner motivation through interactive role‑play and gamified feedback loops.
EXPLANATION
An audience member asks how AI can address motivation, and Hugo later illustrates AI role‑play simulations and feedback mechanisms that make learning feel like a workout, keeping participants engaged.
EVIDENCE
The audience member (Anna) questions motivation and AI’s role [194-198]; Hugo responds with AI role-play examples for sales and call-center training, plus feedback loops that act like gym repetitions to reinforce learning [199-216].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Interactive, gamified AI learning experiences that increase motivation are described in adaptive learning and personalized feedback studies [S24][S23].
MAJOR DISCUSSION POINT
Motivation via AI‑driven interactive learning
Argument 2
Integrating AI into physical classrooms requires teaching critical thinking about AI use and a balanced policy on bans.
EXPLANATION
A participant from Australia raises concerns about AI’s role in classrooms and potential bans, prompting Aidan to argue that AI should be taught as a tool while emphasizing the need to prevent misuse and ensure critical engagement.
EVIDENCE
The audience member (Nathaniel) asks about AI’s role in physical classrooms and bans [217-222]; Aidan replies that AI must be part of schooling, taught as a tool, with safeguards against cheating [223-229].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Guidelines for integrating AI in classrooms while teaching critical AI literacy and establishing balanced policies are discussed in educator-focused AI integration literature [S12][S13].
MAJOR DISCUSSION POINT
AI in classrooms and policy balance
Argument 3
Education should focus on delivering applied knowledge that directly supports livelihoods across sectors, moving beyond traditional degree structures.
EXPLANATION
An audience speaker highlights the mismatch between academic offerings and job market needs, calling for curricula that provide practical, employable skills for all types of work rather than emphasizing degrees.
EVIDENCE
The audience member (Pranjal Sharma) describes the gap between skills demanded by industry and what academia offers, argues that degrees should be reconsidered, and emphasizes the need for applied knowledge that enables livelihood across white-, gray-, and blue-collar jobs [247-252].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Calls for applied, livelihood-focused curricula and a shift away from traditional degree-centric models appear in education reform commentary [S25][S24].
MAJOR DISCUSSION POINT
Shift toward applied, livelihood‑focused education
Argument 4
Reliable detection of AI‑generated work and balanced testing strategies are essential, combining tool‑free assessment with evaluation of AI‑assisted skills.
EXPLANATION
A CEO asks how to test with or without AI, drawing the calculator analogy, and questions the reliability of detectors; Aidan acknowledges the shortcomings of current detectors and stresses the importance of both strict, tool‑free testing and assessing competence in AI usage.
EVIDENCE
The audience member (Kian) raises concerns about AI detection tools and testing approaches [320-327]; Aidan discusses the high error rates of existing detectors, the need for better tools, and the value of testing without AI while also recognizing AI-usage as a skill [321-334].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The necessity of robust AI-detection tools and the combination of tool-free testing with assessment of AI-usage skills are highlighted in trusted-AI and assessment guidelines [S15][S16][S26].
MAJOR DISCUSSION POINT
AI detection and testing methodology
Argument 5
The emergence of AI polymaths raises existential concerns about human relevance, underscoring the need to preserve human agency.
EXPLANATION
An audience comment expresses a pessimistic view that AI’s polymath capabilities could render humans obsolete, highlighting the urgency of maintaining human decision‑making and oversight.
EVIDENCE
The audience participant remarks that AI’s polymath nature means humans cannot compete and suggests a bleak outlook for humanity [308-312].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Debates about AI democratizing knowledge while preserving human agency and the need for transparent, accountable AI systems are discussed in AI ethics and governance literature [S13][S21].
MAJOR DISCUSSION POINT
Human agency versus AI polymath
Argument 6
Training junior workers in critical thinking about AI outputs is crucial, as senior professionals can assess AI but juniors often cannot.
EXPLANATION
An audience member points out that senior staff can judge AI‑generated results, whereas junior employees lack the experience, emphasizing the need for capacity‑building programs that develop critical evaluation skills in the next generation.
EVIDENCE
The audience comment notes the gap in critical thinking abilities between senior professionals and junior staff, calling for training to bridge this divide [318-322].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The importance of building AI literacy and critical evaluation skills in junior staff, alongside educator support, is emphasized in professional development and AI education research [S12][S26].
MAJOR DISCUSSION POINT
Building critical AI literacy in junior workforce
Agreements
Agreement Points
Attention scarcity and the need for AI‑driven personalization to sustain it
Speakers: Hugo Sarazen, Debbie Prentice, Aidan Gomez
Attention scarcity (Hugo)
Attention‑boosting personalization (Debbie)
All three speakers note that the abundance of information creates a poverty of attention, making sustained human focus a scarce resource, and suggest that AI-enabled personalization can help keep learners engaged [40-43][144-146][143].
Critical thinking is essential for navigating AI‑generated information
Speakers: Debbie Prentice, Hugo Sarazen
Critical‑thinking priority (Debbie)
Human storytelling & augmentation (Hugo)
Debbie highlights that the audience prioritized critical thinking, and Hugo stresses teaching learners how to ask the right questions and apply critical judgment, indicating shared emphasis on critical thinking as a key competency [65-69][236-239].
AI outputs must be explainable and trustworthy
Speakers: Hugo Sarazen, Aidan Gomez, Debbie Prentice
Need for explainable AI (Hugo)
Reasoning & citation mechanisms (Aidan)
Expertise without visible work (Debbie)
The panel agrees that AI systems that provide answers without showing reasoning or sources erode trust; they call for models that expose internal reasoning, cite references, and improve auditability [46-47][340-353][374-383][386-394][335-339].
POLICY CONTEXT (KNOWLEDGE BASE)
This aligns with emerging AI governance frameworks that stress explainability, accountability, and trust, as highlighted in UN AI Security Council discussions on algorithmic transparency [S42] and India’s Secure Finance Risk-Based AI Policy emphasizing predictable, explainable systems [S41]. Ethical AI sessions also call for transparent, trustworthy tools [S44], and broader initiatives aim to revitalize trust in public services through AI governance [S60].
Human educators remain indispensable and should be augmented, not replaced, by AI
Speakers: Hugo Sarazen, Debbie Prentice, Aidan Gomez
Human storytelling & augmentation (Hugo)
Human intervention necessity (Debbie)
Human grounding & testing (Aidan)
All speakers assert that teachers’ storytelling and human guidance are crucial; AI can augment but not supplant the educator’s role in guiding, validating, and contextualizing learning [184-188][169-170][171-176].
POLICY CONTEXT (KNOWLEDGE BASE)
Policy statements from UNESCO and the NEA underscore that teachers should transition to coaching roles while AI serves as a supportive augmentative tool [S53][S54]. UNICEF’s AI policy guidance further stresses the need for human oversight to protect children’s rights when integrating AI in education [S58].
AI can accelerate learning and provide personalized, one‑on‑one tutoring comparable to high‑impact human coaching
Speakers: Hugo Sarazen, Aidan Gomez
AI can deliver one‑on‑one tutoring comparable to Bloom’s two‑sigma effect (Hugo)
Reasoning & citation mechanisms (Aidan)
Hugo cites the Alpha School example where AI halves learning time, while Aidan notes AI can speed creation of new programs and provide reasoning capabilities, together indicating AI’s potential to dramatically accelerate learning [242-250][254-266][374-383].
POLICY CONTEXT (KNOWLEDGE BASE)
Research collaborations such as those with Stanford demonstrate adaptive learning systems that deliver personalized tutoring at scale [S55]. The IGF Youth Summit also notes AI’s capacity to tailor education to individual student needs, enhancing learning outcomes [S57].
Similar Viewpoints
Both argue that modern LLMs need internal reasoning steps and source citation to become trustworthy and auditable tools [340-353][374-383].
Speakers: Hugo Sarazen, Aidan Gomez
Need for explainable AI (Hugo)
Reasoning & citation mechanisms (Aidan)
Both describe enterprise‑focused AI solutions that embed models within organizations to deliver personalized, secure, and scalable learning experiences for workforce reskilling [94-107][110-118].
Speakers: Hugo Sarazen, Aidan Gomez
Adaptive AI tutoring (Hugo)
Enterprise LLM deployment (Aidan)
Both stress that educators must remain in the loop to provide guidance, motivation, and validation, with AI serving as a supportive tool rather than a replacement [184-188][169-170].
Speakers: Hugo Sarazen, Debbie Prentice
Human storytelling & augmentation (Hugo)
Human intervention necessity (Debbie)
Unexpected Consensus
Attention as the scarcest resource despite differing sectoral perspectives
Speakers: Hugo Sarazen, Debbie Prentice
Attention scarcity (Hugo)
Attention‑boosting personalization (Debbie)
Although Hugo represents a for-profit ed-tech firm and Debbie a non-profit university, both converge on the view that sustained human attention is the most limited resource in the AI era and that personalization is the remedy, a convergence not anticipated given their institutional differences [40-43][144-146].
Overall Assessment

The discussion reveals strong convergence on four main fronts: (1) attention scarcity and the role of AI personalization; (2) the centrality of critical thinking; (3) the necessity for explainable, trustworthy AI; (4) the enduring, augmentable role of human educators. Additionally, both speakers see AI as a catalyst for faster, individualized learning and for enterprise‑level workforce development.

High consensus across speakers on the challenges posed by AI (attention, trust, critical thinking) and on AI‑enabled solutions (personalization, adaptive tutoring, explainability). This broad agreement suggests a shared understanding that future education policies must balance AI integration with human oversight, prioritize critical thinking, and invest in transparent, learner‑centric AI systems.

Differences
Different Viewpoints
What is the scarcest or most threatened resource for learning in the AI era?
Speakers: Hugo Sarazen, Aidan Gomez, Debbie Prentice
Attention scarcity (Hugo) – The abundance of information creates a “poverty of attention,” making sustained human focus the most limited resource.
Deep‑mastery erosion (Aidan) – Rapid, surface‑level answers from LLMs foster a false sense of mastery, threatening deep understanding.
Critical‑thinking priority (Debbie) – Audience poll shows critical thinking is most valued; self‑knowledge and the ability to judge one’s own learning are essential.
Hugo argues that sustained human attention is the bottleneck due to information overload [40-43]. Aidan counters that the real danger is a superficial sense of mastery caused by quick LLM answers, eroding deep understanding [48-53]. Debbie notes that the audience prioritized critical thinking, implying that the ability to evaluate one’s own learning may be the most needed skill [65-69].
POLICY CONTEXT (KNOWLEDGE BASE)
AI policy forums have identified trust, stewardship, and human capability as the most scarce resources in the AI decade, echoing concerns about limited attention and capability [S49]. UN learning policy discussions also highlight strategic allocation of learning resources as a critical challenge [S46].
How to restore trust and transparency in AI‑generated answers?
Speakers: Hugo Sarazen, Aidan Gomez
Need for explainable AI (Hugo) – Reliance on black‑box answers erodes trust; specialized, transparent models and explainability research are required.
Reasoning & citation mechanisms (Aidan) – New “reasoning” models with internal monologues and retrieval‑augmented generation can expose chains of thought and cite sources, improving auditability.
Hugo calls for dedicated research on explainable, specialized models and mechanisms to show sources, stressing that black-box answers undermine trust [46-47][340-353]. Aidan proposes technical solutions: reasoning steps and retrieval-augmented generation that reveal the model’s chain of thought and citations, thereby increasing auditability [374-383][386-394].
POLICY CONTEXT (KNOWLEDGE BASE)
Multiple governance initiatives call for algorithmic transparency, rigorous testing, and explainability to rebuild trust, as seen in UN AI Security Council recommendations [S42], India’s AI trust and accountability framework [S41], and dedicated sessions on transparency and explainability [S44]. Recent efforts to revitalize public trust focus on AI governance, disclosure, and oversight mechanisms [S60][S61][S62].
Future role of the university degree versus AI‑driven online credentials.
Speakers: Hugo Sarazen, Debbie Prentice
Degree as a societal bundle (Hugo) – Traditional degrees combine credential, rite of passage, and research; AI‑driven economics may prompt a re‑evaluation and possible unbundling of these components.
University’s broader mission (Debbie) – Universities prioritize critical thinking and deep mastery, offering value beyond immediate job skills despite industry pressure for applied knowledge.
Hugo suggests that AI could force a re-thinking of the bundled university degree, potentially unbundling credential, rite of passage, and research functions [408-416]. Debbie defends the university’s broader mission, emphasizing its role in fostering critical thinking, deep mastery, and research, which she views as “gold” for society [425-426][267-272].
POLICY CONTEXT (KNOWLEDGE BASE)
Industry analyses note a growing narrative that traditional university degrees may become less essential, challenging higher-education institutions to demonstrate unique value [S39]. Concurrently, calls for universities to modernize curricula and strengthen AI-focused degree programs are documented [S38][S40], while policy discussions stress the need for higher-education to adapt amid AI-driven credentialing trends [S53].
Approach to assessment: tool‑free testing versus AI‑driven evaluation.
Speakers: Aidan Gomez, Hugo Sarazen
Human grounding & testing (Aidan) – Humans are the ultimate customers; AI is a tool that must be complemented by rigorous, tool‑free testing to ensure genuine competence.
Calculator‑free assessment (Aidan) – The gold standard is testing without AI to gauge true retention, while also recognizing AI usage as a distinct skill to be evaluated.
Adaptive AI tutoring (Hugo) – AI can deliver individualized, multimodal learning experiences, role‑play simulations, and real‑time feedback to keep learners engaged.
Aidan stresses that the most reliable assessment removes AI tools to measure true knowledge retention and that testing without AI should be the gold standard, while also acknowledging AI-usage as a skill [171-182][321-334]. Hugo promotes AI-enabled quick assessments, class segmentation, and feedback loops as a way to personalize and accelerate learning, implying AI can be part of the assessment process [104-107][199-207].
Unexpected Differences
Optimism about AI as a democratizing polymath versus concern over false mastery.
Speakers: Hugo Sarazen, Aidan Gomez
Attention scarcity (Hugo) – The abundance of information creates a “poverty of attention,” making sustained human focus the most limited resource.
Deep‑mastery erosion (Aidan) – Rapid, surface‑level answers from LLMs foster a false sense of mastery, threatening deep understanding.
Both speakers come from AI-focused companies, yet Hugo is upbeat about AI’s ability to democratize knowledge and solve attention problems, while Aidan warns that AI may give learners a misleading sense of mastery, undermining deep learning. The contrast between optimism about AI’s benefits and caution about its superficial impact was not anticipated given their similar industry backgrounds [40-43][48-53].
Different views on the primary bottleneck for learning—attention versus deep mastery.
Speakers: Hugo Sarazen, Aidan Gomez
Attention scarcity (Hugo) – The abundance of information creates a “poverty of attention,” making sustained human focus the most limited resource.
Deep‑mastery erosion (Aidan) – Rapid, surface‑level answers from LLMs foster a false sense of mastery, threatening deep understanding.
While both discuss challenges posed by AI, Hugo identifies attention as the scarce resource, whereas Aidan points to the erosion of deep mastery as the core problem. The divergence in diagnosing the primary learning bottleneck was unexpected given the shared context of AI-enhanced education [40-43][48-53].
Overall Assessment

The panel reveals substantive disagreements on which learning resource is most endangered (attention vs deep mastery vs critical thinking), how to secure trust in AI outputs (explainability research vs technical reasoning/citation), the future of the university degree bundle, and the proper role of AI in assessment. While all participants agree AI will reshape education, they diverge on priorities and implementation pathways.

Moderate to high disagreement: the speakers share a common recognition of AI’s transformative potential but differ sharply on strategic focus areas, indicating that consensus on policy and practice will require careful negotiation across academia, industry, and education providers.

Partial Agreements
Both agree that AI should be leveraged to improve learning outcomes, but Hugo emphasizes personalization through tutoring and simulations, whereas Aidan focuses on technical enhancements (reasoning, citation) to make AI outputs trustworthy. Their shared goal is better learning, but the pathways differ [104-107][199-207][374-383].
Speakers: Hugo Sarazen, Aidan Gomez
Adaptive AI tutoring (Hugo) – AI can deliver individualized, multimodal learning experiences, role‑play simulations, and real‑time feedback to keep learners engaged.
Reasoning & citation mechanisms (Aidan) – New “reasoning” models with internal monologues and retrieval‑augmented generation can expose chains of thought and cite sources, improving auditability.
Both see critical thinking (and the ability to evaluate information) as essential. Hugo links it to the need for explainable AI to support critical evaluation, while Debbie highlights the audience’s preference for critical thinking as a skill to be cultivated. They converge on the importance of critical evaluation but differ on whether the focus should be on AI transparency or pedagogical emphasis [340-353][65-69].
Speakers: Hugo Sarazen, Debbie Prentice
Need for explainable AI (Hugo) – Reliance on black‑box answers erodes trust; specialized, transparent models and explainability research are required.
Critical‑thinking priority (Debbie) – Audience poll shows critical thinking is most valued; self‑knowledge and the ability to judge one’s own learning are essential.
Takeaways
Key takeaways
In the AI era, human cognitive resources—especially sustained attention, deep mastery, and critical thinking—are becoming scarcer than information itself.
Audience consensus places critical thinking as the most valued skill, followed closely by sustained attention.
LLMs provide rapid, surface‑level answers that can create a false sense of mastery, threatening deep understanding.
AI can enable highly personalized, multimodal learning experiences (adaptive tutoring, role‑play simulations, real‑time feedback) that may help mitigate attention deficits.
Enterprise‑focused AI (Cohere) emphasizes secure, on‑premise deployment and shifting workers from manual execution to managing AI agents.
Human teachers remain essential as storytellers and mentors; AI should augment, not replace, the human connection and pedagogical judgment.
Trust and explainability are critical; current black‑box LLM outputs erode confidence, prompting calls for reasoning models, internal monologues, and retrieval‑augmented generation with citations.
There is a pronounced gap between academic curricula and rapidly evolving industry skill demands; AI can accelerate program creation, but skill identification must be driven by humans.
Assessment strategies need a dual approach: tool‑free testing to verify true retention and AI‑enhanced simulations for competency verification; existing AI detectors are unreliable.
The traditional university degree is a societal bundle (credential, rite of passage, research) that may be reconsidered or unbundled as AI lowers delivery costs, yet universities retain unique value in research and critical inquiry.
Resolutions and action items
Develop and deploy AI‑driven adaptive tutoring and role‑play assessment tools to personalize learning and provide immediate feedback (suggested by Hugo).
Invest in explainable‑AI research, including reasoning models with internal monologues and retrieval‑augmented generation that can cite sources (suggested by Aidan).
Create more robust mechanisms for detecting AI‑generated content, to support educators (Aidan).
Implement testing regimes that include both AI‑free assessments for core knowledge retention and AI‑in‑the‑loop assessments for tool proficiency (Aidan).
Align enterprise learning platforms with real‑time skill tracking and ROI measurement to demonstrate business impact (Hugo).
Identify emerging labor‑market skill needs through human‑led analysis to guide AI‑generated curriculum development (Aidan).
Maintain human teacher involvement to augment AI outputs, preserving storytelling and mentorship (Hugo).
Unresolved issues
How to reliably ensure trust and explainability of AI answers at scale, especially in high‑stakes contexts.
Standardized methods for measuring learning ROI that go beyond completion metrics.
Effective ways to integrate AI into physical classrooms while preventing misuse or over‑reliance (the debate over bans versus adoption).
Long‑term implications of unbundling the university degree and how credentialing will evolve.
Scalable, accurate detection of AI‑generated work that is not prone to false positives or false negatives.
Clear frameworks for balancing AI‑augmented learning with the development of deep, disciplined mastery.
Suggested compromises
Combine AI personalization with human storytelling and mentorship, using AI to augment rather than replace teachers.
Adopt a dual assessment model: retain traditional, tool‑free exams for core competence while adding AI‑enhanced simulations for applied skills.
Use AI for rapid curriculum creation and skill mapping, but keep human experts responsible for defining market‑relevant skill sets.
Maintain the degree bundle for its cultural and research value while allowing modular, AI‑driven skill certifications for specific job needs.
Thought Provoking Comments
When you have a wealth of information, you have a poverty of attention.
He invoked Herbert Simon’s classic insight to frame attention as the scarcest resource in the AI era, shifting the focus from data abundance to human cognitive limits.
This remark redirected the conversation from what knowledge is scarce to how learners cope with overload. It prompted Hugo and Aidan to discuss attention‑related challenges, leading to later dialogue about AI‑driven personalization as a possible remedy.
Speaker: Hugo Sarazen
LLMs can fool you into thinking you understand something when you don’t… testing is essential – you need to take the tool away to see what the human alone retains.
He identified a core risk of AI‑augmented education: a false sense of deep mastery, and proposed rigorous, tool‑free assessment as a safeguard.
This sparked a deeper debate on assessment, influencing both panelists to stress the importance of testing (Aidan on strict testing regimes, Hugo on AI‑augmented feedback) and setting up later discussion about measuring AI‑generated learning outcomes.
Speaker: Aidan Gomez
The Bloom two‑sigma problem shows one‑on‑one tutoring can lift learning outcomes by two standard deviations, but economics prevented scaling – AI can now provide that personalized coaching at scale.
He connected a well‑known educational research finding to current AI capabilities, suggesting a concrete way AI could overcome historic scalability limits.
This comment opened a new thread about AI‑driven adaptive learning and role‑play simulations, leading to audience questions on motivation and concrete examples of AI‑based practice environments.
Speaker: Hugo Sarazen
Modern models now have an internal monologue – a reasoning step – and can be coupled with retrieval‑augmented generation to cite sources, giving auditability and explainability.
He introduced the technical evolution from pure input‑output LLMs to reasoning and RAG architectures, directly addressing concerns about trust and explainability raised earlier.
This shifted the discussion from philosophical concerns to concrete technical solutions, prompting Hugo to elaborate on the need for specialized, trusted models and influencing the later debate on “front‑end” vs. “back‑end” AI capabilities.
Speaker: Aidan Gomez
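The reasoning‑plus‑retrieval mechanism summarized above can be made concrete with a toy sketch. This is a hedged illustration only: the word‑overlap scoring, the function names, and the corpus entries are invented stand‑ins; a real deployment would retrieve via an embedding index and pass the retrieved passages to an LLM to produce a grounded, cited answer.

```python
# A minimal, illustrative sketch of the retrieval-augmented pattern Aidan
# describes: retrieve candidate sources first, then attach citations to the
# answer so it can be audited. Everything here is a hypothetical stand-in;
# a real system would use an embedding index and an LLM, not word overlap.

def retrieve(query, corpus, top_k=2):
    """Rank document IDs by naive word overlap with the query."""
    q_words = set(query.lower().split())
    ranked = sorted(
        corpus,
        key=lambda doc_id: len(q_words & set(corpus[doc_id].lower().split())),
        reverse=True,
    )
    return ranked[:top_k]

def answer_with_citations(query, corpus):
    """Compose an answer that carries source citations, making the output
    auditable rather than a black-box assertion."""
    sources = retrieve(query, corpus)
    cited = ", ".join("[{}]".format(s) for s in sources)
    # A real RAG pipeline would pass the retrieved passages to an LLM here
    # and ask it to ground its answer in them.
    return "Answer grounded in retrieved passages {}".format(cited)

corpus = {
    "S1": "Bloom two sigma study on one-on-one tutoring outcomes",
    "S2": "Herbert Simon on attention scarcity and information wealth",
    "S3": "survey of enterprise reskilling platforms",
}
print(answer_with_citations("what did the Bloom tutoring study find", corpus))
```

The design point the panel keeps returning to is the citation trail: because the answer carries source identifiers, a reader can audit where it came from instead of having to trust a black box.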
The university degree is a bundle – a convenient social contract – and AI may force us to unbundle its components (knowledge, credential, rite of passage).
He challenged the entrenched notion that a degree must remain a monolithic credential, opening space to reconsider higher‑education economics in the AI age.
This provoked a reflective response from Debbie about the broader role of universities beyond knowledge delivery, and set up the final segment comparing online platforms to elite colleges, influencing the audience’s perception of future education models.
Speaker: Hugo Sarazen
If we rely on black‑box AI that gives answers without reasoning, we lose agency and the ability to validate decisions – we must preserve human critical thinking.
He warned of societal risks of over‑reliance on opaque systems, emphasizing the need for human oversight and critical questioning.
This comment reinforced the earlier theme of critical thinking, leading the panel to stress teaching question‑asking skills, and it resonated with audience concerns about testing and trust, culminating in the discussion of explainability tools.
Speaker: Hugo Sarazen
Testing without the calculator is the gold standard; yet using the LLM is itself a skill that should be assessed with the tool in the loop.
He nuanced the earlier stance on tool‑free testing by recognizing AI proficiency as a legitimate competency, bridging the gap between pure knowledge assessment and AI‑augmented skill evaluation.
This nuanced view broadened the conversation about assessment strategies, influencing the audience’s follow‑up question on detecting AI‑generated work and prompting Aidan to discuss detection methods and the need for better testing frameworks.
Speaker: Aidan Gomez
Overall Assessment

The discussion’s trajectory was shaped by a handful of pivotal remarks that reframed the problem space. Hugo’s attention‑poverty observation and his reference to the Bloom two‑sigma study introduced human‑centric constraints and a concrete AI solution, steering the dialogue toward personalization. Aidan’s warnings about false mastery and his exposition of reasoning‑enabled, retrieval‑augmented models supplied both a problem statement and a technical answer, deepening the analysis of trust and explainability. Hugo’s challenge to the traditional degree bundle and his caution about losing agency to black‑box AI broadened the conversation from pedagogy to societal structures. Together, these comments sparked new sub‑topics (assessment design, AI‑driven tutoring, credential unbundling, explainability) and prompted participants and the audience to reconsider assumptions, thereby elevating the discussion from a surface‑level inventory of scarce resources to a nuanced exploration of how AI reshapes learning, evaluation, and the very purpose of higher education.

Follow-up Questions
How can AI-enabled technology help with learner motivation, especially when there is no human teacher in the loop?
Addresses the challenge of sustaining motivation in AI‑driven, largely automated learning environments, which is crucial for effective skill acquisition.
Speaker: Anna Van Niels (Audience)
What is the role of AI in physical classrooms, and how should we address arguments for banning versus not banning AI?
Seeks guidance on policy and pedagogical integration of AI in traditional school settings, a pressing issue given recent regulatory actions.
Speaker: Nathaniel (Audience)
How can we create applied knowledge resources that enable people across all job types to earn a livelihood?
Targets the mismatch between academic offerings and labor‑market needs, emphasizing the need for practical, employable knowledge for diverse occupations.
Speaker: Pranjal Sharma (Audience)
How can universities teach students to increase critical thinking to fact‑check, logically verify, scientifically evaluate, and ethically assess instant AI answers?
Highlights the risk of over‑reliance on AI outputs and the necessity of embedding robust critical‑thinking skills in higher education curricula.
Speaker: Debbie Prentice
How can we train junior employees to think critically about AI outputs, given that senior staff can draw on experience to judge them while juniors cannot yet?
Points to a future workforce gap where younger professionals may lack the experience to evaluate AI, underscoring the need for systematic upskilling.
Speaker: Audience member (unnamed)
What happens in a world of AI polymaths that provide answers without showing work or explaining reasoning, and how do we preserve expertise and authority?
Raises concerns about transparency, trust, and the erosion of expert authority when AI delivers unexplainable answers.
Speaker: Debbie Prentice
What is driving the need for reasoning capabilities in AI models?
Seeks to understand the motivations behind developing reasoning‑oriented models, which impacts model design and user trust.
Speaker: Debbie Prentice
What are the gaps between online education platforms (e.g., Udemy) and accredited elite colleges, and is there market demand for online models that emulate a traditional college experience?
Aims to identify functional differences and potential demand for hybrid or unbundled education models, informing strategic direction for both sectors.
Speaker: Audience member (unnamed)
Research on explainability of AI models to build trust and understand reasoning processes
Explainability is essential for users to validate AI outputs, ensure accountability, and maintain confidence in AI‑augmented learning.
Speaker: Hugo Sarazen
Develop methods to measure ROI of learning interventions and real‑time skill deployment in enterprises
Current tools lack clear ROI metrics; robust measurement is needed for organizations to justify and optimize learning investments.
Speaker: Hugo Sarazen
Create more reliable detection tools for AI‑generated text and improve accuracy of AI‑detectors
Existing detectors have high false‑positive/negative rates, hindering academic integrity and trust in AI‑generated content.
Speaker: Aidan Gomez
Establish teaching benchmarks for AI models to assess their effectiveness in educational tasks
A lack of standardized benchmarks makes it difficult to evaluate and compare AI tutoring systems, limiting their adoption.
Speaker: Aidan Gomez
Identify specific labor‑market skills needed to guide AI‑driven reskilling programs
Aligning AI‑enabled training with actual skill demand is critical to address the mismatch between education outputs and employer needs.
Speaker: Aidan Gomez
Develop evaluation methods for graduates’ critical‑thinking levels as they enter the workforce
Measuring critical‑thinking outcomes is necessary to ensure that education translates into effective workplace performance.
Speaker: Hugo Sarazen, Debbie Prentice
Research specialized, trusted AI models trained on curated expert data (RAG) for reliable answers
Specialized, retrieval‑augmented models could improve answer accuracy and trustworthiness compared to generic LLMs.
Speaker: Hugo Sarazen, Aidan Gomez
Investigate the impact of AI on attention spans and how personalized AI can mitigate distraction
Understanding and counteracting attention fragmentation is vital for effective learning in an AI‑rich environment.
Speaker: Aidan Gomez, Hugo Sarazen
Study the effectiveness of AI role‑play simulations for skill acquisition and learner motivation
AI‑driven simulations could provide immersive, feedback‑rich practice, but empirical evidence of their impact is needed.
Speaker: Hugo Sarazen
Explore ethical implications of reliance on AI without human agency, especially regarding decision‑making and trust
Ensuring human oversight and agency is essential to prevent over‑dependence on opaque AI systems.
Speaker: Hugo Sarazen

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.