Defying Cognitive Atrophy

22 Jan 2026 10:30h - 11:15h

Session at a glance

Summary

This World Economic Forum stakeholder dialogue, moderated by former UK Foreign Secretary William Hague, focused on “defying cognitive atrophy” in the age of artificial intelligence. The panel included Omar Abbosh (CEO of Pearson), Anna Frances Griffiths (Director of the Leverhulme Trust), and Carl Eschenbach (CEO of Workday), who discussed how to preserve and enhance human cognitive abilities as AI adoption accelerates.


The discussion centered on a fundamental paradox: while cognitive skills are becoming more economically valuable, humans are increasingly outsourcing these abilities to machines. Anna Griffiths highlighted three major risks in AI-enabled education systems: humans’ natural tendency to conserve energy and take shortcuts, declining student engagement due to fears about job displacement, and inadequate technology investment in public schools that could widen educational inequality. Omar Abbosh emphasized that learning requires effort and neuroplasticity, warning that outsourcing cognitive tasks without proper design could prevent skill development in both children and adults.


Carl Eschenbach presented a more optimistic vision, arguing that AI should shift from being something humans use to technology that works in the background, freeing people to focus on uniquely human skills like collaboration and networking. However, he acknowledged the need to change business narratives from pure cost-cutting to growth through human augmentation. The panelists agreed that age-appropriate implementation is crucial, with particular caution needed for young children whose brains are still developing critical neural pathways.


The discussion concluded that any positive AI future requires substantial investment in human development, with technology serving to enhance rather than replace human cognitive capabilities.


Key points

Major Discussion Points:

AI’s Impact on Cognitive Development in Children: The panel extensively discussed how AI use in education could lead to cognitive atrophy in young people, as children naturally take shortcuts and may not develop essential neural pathways if they rely too heavily on AI tools during critical developmental stages.


The Workplace Transformation Dilemma: A central debate emerged around whether AI will primarily serve as a cost-cutting automation tool that reduces human jobs, or whether businesses can shift the narrative toward using AI for growth and human augmentation, freeing workers to focus on higher-level social and creative skills.


Educational Equity and the Digital Divide: Significant concern was raised about AI creating greater inequality between well-resourced private schools that can effectively integrate AI tools and under-funded public schools that lack basic technology infrastructure, potentially widening existing educational gaps.


Age-Appropriate AI Implementation: The discussion highlighted the critical need for different approaches to AI integration based on age groups, with particular caution needed for elementary-age children versus teenagers and adults, drawing parallels to the harmful effects of social media on youth development.


The Need for Human-Centric Skills Investment: All panelists agreed that regardless of AI advancement, there must be deliberate investment in developing uniquely human capabilities like empathy, critical thinking, collaboration, and social skills that cannot be easily automated.


Overall Purpose:

The discussion aimed to address how society can prevent “cognitive atrophy” in the age of AI – essentially exploring strategies to ensure humans maintain and develop their cognitive abilities, critical thinking skills, and social capabilities even as AI systems become more prevalent in education and the workplace. The session was part of a broader “reskilling revolution” initiative targeting better education and economic opportunities for a billion people by 2030.


Overall Tone:

The discussion began with a cautiously optimistic tone, acknowledging both opportunities and risks. However, the tone became increasingly concerned and urgent as the conversation progressed, particularly after audience interventions from experts like Jonathan Haidt, who warned about the “greatest destruction of human capital in human history” from technology’s impact on children. The panel ultimately settled on a tone of determined pragmatism – recognizing that AI adoption is inevitable but emphasizing the critical importance of thoughtful, regulated implementation with strong safeguards for human development, especially for children.


Speakers

Speakers from the provided list:


William J. Hague – Former Foreign Secretary of the United Kingdom, Chancellor of the University of Oxford, Session Moderator


Anna Frances Griffiths (Vignoles) – Director of the Leverhulme Trust, Former Professor of Education


Carl Eschenbach – Chief Executive Officer of Workday


Omar Abbosh – Chief Executive Officer of Pearson


Audience – Multiple audience members who asked questions during the session


Additional speakers:


Mohammed Jalfar – Audience member from Kuwait


Jonathan Haidt – Author of “The Anxious Generation”


[Unnamed audience member] – Asked question about children and iPad usage before age seven


Full session report

Defying Cognitive Atrophy in the Age of AI: A World Economic Forum Stakeholder Dialogue

Executive Summary

This World Economic Forum stakeholder dialogue, moderated by William J. Hague (Chancellor of Oxford University and former UK Foreign Secretary), addressed a critical challenge of our technological era: how to preserve and enhance human cognitive abilities as artificial intelligence adoption accelerates. The session, part of the Centre for the New Economy and Society’s reskilling revolution aiming to reach a billion people by 2030, brought together Omar Abbosh (CEO of Pearson), Anna Frances Griffiths (Director of the Leverhulme Trust), and Carl Eschenbach (CEO of Workday).


As Hague explained in his opening, the session title prompted curious reactions: “this was the session I was moderating and they said, what on earth is that about, defying cognitive atrophy?” The discussion centered on a fundamental paradox: whilst cognitive skills are becoming increasingly economically valuable, humans are simultaneously outsourcing these abilities to machines at unprecedented rates.


The Central Challenge: Evolutionary Biology Meets Artificial Intelligence

Anna Frances Griffiths established the scientific foundation by framing the challenge through evolutionary biology: “We’re up against evolutionary biology. All species like to conserve energy. Humans are no different. Learning takes effort. Naturally, people will tend to take the easy route… But as a child, if you don’t get the chance to develop those cognitive skills, to develop those neural pathways, we’re in trouble.”


This biological imperative creates extraordinary challenges for AI implementation in education. Griffiths identified three major risks: humans’ natural tendency to conserve energy and take shortcuts, declining student engagement due to fears about job displacement, and inadequate technology investment in schools that could widen educational inequality.


The stakes are particularly high for children. As Griffiths emphasized: “You only get one shot at being five years old, right?” Children who rely too heavily on AI during critical developmental stages may fail to develop essential neural pathways, creating potentially irreversible cognitive deficits.


The Haidt Intervention: A Pivotal Warning

The discussion’s tone shifted dramatically when audience member Jonathan Haidt, author of “The Anxious Generation,” delivered a stark warning. He argued that “social media came into childhood and always on on your phone came into childhood and the result is the greatest destruction of human capital in human history.”


Haidt warned that AI poses an even greater threat: “when you hack the attachment system, when you have chatbots, when kids are developing relationships with AI before they’re 16 or 18 the results are likely to be devastating.”


This intervention created a pivotal moment, moving the discussion from cautious optimism to urgent concern. An anonymous audience member reinforced this perspective, arguing that “the child that doesn’t have the iPad before the age of seven, I guarantee that’s the child you’re going to want to hire. That child is the child developing the connection between the hand and the brain.”


Educational Implementation: Opportunities and Risks

Omar Abbosh presented evidence of AI’s potential benefits in education, citing Pearson’s experience with 10 million students using AI study tutors and tools like the Communications Coach developed with Microsoft. He noted that GED learners often find AI tutors less embarrassing than human interaction, potentially increasing engagement.


However, Abbosh acknowledged fundamental challenges. Traditional homework has become “100% cheatable,” requiring format changes in educational assessment. He advocated for age-appropriate implementation, suggesting AI could handle content delivery whilst classroom time focuses on assessment and collaboration.


The discussion revealed deep concerns about educational equity. Griffiths painted a stark picture: “Many schools lack basic internet connectivity and textbooks,” making advanced AI implementation unrealistic. She warned of “AI-enabled divides between private and state education” and highlighted affordability issues with basic educational materials.


World Economic Forum research, mentioned by Hague, showed that mathematical and statistical thinking skills are easily replaceable by AI, whilst empathy and active listening remain uniquely human capabilities.


Workplace Transformation: From Cost-Cutting to Human Augmentation

Carl Eschenbach presented the most optimistic workplace vision, arguing for a fundamental shift in AI implementation philosophy. “The narrative needs to shift from cost savings to growth,” he explained, emphasizing that productivity gains have been the greatest driver of corporate growth over decades.


Eschenbach advocated for moving from “humans using technology to technology working for humans in the background,” freeing people to focus on uniquely human skills like collaboration and creative problem-solving. He reframed workplace competition: “An employee is not competing against AI. They’re now going to be competing against their peers who are leveraging AI.”


Abbosh supported this vision, citing projections that augmenting people with AI could unlock $4.8-6.6 trillion in value in the US alone by 2034. However, moderator Hague expressed skepticism about whether businesses would actually reinvest AI savings in human development.


Age-Appropriate Implementation: The Unresolved Debate

The most contentious issue involved age-appropriate AI implementation. All speakers agreed children’s developing brains require special protection, but disagreed on specifics.


Abbosh emphasized that “age-appropriate implementation is crucial, with different approaches needed for children versus adults,” arguing children need technology exposure to avoid workforce exclusion whilst protecting developmental needs.


However, the Haidt intervention and anonymous audience member’s comments suggested much more restrictive approaches might be necessary. This tension between preparing children for an AI-enabled future whilst protecting cognitive development remained unresolved, highlighting the need for clearer age-specific guidelines.


Corporate Responsibility and Regulatory Frameworks

Mohammed Jalfar’s question from Kuwait introduced innovative policy concepts, asking whether AI developers should invest in ensuring “the original brain, at the age of five, when it’s most vulnerable, continues to be built up through sleep, through proper nutrition, through interaction, through exercise.”


The suggestion pointed toward a potential global fund or taxation mechanism that would make AI developers financially responsible for protecting human cognitive development. Griffiths strongly supported regulatory intervention, arguing that “safety regulation is needed for classroom AI implementation, particularly for young children,” explicitly rejecting voluntary approaches.


Skills for an AI-Enabled Future

The speakers agreed that future workforce requirements will fundamentally change as AI handles increasingly sophisticated tasks. Eschenbach predicted the “future workforce will become more generalist as AI handles specialised tasks like software development and legal contract work.”


This necessitates focusing on three key areas: foundational skills including mathematical literacy and critical thinking, durable technical skills that complement AI, and human-centric capabilities like empathy and collaboration.


Abbosh noted that students are losing motivation because they “believe AI will eliminate high-skilled jobs,” requiring narrative changes about uncertain futures and the continued value of human capabilities.


Conclusion: Critical Choices Ahead

The discussion concluded with broad agreement that successful AI integration requires coordinated action across multiple domains. Educational institutions must redesign curricula, businesses must shift from cost-cutting to augmentation models, and policymakers must develop appropriate regulatory frameworks.


Hague referenced Erik Brynjolfsson’s observation that “the next decade could be the best ever, or it could be the worst,” emphasizing the critical nature of current decisions. The speakers stressed that there is no positive AI future without a strong focus on human development.


The dialogue revealed that whilst AI presents unprecedented challenges to human cognitive development, thoughtful implementation can potentially enhance rather than diminish human capabilities. However, achieving this outcome requires deliberate effort, substantial investment, and recognition that the choices made today about AI implementation—particularly regarding children—will have profound and potentially irreversible consequences.


The window for making these critical decisions may be narrowing, making evidence-based approaches more urgent than ever. As the discussion made clear, the stakes involve nothing less than the preservation and enhancement of human cognitive capabilities in an age of artificial intelligence.


Session transcript

William J. Hague

Well, welcome everybody to this stakeholder dialogue on defying cognitive atrophy. Thank you for joining us. There are some people who enthusiastically went along to the Board of Peace signing and now can’t escape from there.

But congratulations to those who have done so. Thank you for joining us. This is, I think, one of the most important subjects of the coming years.

I told someone on the way here that this was the session I was moderating and they said, what on earth is that about, defying cognitive atrophy? But I think it will become one of the big topics of the next decade because it’s about really the future of the human mind in the age of artificial intelligence. And I am joined by a great panel here.

I am William Hague, the former Foreign Secretary of the United Kingdom and now Chancellor of the University of Oxford. And I am joined here by three expert colleagues: by Omar Abbosh, who is Chief Executive Officer of Pearson; by Anna Frances Griffiths (Vignoles), who is the Director of the Leverhulme Trust; and by Carl Eschenbach, who is the Chief Executive Officer of Workday.

Now, as I say, this is a vital topic for the coming years. This session is connected to the Centre for the New Economy and Society and its flagship reskilling revolution, which aims to reach a billion people by 2030 with better education and skills and economic opportunity.

I want to remind the online audience that if they’re sharing about us through their social channels, they should use the hashtag WEF26. We’re entering a paradoxical moment in human history. As AI adoption accelerates, cognitive skills are becoming more economically valuable.

Yet we are outsourcing more of those than ever before. So the question for today is how do we ensure people retain the ability to question and reason and judge? in a world where machines will increasingly be thinking for us.

The World Economic Forum’s Future of Jobs report shows that analytical thinking remains the most sought-after core skill among employers. But the latest forum research also shows that human-centric skills are unlikely to be automated due to AI, but they can be quietly eroded. So if you look at the slide that is displayed there, that is the research that shows that mathematical and statistical thinking is easy for AI to replace, but not empathy or active listening, for instance.

But even those skills can be eroded without regular practice and engagement. Core cognitive capabilities, such as judgment and critical thinking, deteriorate over time. And then there’s the issue of whether AI will widen the gap between learners who are taught to question, guide and critically evaluate AI and those who merely consume its output.

Will we divide as humanity between people whose mental faculties are leveraged and enhanced by AI and others for whom it is reduced? So then we have to work out, if this is a problem, how we redesign education, technology, workforce systems for the future to strengthen human cognition. So this small subject is what we are engaged with today.

And let me turn to Anna, first of all, and ask you to think about what do we know about the current situation? It’s a fast-moving situation. We’re short of data on this.

But what do you think we know? What does the latest research tell us about AI’s impact on cognitive development?

Anna Frances Griffiths (Vignoles)

So before we get to the impact on cognitive development, we need to really start with what are the risks around a fully AI-enabled education system? Because I think that also leads into, you know, what are we not ready for? So taking the potential of AI as given, there are three problems, I think.

We’re up against evolutionary biology. All species like to conserve energy. Humans are no different.

Learning takes effort. Naturally, people will tend to take the easy route, as we all know. When you’re an adult, that might be fine.

You’re outsourcing your cognitive thinking, as you said. But as a child, if you don’t get the chance to develop those cognitive skills, to develop those neural pathways, we’re in trouble. And I think whatever we do in the AI space has to really be mindful of that, that children will take that shortcut.

So AI in education needs to prompt more effort. And if you think about what it’s doing in the workplace, in a way, it’s designed to reduce effort, right? So it’s a very different way of thinking about it.

The second major issue, I think, is the lack of purpose that we’re seeing in schools. Pupil engagement, as any teacher in the room will know, is a massive issue at the moment, for lots of reasons, the attention economy, other things in their lives. The big one, though, is we painted this picture of an AI-enabled future that doesn’t have many high-skilled jobs.

They’re thinking that their jobs will be very much about manual or low skill, and that’s a really demotivating thing to be thinking. So we need to change the narrative on it. And if I could, I would say that that narrative needs to be more.

The future is incredibly uncertain. And so you need every skill you can get your hands on. And some of those listed on the board, obviously, the human-centric skills are vital, but the cognitive skills need to be developed too.

We need to make sure our AI systems do that. And then the third, and very briefly, when we’re thinking about what does the evidence show on the system as it stands, we have never, ever had enough investment in public education systems, state-funded education systems, in technology.

I mean, teachers would be grateful if they could get the internet to work, the technology to work, and access tech in an affordable way, and they can’t. So I think there’s a real challenge there, and it speaks to your point earlier, which is there will be schools and individuals using AI tools to fantastic effect. But if you think about those state schools that are not going to use that, you’re getting that divide that you referred to.

So leaving people behind in the equity consequences of some schools and some people being AI-enabled is a major thing that we need to think about. So that’s what I said.

William J. Hague

Well, those are very valuable comments because you’ve already identified three important issues there. There’s the impact on the young brain, of the brain taking the easy route, AI-enabled, and not developing the right skills. There is the hopes and ambitions of young people who might be put off things that actually could be a great future.

And then there is the resourcing, the possible divide in education. There could be the highly resourced private schools who actually, it widens the gap between the children going there and the state schools. So already three very important issues, and we haven’t even started on adult education, but Omar, you’re a good person to…

Omar Abbosh

I mean, let me amplify some of the points Anna was saying, and I’ll pick up. I mean, so back to the risks. I mean, so one, I mean, Anna said it.

I talk to professors across universities around the world, teachers, et cetera, and what they’ll tell you is that for a kid to even access AI, they need a digital device. Those things are weapons of mass distraction. And, you know, the sort of things that you’ll hear professors complain about is a kid will be literally standing in front of a professor and ask them a question.

And as the professor starts to answer, the kid goes into TikTok. So that’s not even just a question of manners. It’s like the distraction thing is a huge thing.

That’s without getting to AI. The second thing that I think Anna talked about is the gap between not knowing something and knowing something is learning. And learning requires neuroplasticity.

It requires effort. And actually I think that applies to adults as well. In an era where the half-life of skills are declining, adults are also going to have to learn new stuff.

And so if we don’t go to the trouble of learning, that is problematic. So if you use ChatGPT to grab knowledge, you know, you outsource the research component, but then you may not learn enough. And so again we can go to the different stages of learning; in the higher-order cognitive elements of that, you can use AI extremely effectively if you design it right.

But that isn’t necessarily what people automatically will go and do. And then the third thing I’d like to point out is humans are incredibly social. There is a small proportion of people, about 5%, who are effective at self-directed learning, in other words sitting in front of a screen consuming digital content and learning. There’s a high proportion of those in the tech industry. In many, many other industries that doesn’t work; people need human contact. And one of the bigger issues with applying AI in the wrong way in education and in adult learning is that you disconnect the learner from the educator, and you can erode trust there. Because if the teachers are teaching in a particular way with a particular pedagogy, and then you use these tools that come up with information that’s perfectly valid and good but in a different way, then the student can become skeptical about what they’re learning as well. So there are different aspects of this that we have to treat thoughtfully as we apply these tools, because we will, wisely.

William J. Hague

So you’re identifying that there’s going to be a right way and a wrong way to do that. There are ways, as you advocate, for the use of AI in education, including adult education, to enhance critical thinking and mental faculties in general, but it would be easy to do that wrong. And given that presumably we’re going to need much more adult education in this world of AI, with so many people going to have to reinvent themselves and their skills in their working lives, this is gonna be a huge issue, a hundred percent. And then there’s the workplace, so Carl, let me bring you in and comment, because I’m sure this is what we’re here for.

Yeah. Yeah

Carl Eschenbach

One thing I’d like to elaborate on that you said: today, whether it’s our children or it’s adults in their professional lives, what we spend a lot of time doing is engaging with technology, whether it’s your phone, whether it’s your screen, whether it’s your computer.

We are actually engaging with technology. We are the users of technology. And as we think about the power of AI and how it starts to transform our business, something’s going to happen here. We’re going to move from a world where we use technology in front of everything we do to a world where technology starts to work for us and we don’t even know it.

One of the powers of AI in the professional world and in business is this shift from us using and engaging with technology to it starting to move to the background. We don’t have to do that, and we can go and focus on some of these skills that you had on your chart up there. Freeing us up from those mundane tasks, freeing us up from sitting behind a terminal, a PC, a mobile device, to allowing us to do exactly what we’re doing here at the World Economic Forum. There’s a reason everyone descends upon this place every single year: because as humans we are meant to network, we’re meant to collaborate, we’re meant to learn from one another, and we come here to get that, because in our professional lives today the opposite is happening. We are working on technology as opposed to technology working for us. So I think it’s a really important dimension of what AI can bring as a benefit to the world of business and the world of work in the future.

William J. Hague

Right, so that’s a very strong argument; I think that’s very clearly put. But let me just challenge you a little bit on what the incentives will be for employers to do that. Because in education at least, we know that the purpose of the school or the university is, among its purposes, to develop the cognitive skills of the young people. But actually that’s not really the purpose of most employers, is it, or to widen those human skills.

Those employers might be investing huge amounts now in AI. They clearly are. There are billions of pounds that businesses are spending on AI systems.

Somehow they have to get that money back. And is it really one of their concerns as to whether what you’re describing is happening. People can actually enlarge their human skills.

It’s an opportunity to do that. Or is it really an overwhelming opportunity, though, not only to reduce the number of people, but to reduce how much they have to think. And that it’s not really the employer or business’ responsibility to maintain the cognitive skills of their workforce.

Carl Eschenbach

Yeah, that is a great question. And let me address it in two different ways. Let me start with the first.

Today there’s a narrative out there in business around the power of AI and the way to justify the cost or the dollars companies are spending on AI technology. It’s a very simple equation that people use. What is my return on investment for the technology, AI, that I’m spending?

Too often the conversation stops there. How much money am I saving? And when the conversation stops there about pure cost savings through the use of this incredibly powerful technology, a couple things happen.

Number one, our employees become very nervous. When it’s all about ROI, in their minds, it’s about how many jobs will be replaced because of technology. And a level of distrust starts to set in between your employees and management.

It’s all about cost. The narrative, William, needs to shift from cost savings to growth.

If you think about corporate growth over the last 40, 50, 60 years, the greatest driver of corporate growth is productivity gains. The greatest driver of productivity gains is technology. We need to start to have a conversation about how this drives corporate growth.

Every company, including mine at Workday, we have this backlog of things that we need to get done, but we can’t move with speed and purpose and drive business outcomes because it’s not automated. At the same time, we can’t invest enough in our people’s skills. All the things we’re talking about that are on your chart here.

So we need to change the narrative. It’s not purely cost savings, or maybe it is, but what do we do with those cost savings? We reinvest them in the business and people.

Because AI today is talked about as a technology transformation. AI is about business transformation. And when you do business transformation, there’s three pieces.

There’s technology, there’s people, and there’s process. So I advocate for changing that narrative. How do we focus on growth?

The growth comes through humans.

William J. Hague

Very good. And this is what we’re here for, to work out these things. But you can understand my skepticism about that.

And you’re saying we have to change the narrative. You’re not saying this is what all businesses will think. But on all these topics we’ve discussed so far, we can see a lot of dangers emerging where we’re going to have to push people in the right direction.

I’ll come back to Anna, but let me just, Omar, did you want to respond to that?

Omar Abbosh

I’m going to build on what Carl was just saying. So we actually just published some research this week called Mind the Learning Gap. And Carl’s exactly right.

If you’re a business leader, you’re under pressure to deliver results. Your investors want you to invest in AI, but they want the ROI. And so defaulting to using AI for automation is kind of an easy solve.

But I’ll tell you why it won’t work for the long run. First, the automation is simply making what you already have more efficient. So yes, that can drive near term results.

But there’s only so much you can do with your existing process. Businesses that are successful grow; they develop new business models. So that alone won’t deal with it. Secondly, we’re running out of demographics in the West. There are fewer and fewer people.

I mean, that is very, very well documented, and so actually augmenting people with AI is the thing. And so, what the research shows: we looked at the productivity of what happened when IT happened in the 90s. There was a productivity ramp, then it flatlined. AI could give us the next productivity ramp, but it requires us to augment people in their work.

We looked at hundreds of occupations. We looked at the gross value added of those occupations. We looked at where AI can augment or not and what level of efficiency you can drive, and the unlock in the US alone, in white-collar spaces only, is between 4.8 and 6.6 trillion dollars by 2034. At the low end, that’s 15% of the US entire GDP. So if we do this in the right way, it is an enormously beneficial thing for business and for people.

William J. Hague

Okay, so there’s also a great prize here. There’s a right way, and a great long-run benefit there could be, but there are a lot of short-term dangers, and people could take the wrong turnings. So let’s think about what advice we would give to employers, to schools, to governments, and how we win this argument.

So Anna, let me come back to you on that, in education in particular. What do we now need to do?

Anna Frances Griffiths (Vignoles)

So I think you’re quite right: long term, the future might be rosy, but the transition costs are going to be huge. And if we look at previous revolutions of various kinds, the transition costs have been significant. You know, I worked in HR up in the north of England back in the day when there was deindustrialization going on. It was heartbreaking watching all these people who were incredibly productive at one moment basically lose their jobs and therefore need to reskill. So there’s a whole piece about adult retraining, but if I can return to the school system, it is different. You only get one shot at being five years old, right?

Yes. So what we might do in business about a difficult transition can’t necessarily apply to the school system. And the other thing I’d like to pick up on, two points: one, in the early years, we’ve already got evidence that young children don’t respond well to screens, even interactive screens; in a sense they haven’t got those neural pathways there yet, they need that human in the loop, and we’ve seen degradation of language skills as a result of that. So we have to be incredibly careful with what we’re doing in that space.

And you also said that humans are inherently social, well we are, we come to Davos, right? That’s not what we’re seeing in the data post-Covid, when young teens, again a critical period in their biology where they develop those social skills and those social interactions, they were deprived of it and we’re still seeing the consequences with mental health, even learning outcomes, definitely work readiness, right?

So we need to get it right as we’re going through the education system and I think that means more caution and more regulation about what can be trialled on a system when we’re not quite sure whether AI is great at manipulating and encouraging and motivating students or actually could really be a crutch that’s used in the wrong way.


William J. Hague

But let me just press you a bit then: what would that regulation say, or what would be our guidelines as things stand? Because, as you say, there are people now in education and AI is now changing things; certainly at the university level, speaking from Oxford, the vast majority of students are now making use of AI, and indeed we’re encouraging faculty to make use of AI for their own effectiveness. So are there some rough rules about this that we can specify now, or is it too early?

Anna Frances Griffiths (Vignoles)

No, I don’t think it is too early. I mean, in terms of the age appropriateness, trialling, not launching into things where you don’t know the consequences. But also we need to be much more conscious that at the end of all of this, we actually want young people to come out with most of the skills sitting on that list; even if you’re not using it in your job, you’re certainly going to need it to interpret, you know.

You will even need all the mathematical and statistical skills to know whether the AI is making hopeless mistakes. Exactly, so to write it off and say well this generation can skip that bit I think is a huge risk, so I think actually you know if we go step by step there’s a potential here, but we need to just be very clear what the end goal is and that is the development of skills.

The other thing is that, you know, education is inherently a collective endeavour, and we want that to continue. We want it to be social, we want it to be collective, because when they come out the other end they’re no use to you as employees if they can’t work nicely with others, right?

So you have to combine the

William J. Hague

That still has to exist. There is still a teacher standing in front of the class, although there may be some AI-enabled personalized learning going on as well.

Omar Abbosh

You were raising a lot of extremely valid points. I mean, a hundred percent, and I’m with you, and I think we need to treat this extremely seriously. On the one hand, I’ve seen some educational systems in different countries that will literally shut down the use of AI and say, like, don’t use AI, and then people graduate and the employers say, what do you know about AI?

And you’ll find employers today will say, we won’t hire someone if they don’t know AI. So employers need to get better at signaling future demand, and as Anna’s saying, we need local coalitions of lifelong learning ecosystems in education, in government, and in business to set the right standards and the right approaches.

Actually we do know a lot already about what does work and doesn’t work with AI and I think that… Just to tell us about that. So I was gonna go, it’s obviously easier with grown-ups than with small children.

I think we need to distinguish the approaches. So Pearson for example has higher education digital courseware tools with like 10 million students here in the US and we can see with those 10 million students they have millions and millions of interactions every semester with AI study tutors.

And the kids who engage with AI study tutors because they were designed with the learning science that I spoke about earlier, show higher-order cognitive outcomes. They engage in a better way with learning. So there’s a difference between just simply remembering something or understanding it versus applying it and then you get to the higher level of analyzing, critiquing, and creating.

And the kids who engage with AI study tools, we can help them get to those higher orders. So we know that that can work. I mean, Anna of course is right.

The human brain has a wide range of neurodiversity, and in the formation stages of a child’s life I think there’s still a lot more to learn in terms of what really works and what doesn’t work. We’ve seen some evidence that people with ADHD, for example, actually do benefit from an interactive screen format, and they struggle absolutely to sit there and focus on text on a piece of paper. So I think we need to be wise about the steps, but I think the answer isn’t, you know, shut it off or turn it off.

I mean, that will not prepare people for a future in employment.

William J. Hague

But what do we do there? Are we heading for a world where we say, for young people, right, there’s part of your day where you’re really gonna be using these great things you’re just describing, and you’re gonna be using AI to the fullest possible extent, so you are ready for the future, and it will help a lot of people with their cognitive skills. However, there’s the other part of the day where you’re not using it; you don’t have access to that.

You’re gonna have a book and you’re gonna be in a classroom, and you are going to be playing sports, and, you know, we’re literally gonna divide the day up.

So you’ve got the full range of your mind.

Omar Abbosh

Anna knows this: the public educational systems around the world are stretched financially for all sorts of reasons. Eight billion people are about to feel the effects of AI coming into the economy. I mean, it’s gonna be transformational, there’s no question. The people who can help them manage that transformation are the hundreds of millions of educators around the world. So at Pearson we believe that there are at least three areas that you want to happen in public education. One, the foundational skills, so yes indeed maths, literacy, and, incredibly importantly, learning to learn. You can teach people how to learn; we know those techniques, learning science has figured that out. We don’t actually teach it widely in the educational system, but those are foundational skills in a world where the skills keep changing; you need that. Secondly, durable skills, and that’s what I was talking about with signals from employers. In the US and places like Singapore there’s a big focus on career and technical education, where employers are signaling clearly what sort of things they want, like AI skills, and that’s okay, to build that into the curriculum.

And then you have all the human motivational interest skills, team skills, these things, that’s another dimension. So yes, I think curriculum overall, we need to think carefully about how do we evolve it in a way that sets people up for success in their future lives.

Carl Eschenbach

Listen, I am clearly not an educator, right, like you two, and don’t understand the education market and how we go about it to train students of all ages. But I think today, we’re maybe over-rotating about the impact of AI. Because eventually, it will peacefully coexist with humans in our education system.

Today, it’s front and center, right? Think about the internet, way back when. What’s the internet gonna do to learning?

And you guys actually now leverage it as part of your curriculum. I think it’s just gonna become part of how we learn. And I actually think of AI as being the great equalizer.

Because a lot of the things we all have to learn, the general skills we need to be productive in society, our knowledge will now come to us in a very simplified way. AI will serve that up to us, allowing us to go focus on more specialized tasks, all the things you have on your chart here. So I actually think AI will just become part of our daily lives.

Like I said, move from working with technology to technology working with us. And freeing us up from those mundane tasks in the business world that we do every day at our PC, at our terminal, reviewing everything. That goes away.

And we shift the focus from technology, and we focus now on amplifying human potential, not replacing. It’s all gonna come together. But today, it’s just so raw.

It’s so powerful. We’re nervous it’s gonna replace us. But over time, if we do this correctly, like we’ve always got right in our society, I think it just becomes part of our everyday life.

It goes to the background. We leverage it. And we actually advance education.

We advance knowledge. Because the generalized knowledge skills that we always have to focus on, everyone goes to college, you have to take all these core curriculums. You might not have to do that in the future, because that’s going to be served up to us and we don’t have to learn it.

Again, just an outside perspective on the education system and how to think about it.

William J. Hague

No, it’s a really inspiring vision that you’ve got the narrative, but do you accept that could all go wrong? You know, there could be this great scope to be more human, to augment our scope, and probably most of the people who come to the World Economic Forum are really going to be well-equipped to do that. But they might be employing a lot of people who then, actually, we don’t need their skills anymore.

We don’t think they… So, there is a great danger here as well, isn’t there? Of not being a great equalizer, but actually of creating even more inequality in the world.

We could go either way.

Carl Eschenbach

There’s danger if we don’t invest in human skills, and we go completely to the background, right? Right. You talked about how powerful the mind is.

If you compare it to AI, if we’re not exercising the mind, we’re not getting smarter. How does AI get better each and every day? It looks at data, it trains data, and the more it does it, the more powerful it gets.

It’s the same with us. The more we exercise our mind. So, what are we going to exercise it on in the future is really the question, because we’re not going to have to do what we do today to exercise our mind.

We’re going to think completely differently about the future. We’re going to specialize, and I think in the education system, again, educators, so I’m probably not the right person to say this. Today, the more education you get around all of the things on your chart, the better off you’re going to be in the future serving our society.

Today’s specialization, you go to school to be a software developer, 30, 40, 50% of that can be done by AI today, and we’re in the very early stages. People go to school to be a lawyer or an attorney. I will tell you, today, we can use AI to read contracts.

We can use AI to negotiate contracts. That can all be done. So, certain specialization skills that we go to school for today will kind of go to the side and will become more generalist in the future.

William J. Hague

Right, yes, a future of more generalists is perhaps an even wider subject, but I think it’s probably true. But here we’re getting towards, we’re developing a bit of a platform here of things we would recommend. You know, employers have to invest in human skills, people have to learn how to learn, and there are ways already where the use of AI can enhance critical thinking.

We’re going to need some caution and be age-appropriate and retain some important aspects of today’s education while using this. So we are identifying things we need to do, and in a few minutes we’re going to ask if there are any questions around the room. But what else am I missing here?

If we’re preparing a list here of what needs doing, what haven’t I got on my list so far, Anna?

Anna Frances Griffiths (Vignoles)

I would agree wholeheartedly with the vision of the positive future, and I think you’re right on many counts, but bringing us back down with a rather big bump, I was smiling at your reference to a textbook.

So many schools in… Well, a book, I said. Yeah, a book, a textbook.

In my previous life as Professor of Education we struggled with textbooks and books because actually schools can’t afford to have textbooks and books because they get degraded and you have to replace them regularly and they’re very expensive, right?

Internet. Yes, internet-enabled education would be wonderful, but you’d be surprised when you go into so many classrooms what they’re using. And they’re not using the internet partly because, partly your point, if you let children loose in a classroom with an internet connection it’s an absolute disaster. Which means actually we need to invest in software and devices, however integrated, that are locked down, that are appropriate for purpose. This is not the same as the way we’re working in businesses, which is actually to open things up much more.

So just as we want the AI to make people put in more effort in schools, we’re taking it away in the workplace, we need it more locked down in schools and probably more open in the workplace. So I just think we need to be very mindful of that big distinction.

William J. Hague

Yes, there’s a clear difference here between education and employment.

Omar Abbosh

I’ll give you some angles of things that we can do. The genie is out of the bottle on devices where you can go and get any knowledge that is out there. I mean, as Carl said, these LLMs have basically sucked up all human knowledge and you can go and ask about it.

So, the traditional format in education was the teacher would bring you into a classroom, give you the learning, and then send you back home to practice it in homework. Homework, you can now cheat at 100%. So, the format clearly has to change.

Again, the teaching system knows what to do. There’s this thing called flip classroom. So, actually what happens now is you send the kids home and say use as much AI as you like, learn the content, and then come into the classroom and we’re going to assess you.

And you’re going to work with your colleagues and you’re going to do problems together right here in the classroom. So, that deals with the lockdown browser problem that Anna was talking about. So, that actually is fine, but we need to make that flip happen across the whole educational system.

Some of the positives of AI we know about. In the 1980s, it was shown that if you give a human a personal tutor, they could improve their learning acquisition and knowledge by two standard deviations, the two sigma problem. That means an average person goes right to the top of the class.

It’s a huge transformative thing. The problem is you need wealthy people who can pay for personal tutors. Now, as the cost of AI comes down, down, down, which the hundreds of billions of capex is causing, you can put a personal tutor that can travel with you, knows you, has a history with you, knows where you struggle, knows how you learn, and you can bring that around with everyone.

That will happen. That is an incredibly positive thing. Now, we need to do it right.

In the workplace, we launched something just a few months ago with Microsoft called Communications Coach. It turns out that a lot of people want help on things like how do I better communicate, but it’s embarrassing to go and ask others. Communications Coach will literally, after you wrap up a Teams call, say to you, hey, William, I love the way you got your points across, but think about this range of expression.

It will land better. It gives very specific feedback to you. Our GED learners in the US, these are people who are adults who are now later in life going for a high school diploma, have said to us, I love my AI tutor because it’s not embarrassing to ask these questions.

There are a lot of positive scenarios where you can use these tools in the right way, but it has to be done in the right way.

William J. Hague

Right, well, okay, so plenty of positive things. I’m going to see if there are any questions and comments from around the room.

I think we have microphones. Yes, we do have it. So there’s one here, yes, Sam.

Just say who you are and plow in.

Audience

My name is Mohammed Jalfar from Kuwait. In this race between the organic brain and the inorganic brain, is there percentage in asking the people who are developing the inorganic brain to invest in ensuring that the original brain, at the age of five, when it’s most vulnerable, continues to be built up through sleep, through proper nutrition, through interaction, through exercise, so that the artificial brain doesn’t overcome the original brain that created it?

So the question is, could we ask the Nvidias of this world to pay attention, time, and money into protecting, yes.

William J. Hague

Right, we’d have a global fund or a new tax. I’d be interested, as a businessman, I’d be interested in such a fund. I think it is important.

Right, okay, any views on that, Anna?

Anna Frances Griffiths (Vignoles)

I do, I mean, I don’t think we ask nicely. I mean, I think that is actually fundamentally what safety and regulation looks like. What we do in classrooms, particularly in young children, I’m separating them out from adults because it is different, we need to be incredibly careful.

And in any technology, we need to think about risks and downsides. And just a free-for-all in the education system could be a disaster. And the other thing is the scaling issue.

Coming back to your study of two standard deviations: when the study was replicated in the UK, for real, across the nation, it never produces that. And so that’s both the opportunity for AI, because the quality, if you scaled it, would be there, but it’s also the risk: a lot of these things, when they get put into the actual real system, with children who are not likely to end up at Davos, it’s much harder. People who find cognitive effort hard really struggle, and I’ve yet to see the evidence that AI can turn that one around. And that’s your point about preserving the health of that brain.

William J. Hague

Okay, so a fund. Okay, we’re adding to our ideas. Yes, sir, over there.

Audience

Hi I’m Jonathan Haidt author of The Anxious Generation and I want to strongly support the position taken by Anna. AI is coming into childhood and it’s not as though we don’t have a precedent for this. Social media came into childhood and always on on your phone came into childhood and the result is the greatest destruction of human capital in human history.

Hard to compare to World War I and World War II but in terms of the devastation across the world and the ability to pay attention, think, be happy, be not anxious. So childhood is so important to get the skills right and now we’re about to do it with AI. I totally understand the benefits of a specific chat specific tutor that I see but what I can say up front I think what we can all know without even having to wait for the evidence is that when you hack the attachment system, when you have chatbots, when kids are developing relationships with AI before they’re 16 or 18 the results are likely to be devastating.

The tech companies already hacked our attention and now they’re hacking our attachments the results will be much much worse. So I would urge and I guess my question for the two who are more optimistic here is would you endorse a general attitude of basically keep it the hell out of elementary school, keep it the hell out of childhood except for very specific things that have been proven to work.

What I’m afraid of is these companies are just pushing it in, pushing it in, making claims not based on valid research and we’re going to end up repeating the disaster for Gen Alpha that we did for Gen Z.


William J. Hague

Well, that’s a very important question. So I’ll make a comment. I mean, it’s a very tough set of questions. The pragmatist in me says, look, the genie’s out of the bottle whether we like it or not.

Omar Abbosh

And I mean, if you want to challenge a whole model of shareholder capitalism that drives these companies, we could talk about that; that is a conversation. But I like to see the world as it is, and for a long time Silicon Valley has exported technologies that have utility or convenience or, you know, huge uptake, and some of the founders will kind of not think too carefully about the consequences. And those people are very successful in our world.

I think, to Anna’s point, and I’ve been speaking about this for a long time, long before I got to Pearson: every technology has a side effect. Ships have shipwrecks, planes have plane crashes, and the internet has cyber warfare. But humanity has progressed with technology, so applying it right is the thing. So I’m with Anna, but I don’t want to just throw digital devices into the hands of kids at schools at all. I’ve got two young boys. Well, they’re not that young anymore, but I saw actually some of the things that you’ve spoken about, Jonathan. But at the same time, not giving a kid an iPad literally cuts them out of the workforce at some point.

So the kids will have to learn how to interact with the technology, because so much work happens in that way. So, designing the transition at the key moments, building on real-world experiments. And to the point about, you know, can you really get the two sigma at scale, the answer is not yet, actually, you can’t yet. But the thing that an AI tutor in principle could do that most human tutors would not be able to do is diagnose in detail the thousands of learning objectives that you were meant to have gone through and have picked up on, figure out where you got stuck in the past, and then solve and build on that. So I am positive that we can use these things for good, and we have to do it thoughtfully and wisely.

William J. Hague

Now we’re going to stop in four minutes, but did anybody else want to, or we can continue on this point, or, I just want to make sure. Yes, let’s get one more question, but we are going to stop in four minutes, so just a quick point. There you are.

Audience

The child that doesn’t have the iPad before the age of seven, I guarantee that’s the child you’re going to want to hire. That child is the child developing the connection between the hand and the brain. That’s the child taking the physical risks.

And so I just don’t understand how we can come out with those kind of statements.

Omar Abbosh

Yes, sorry, just to clarify, actually, I wasn’t talking about smaller children.

Audience

So what age are we talking about?

William J. Hague

So even in your concept, there is some age differentiation here. A hundred percent. You’re talking more about the AI coach is more an adult thing, isn’t it?

Omar Abbosh

The AI coach, I think, can apply, I mean, for sure with adults, I mean, a hundred percent. I think with late teens as well. But the reality is every human is different.

Our brains are different. And so some people mature and achieve some of the social skills earlier than others. And I think we have to be mindful of that in our design of the evolution of the educational system.

But I think what I’m just saying practically is we can cut it out of the school, but then they’re going to go home and play with it. And so we just need to be thoughtful about how these things can be used properly.

William J. Hague

Last comment, for about one minute. Carl and Anna, did you want to make a comment? We’ve got three minutes left, and I don’t want the last word to all go to Omar. Anna?

Anna Frances Griffiths (Vignoles)

So, we’ve covered most of the issues. Two points: one is that this age sensitivity is coming through loud and clear, and I think that needs to be taken very seriously.

And to your point, Jonathan, we’ve had monopoly power in the past that has been tamed to some degree. We’re not talking about destroying capital or companies; we’re talking about safety regulation in certain settings.

And I don’t think we should give up on that, frankly.

Carl Eschenbach

Yeah. And I’d also say, in business, to your point, the genie has left the bottle. You need to find a way to embrace the technology.

Most people today in the workforce believe they’re competing against AI. They really do.

That’s why they’re worried about their jobs. The truth is, in the workforce, an employee is not competing against AI. They’re now going to be competing against their peers who are leveraging AI.

So I think we need to educate our workforce, and we’re doing that at Workday: how to use AI, the benefits of it, and what we’re going to do with those cost savings to reskill people to be more social, to network more, to be more collaborative, to have empathy.

I personally believe, now in my 37th year of work, that we have lost the social skills in the enterprise, and we’ve got to get them back. And maybe we should think about it slightly differently: how can AI let us bring the human back into work, as opposed to technology driving work outcomes?

Omar Abbosh

Right. Well, I think we’ve had a final question.

I mean, I think what’s loud and clear from this conversation is there’s no positive future AI scenario without a strong focus on human development. I’ll leave it at that.

William J. Hague

Exactly. I think that is exactly right. I know you’re done, and I...

Carl Eschenbach

Just to highlight one thing for you: we have to remember that technology only enables change.

At the end of the day, people are still required to drive change and to implement technology. It comes back to being human-centric in everything we do.

William J. Hague

Very good. Well, I think we’ve identified some very important things that need doing, and that investment in human skills is going to be an absolutely critical part of whatever happens in the future.

There are some views more optimistic than others on our panel, but I think it was Professor Erik Brynjolfsson at Stanford, a professor of AI, who said that the next decade for humanity could be the best ever, or it could be the worst.

And what happens on this topic is going to be one of the determining factors as to whether it’s the best or the worst. So we’re part of what is going to be a much bigger discussion.

Thank you very much to this brilliant panel for their participation. And thank you all for coming. Thank you.

Thank you, William.


Anna Frances Griffiths (Vignoles)

Speech speed

198 words per minute

Speech length

1575 words

Speech time

475 seconds

Children naturally take shortcuts and avoid cognitive effort, risking underdevelopment of neural pathways

Explanation

Griffiths argues that humans naturally conserve energy and children will take the easy route when AI is available. If children don’t develop cognitive skills and neural pathways during critical developmental periods, this creates long-term problems since you only get one chance at being five years old.


Evidence

References evolutionary biology showing all species like to conserve energy, and notes that learning takes effort which people naturally avoid


Major discussion point

Risks and Challenges of AI in Education


Topics

Human rights | Sociocultural


Agreed with

– Omar Abbosh
– Audience

Agreed on

Age-appropriate AI implementation is crucial for child development


Disagreed with

– Carl Eschenbach
– Jonathan Haidt (Audience)

Disagreed on

Optimism vs. caution regarding AI’s impact on human development


Inadequate technology investment in public schools creates AI-enabled divides between private and state education

Explanation

Griffiths points out that public education systems have never had sufficient investment in technology, with teachers struggling to get basic internet connectivity. This creates a divide where some schools use AI effectively while state schools are left behind, exacerbating educational inequality.


Evidence

Notes that teachers would be grateful if they could just get the internet to work and access technology affordably


Major discussion point

Educational Inequality and Resource Gaps


Topics

Development | Economic


Agreed with

– Omar Abbosh

Agreed on

Educational inequality will worsen without proper AI access and implementation


Early childhood exposure to screens degrades language skills and social development

Explanation

Griffiths argues that young children don’t respond well to screens, even interactive ones, because they haven’t developed the necessary neural pathways yet. They need human interaction, and screen exposure has already shown evidence of degrading language skills.


Evidence

References existing evidence of language skill degradation from early screen exposure and post-COVID data showing consequences in mental health and learning outcomes


Major discussion point

Risks and Challenges of AI in Education


Topics

Human rights | Sociocultural


Agreed with

– Omar Abbosh
– Audience

Agreed on

Age-appropriate AI implementation is crucial for child development


Disagreed with

– Omar Abbosh
– Jonathan Haidt (Audience)

Disagreed on

Age-appropriate AI implementation in education


Students lose motivation believing AI will eliminate high-skilled jobs, requiring narrative change about uncertain futures

Explanation

Griffiths observes that student engagement is suffering because they believe AI will leave them only with manual or low-skill jobs. She argues the narrative should emphasize that the future is uncertain and students need every skill they can develop.


Evidence

References pupil engagement being a massive issue and teachers observing student demotivation


Major discussion point

Future Skills and Curriculum Redesign


Topics

Economic | Sociocultural


Many schools lack basic internet connectivity and textbooks, making advanced AI implementation unrealistic

Explanation

Griffiths highlights the practical reality that many schools struggle with basic resources like textbooks and reliable internet. She notes the irony that schools need more locked-down, purpose-built AI systems while workplaces need more open systems.


Evidence

References her experience as Professor of Education where schools couldn’t afford textbooks due to degradation and replacement costs


Major discussion point

Educational Inequality and Resource Gaps


Topics

Development | Infrastructure


Agreed with

– Omar Abbosh

Agreed on

Educational inequality will worsen without proper AI access and implementation


Safety regulation is needed for classroom AI implementation, particularly for young children

Explanation

Griffiths argues that rather than asking technology companies nicely to invest in child development, strong safety regulation is needed. She emphasizes that what happens in classrooms with young children requires extreme caution and that a free-for-all could be disastrous.


Evidence

References the need for age-appropriate trialling rather than launching into unknown consequences, and notes that scaling educational interventions often fails in real-world settings


Major discussion point

Investment and Regulation Needs


Topics

Legal and regulatory | Human rights



Omar Abbosh

Speech speed

205 words per minute

Speech length

2343 words

Speech time

685 seconds

Digital devices are weapons of mass distraction that harm attention spans and learning

Explanation

Abbosh argues that even before AI, digital devices required for AI access are highly distracting. He describes how students will ask professors questions but then immediately go to TikTok while the professor is answering, showing severe attention problems.


Evidence

Cites conversations with professors worldwide who report students accessing TikTok while professors are directly answering their questions


Major discussion point

Risks and Challenges of AI in Education


Topics

Sociocultural | Human rights


AI tutoring systems can improve higher-order cognitive outcomes when designed with proper learning science

Explanation

Abbosh argues that when AI study tools are designed with proper learning science principles, they can help students achieve higher-order cognitive outcomes like analyzing, critiquing, and creating rather than just remembering or understanding.


Evidence

References Pearson’s data from 10 million students using AI study tutors, showing improved engagement and higher-order cognitive outcomes


Major discussion point

Proper Implementation Strategies for AI in Learning


Topics

Sociocultural | Development


Flip classroom model allows AI use at home for content learning while classroom time focuses on assessment and collaboration

Explanation

Abbosh proposes that since homework can now be easily cheated with AI, the traditional model should flip. Students use AI at home to learn content, then come to class for assessment and collaborative problem-solving where cheating is impossible.


Evidence

Notes that LLMs have absorbed all human knowledge making traditional homework obsolete, and references the established flip classroom pedagogical approach


Major discussion point

Proper Implementation Strategies for AI in Learning


Topics

Sociocultural | Legal and regulatory


Age-appropriate implementation is crucial, with different approaches needed for children versus adults

Explanation

Abbosh acknowledges that AI implementation must be age-appropriate and that every human brain is different. He argues for thoughtful design that considers when individuals mature and develop social skills at different rates.


Evidence

References neurodiversity and notes that people with ADHD benefit from interactive screen formats while struggling with text on paper


Major discussion point

Proper Implementation Strategies for AI in Learning


Topics

Human rights | Sociocultural


Agreed with

– Anna Frances Griffiths (Vignoles)
– Audience

Agreed on

Age-appropriate AI implementation is crucial for child development


Disagreed with

– Audience member

Disagreed on

Necessity of early technology exposure for workforce preparation


AI can provide personalized tutoring that adapts to individual learning patterns and struggles

Explanation

Abbosh argues that AI can solve the ‘two sigma problem’ by providing personal tutors to everyone, not just wealthy people. These AI tutors can travel with students, know their history, and understand where they struggle and how they learn best.


Evidence

References 1980s research showing personal tutors improve learning by two standard deviations, and describes Pearson’s Communications Coach that provides personalized feedback after Teams calls


Major discussion point

Proper Implementation Strategies for AI in Learning


Topics

Development | Economic


Agreed with

– Anna Frances Griffiths (Vignoles)

Agreed on

Educational inequality will worsen without proper AI access and implementation


Foundational skills like learning-to-learn, mathematical literacy, and critical thinking remain essential even in AI era

Explanation

Abbosh argues that while AI can handle many tasks, foundational skills remain crucial. He emphasizes that learning science has figured out how to teach people to learn, and these meta-skills are essential when other skills keep changing.


Evidence

References learning science research and notes the importance of durable skills that employers signal demand for, like AI skills in places like Singapore


Major discussion point

Future Skills and Curriculum Redesign


Topics

Sociocultural | Economic


Augmenting people with AI could unlock $4.8-6.6 trillion in value in the US alone by 2034

Explanation

Abbosh presents research showing that properly augmenting people with AI rather than just automating existing processes could generate massive economic value. He argues this requires investing in people alongside AI technology.


Evidence

Cites Pearson’s ‘Mind the Learning Gap’ research analyzing hundreds of occupations and their gross value added, showing potential for 15% of US GDP at the low end


Major discussion point

Workplace Transformation and Human Skills


Topics

Economic | Development


Strong focus on human development is essential for any positive AI future scenario

Explanation

Abbosh concludes that regardless of the AI implementation approach, there is no positive future scenario that doesn’t prioritize human development. This requires thoughtful integration rather than replacement of human capabilities.


Major discussion point

Investment and Regulation Needs


Topics

Development | Human rights


Agreed with

– Anna Frances Griffiths (Vignoles)
– Carl Eschenbach
– William J. Hague

Agreed on

Investment in human skills development is essential alongside AI implementation



Audience

Speech speed

170 words per minute

Speech length

470 words

Speech time

165 seconds

Social media and always-on technology have already caused devastating effects on children’s mental health and cognitive abilities

Explanation

Jonathan Haidt argues that social media and smartphone technology have already caused what he calls ‘the greatest destruction of human capital in human history.’ He points to widespread damage to attention, thinking ability, happiness, and increased anxiety among children.


Evidence

References his book ‘The Anxious Generation’ and compares the scale of damage to World Wars I and II in terms of human capital destruction


Major discussion point

Risks and Challenges of AI in Education


Topics

Human rights | Sociocultural


Agreed with

– Anna Frances Griffiths (Vignoles)
– Omar Abbosh

Agreed on

Age-appropriate AI implementation is crucial for child development


Disagreed with

– Carl Eschenbach
– Anna Frances Griffiths (Vignoles)
– Jonathan Haidt (Audience)

Disagreed on

Optimism vs. caution regarding AI’s impact on human development


AI could hack children’s attachment systems through chatbot relationships, causing worse damage than social media

Explanation

Haidt warns that while tech companies have already hacked human attention systems, AI chatbots will hack attachment systems as children develop relationships with AI. He predicts this will cause much worse developmental damage than social media.


Evidence

Draws parallel to existing attention hacking by tech companies and warns about the more fundamental nature of attachment system manipulation


Major discussion point

Risks and Challenges of AI in Education


Topics

Human rights | Cybersecurity


Disagreed with

– Anna Frances Griffiths (Vignoles)
– Omar Abbosh
– Jonathan Haidt (Audience)

Disagreed on

Age-appropriate AI implementation in education


Technology companies developing AI should invest in protecting young brain development through proper nutrition, sleep, and interaction

Explanation

An audience member from Kuwait suggests that in the ‘race between organic and inorganic brains,’ companies like Nvidia should be required to invest in protecting the development of young children’s brains through basic health and social needs.


Evidence

Frames it as protecting the ‘original brain’ that created AI from being overcome by artificial intelligence


Major discussion point

Investment and Regulation Needs


Topics

Human rights | Development


Students without early technology exposure develop better hand-brain connections and risk-taking abilities

Explanation

An audience member argues that children who don’t have iPads before age seven develop superior hand-brain connections and take more physical risks, making them more desirable employees. This challenges the assumption that early tech exposure is necessary for workforce readiness.


Evidence

References the importance of physical risk-taking and hand-brain connection development in early childhood


Major discussion point

Educational Inequality and Resource Gaps


Topics

Human rights | Sociocultural


Disagreed with

– Omar Abbosh
– Audience member

Disagreed on

Necessity of early technology exposure for workforce preparation



Carl Eschenbach

Speech speed

167 words per minute

Speech length

1679 words

Speech time

602 seconds

AI should shift from humans using technology to technology working for humans in the background

Explanation

Eschenbach argues that AI will fundamentally change the relationship between humans and technology. Instead of people actively engaging with and using technology, AI will work invisibly in the background, freeing humans to focus on networking, collaboration, and learning from each other.


Evidence

Points to the World Economic Forum as an example of humans’ natural tendency to network and collaborate, contrasting this with current work that forces people to sit behind terminals


Major discussion point

Workplace Transformation and Human Skills


Topics

Economic | Sociocultural


Disagreed with

– Anna Frances Griffiths (Vignoles)
– Jonathan Haidt (Audience)

Disagreed on

Optimism vs. caution regarding AI’s impact on human development


Business narrative must change from cost-cutting to growth through human augmentation rather than replacement

Explanation

Eschenbach argues that businesses currently focus too narrowly on ROI and cost savings from AI, which creates employee distrust and fear. The narrative should shift to using AI for growth by reinvesting savings in people and business development rather than just automation.


Evidence

Notes that companies have backlogs of work they can’t complete due to lack of automation, and that productivity gains from technology have historically driven corporate growth over 40-60 years


Major discussion point

Workplace Transformation and Human Skills


Topics

Economic | Development


Agreed with

– Anna Frances Griffiths (Vignoles)
– Omar Abbosh
– William J. Hague

Agreed on

Investment in human skills development is essential alongside AI implementation


Employees compete against AI-enabled peers, not against AI itself, requiring workforce education on AI collaboration

Explanation

Eschenbach clarifies that workers aren’t competing against AI directly, but against colleagues who are effectively using AI tools. This requires companies to educate their workforce on AI collaboration and invest in developing human skills like social interaction and empathy.


Evidence

References Workday’s approach to educating employees about AI benefits and reskilling them for more social and collaborative roles


Major discussion point

Workplace Transformation and Human Skills


Topics

Economic | Sociocultural


Future workforce will become more generalist as AI handles specialized tasks like software development and legal contract work

Explanation

Eschenbach predicts that many current specializations taught in schools will become obsolete as AI can perform those tasks. Workers will need to become more generalist, focusing on the human-centric skills while AI handles technical specializations.


Evidence

Cites examples of AI already handling 30-50% of software development tasks and being able to read and negotiate contracts


Major discussion point

Future Skills and Curriculum Redesign


Topics

Economic | Sociocultural


Investment in human skills development must accompany AI implementation in business contexts

Explanation

Eschenbach emphasizes that successful AI transformation requires investment in three areas: technology, people, and process. He argues that companies must use AI cost savings to reskill employees in social, networking, and collaborative skills that have been lost in modern work environments.


Evidence

References his 37 years of work experience observing the loss of social skills in enterprise settings


Major discussion point

Investment and Regulation Needs


Topics

Economic | Development


Agreed with

– Anna Frances Griffiths (Vignoles)
– Omar Abbosh
– William J. Hague

Agreed on

Investment in human skills development is essential alongside AI implementation



William J. Hague

Speech speed

171 words per minute

Speech length

2040 words

Speech time

712 seconds

Human-centric skills like empathy and active listening become more valuable as AI handles analytical tasks

Explanation

Hague presents World Economic Forum research showing that while mathematical and statistical thinking is easily replaced by AI, human-centric skills like empathy and active listening are not. However, he warns these skills can still erode without regular practice and engagement.


Evidence

References WEF Future of Jobs report showing analytical thinking as most sought-after skill, and displays research slide showing AI’s capability to replace different types of skills


Major discussion point

Future Skills and Curriculum Redesign


Topics

Economic | Human rights


Agreed with

– Anna Frances Griffiths (Vignoles)
– Omar Abbosh
– Carl Eschenbach

Agreed on

Investment in human skills development is essential alongside AI implementation


Agreements

Agreement points

Age-appropriate AI implementation is crucial for child development

Speakers

– Anna Frances Griffiths (Vignoles)
– Omar Abbosh
– Audience

Arguments

Children naturally take shortcuts and avoid cognitive effort, risking underdevelopment of neural pathways


Early childhood exposure to screens degrades language skills and social development


Age-appropriate implementation is crucial, with different approaches needed for children versus adults


Social media and always-on technology have already caused devastating effects on children’s mental health and cognitive abilities


Summary

All speakers agree that children’s developing brains require special protection and age-appropriate approaches to AI implementation, with particular caution needed for early childhood development


Topics

Human rights | Sociocultural


Investment in human skills development is essential alongside AI implementation

Speakers

– Anna Frances Griffiths (Vignoles)
– Omar Abbosh
– Carl Eschenbach
– William J. Hague

Arguments

Strong focus on human development is essential for any positive AI future scenario


Investment in human skills development must accompany AI implementation in business contexts


Business narrative must change from cost-cutting to growth through human augmentation rather than replacement


Human-centric skills like empathy and active listening become more valuable as AI handles analytical tasks


Summary

All speakers agreed that successful AI integration requires parallel investment in developing human capabilities rather than simply replacing them


Topics

Development | Economic | Human rights


Educational inequality will worsen without proper AI access and implementation

Speakers

– Anna Frances Griffiths (Vignoles)
– Omar Abbosh

Arguments

Inadequate technology investment in public schools creates AI-enabled divides between private and state education


Many schools lack basic internet connectivity and textbooks, making advanced AI implementation unrealistic


AI can provide personalized tutoring that adapts to individual learning patterns and struggles


Summary

Both speakers agree that without proper resources and thoughtful implementation, AI will exacerbate existing educational inequalities between well-funded and under-resourced schools


Topics

Development | Economic | Infrastructure


Similar viewpoints

Strong regulatory intervention is needed to protect children from potentially harmful AI implementations, rather than relying on voluntary corporate responsibility

Speakers

– Anna Frances Griffiths (Vignoles)
– Audience

Arguments

Safety regulation is needed for classroom AI implementation, particularly for young children


Technology companies developing AI should invest in protecting young brain development through proper nutrition, sleep, and interaction


AI could hack children’s attachment systems through chatbot relationships, causing worse damage than social media


Topics

Legal and regulatory | Human rights


AI’s greatest business value comes from augmenting human capabilities rather than replacing workers, requiring a fundamental shift in how companies approach AI implementation

Speakers

– Omar Abbosh
– Carl Eschenbach

Arguments

Augmenting people with AI could unlock $4.8-6.6 trillion in value in the US alone by 2034


Business narrative must change from cost-cutting to growth through human augmentation rather than replacement


AI should shift from humans using technology to technology working for humans in the background


Topics

Economic | Development


Educational approaches must evolve to maintain essential cognitive skills while adapting to AI capabilities, requiring both curriculum changes and new pedagogical methods

Speakers

– Anna Frances Griffiths (Vignoles)
– Omar Abbosh

Arguments

Foundational skills like learning-to-learn, mathematical literacy, and critical thinking remain essential even in AI era


Students lose motivation believing AI will eliminate high-skilled jobs, requiring narrative change about uncertain futures


Flip classroom model allows AI use at home for content learning while classroom time focuses on assessment and collaboration


Topics

Sociocultural | Economic


Unexpected consensus

Technology industry responsibility for child development

Speakers

– Anna Frances Griffiths (Vignoles)
– Audience
– Omar Abbosh

Arguments

Safety regulation is needed for classroom AI implementation, particularly for young children


Technology companies developing AI should invest in protecting young brain development through proper nutrition, sleep, and interaction


Strong focus on human development is essential for any positive AI future scenario


Explanation

Unexpectedly, even the business-oriented speakers agreed that technology companies should bear responsibility for protecting child development, suggesting broad consensus on corporate accountability beyond traditional market mechanisms


Topics

Legal and regulatory | Human rights | Development


Fundamental limitations of current educational technology infrastructure

Speakers

– Anna Frances Griffiths (Vignoles)
– Omar Abbosh

Arguments

Many schools lack basic internet connectivity and textbooks, making advanced AI implementation unrealistic


Digital devices are weapons of mass distraction that harm attention spans and learning


Explanation

Despite Omar’s role as CEO of an educational technology company, he agreed with Anna’s stark assessment of current educational technology problems, showing unexpected alignment on infrastructure limitations


Topics

Infrastructure | Development


Overall assessment

Summary

The speakers demonstrated strong consensus on three main areas: the critical importance of age-appropriate AI implementation for children, the necessity of investing in human skills development alongside AI adoption, and the risk of AI exacerbating educational inequalities without proper resources and planning


Consensus level

High level of consensus with significant implications for policy and implementation. The agreement across business leaders, educators, and researchers suggests these concerns transcend sectoral interests and represent fundamental challenges that require coordinated responses from multiple stakeholders including governments, educational institutions, and technology companies


Differences

Different viewpoints

Age-appropriate AI implementation in education

Speakers

– Anna Frances Griffiths (Vignoles)
– Omar Abbosh
– Jonathan Haidt (Audience)

Arguments

Early childhood exposure to screens degrades language skills and social development


Age-appropriate implementation is crucial, with different approaches needed for children versus adults


AI could hack children’s attachment systems through chatbot relationships, causing worse damage than social media


Summary

Griffiths and Haidt advocate for extreme caution or exclusion of AI/technology from early childhood education, while Abbosh supports thoughtful age-appropriate implementation that still includes technology exposure for workforce readiness


Topics

Human rights | Sociocultural


Necessity of early technology exposure for workforce preparation

Speakers

– Omar Abbosh
– Audience member

Arguments

Age-appropriate implementation is crucial, with different approaches needed for children versus adults


Students without early technology exposure develop better hand-brain connections and risk-taking abilities


Summary

Abbosh argues children need technology exposure to avoid being cut out of the workforce, while an audience member contends that children without early iPad exposure develop superior cognitive and physical abilities


Topics

Human rights | Sociocultural | Development


Optimism vs. caution regarding AI’s impact on human development

Speakers

– Carl Eschenbach
– Anna Frances Griffiths (Vignoles)
– Jonathan Haidt (Audience)

Arguments

AI should shift from humans using technology to technology working for humans in the background


Children naturally take shortcuts and avoid cognitive effort, risking underdevelopment of neural pathways


Social media and always-on technology have already caused devastating effects on children’s mental health and cognitive abilities


Summary

Eschenbach presents an optimistic vision of AI as a great equalizer that will enhance human potential, while Griffiths and Haidt emphasize the serious risks and potential for devastating consequences, particularly for children


Topics

Human rights | Sociocultural | Development


Unexpected differences

The role of technology companies in protecting child development

Speakers

– Audience member from Kuwait
– Anna Frances Griffiths (Vignoles)

Arguments

Technology companies developing AI should invest in protecting young brain development through proper nutrition, sleep, and interaction


Safety regulation is needed for classroom AI implementation, particularly for young children


Explanation

Unexpectedly, there was disagreement on whether to ask technology companies to voluntarily invest in child development or to implement mandatory safety regulation. Griffiths explicitly rejected the ‘asking nicely’ approach in favor of regulation


Topics

Legal and regulatory | Human rights | Development


The inevitability of AI integration in education

Speakers

– Omar Abbosh
– Jonathan Haidt (Audience)

Arguments

Age-appropriate implementation is crucial, with different approaches needed for children versus adults


AI could hack children’s attachment systems through chatbot relationships, causing worse damage than social media


Explanation

A surprising disagreement emerged between Abbosh’s pragmatic acceptance that ‘the genie is out of the bottle’, which calls for thoughtful integration, and Haidt’s call to ‘keep it the hell out of elementary school’, representing fundamentally different philosophies about technological inevitability


Topics

Human rights | Sociocultural | Legal and regulatory


Overall assessment

Summary

The discussion revealed significant disagreements on three main areas: the appropriate age for AI/technology introduction in education, the necessity of early technology exposure for workforce preparation, and the overall risk-benefit assessment of AI implementation for human development


Disagreement level

Moderate to high disagreement with significant implications. While speakers agreed on the importance of human development, they fundamentally disagreed on implementation approaches, risk tolerance, and regulatory needs. These disagreements could lead to inconsistent policies and approaches that either over-restrict beneficial AI applications or fail to protect vulnerable populations, particularly children. The divide between optimistic business perspectives and cautious educational/child development perspectives suggests potential conflicts in policy development and resource allocation.




Takeaways

Key takeaways

There is no positive future AI scenario without a strong focus on human development and investment in human skills


The business narrative must shift from AI as cost-cutting to AI as growth enabler through human augmentation rather than replacement


Age-appropriate implementation is crucial – different approaches needed for children versus adults, with particular caution required for early childhood


AI can enhance cognitive development when properly designed with learning science principles, but can also erode skills without regular practice


Educational inequality will worsen as AI creates divides between well-resourced schools and under-funded public systems


Future workforce will become more generalist as AI handles specialized tasks, requiring focus on foundational skills like learning-to-learn


Human-centric skills (empathy, collaboration, critical thinking) become more valuable as AI handles analytical tasks


Technology companies have already caused significant harm to children through social media and always-on devices, risking worse damage with AI


Resolutions and action items

Employers should invest AI cost savings back into human skills development and reskilling programs


Educational systems should implement flip classroom models where AI is used for content learning at home and classroom time focuses on collaboration and assessment


Technology companies developing AI should be required to invest in protecting young brain development through safety regulation


Local coalitions of lifelong learning ecosystems should be formed between education, government, and business to set appropriate standards


Curriculum should be redesigned to focus on three areas: foundational skills (math, literacy, learning-to-learn), durable skills (career/technical education), and human motivational skills


Workforce education programs should teach employees how to collaborate with AI rather than compete against it


Unresolved issues

How to prevent AI from becoming a ‘weapon of mass distraction’ in educational settings while still preparing students for an AI-enabled future


Whether to implement broad age restrictions on AI access in schools or rely on case-by-case assessment based on individual development


How to fund adequate technology infrastructure in under-resourced public schools to prevent widening educational inequality


What specific safety regulations should govern AI implementation in classrooms, particularly for young children


How to balance the need for AI literacy with protecting children’s cognitive development and attachment systems


Whether a global fund or tax on AI companies should be established to protect child development


How to scale successful AI tutoring interventions that work in research settings to real-world educational systems


Suggested compromises

Divide educational time between AI-enabled learning periods and traditional non-digital learning to ensure full cognitive development


Implement AI gradually with careful monitoring and evidence-based approaches rather than wholesale adoption or complete prohibition


Focus AI implementation on late teens and adults while maintaining more restrictive approaches for elementary school children


Use locked-down, purpose-built AI tools in schools rather than open internet access to balance learning benefits with distraction risks


Allow AI use for content acquisition while maintaining human-led assessment and collaborative problem-solving in classrooms


Thought provoking comments

We’re up against evolutionary biology. All species like to conserve energy. Humans are no different. Learning takes effort. Naturally, people will tend to take the easy route… But as a child, if you don’t get the chance to develop those cognitive skills, to develop those neural pathways, we’re in trouble.

Speaker

Anna Frances Griffiths (Vignoles)


Reason

This comment is profoundly insightful because it frames the AI education challenge through the lens of fundamental human biology and development. It moves beyond surface-level concerns about technology to identify a core biological reality that could undermine cognitive development if AI is implemented incorrectly.


Impact

This comment established the foundational framework for the entire discussion, introducing the concept that there’s a fundamental tension between AI’s efficiency and the human brain’s need for effortful learning. It shifted the conversation from general AI concerns to specific developmental neuroscience, and other panelists repeatedly referenced this biological imperative throughout the discussion.


The narrative needs to shift from cost savings to growth. If you think about corporate growth over the last 40, 50, 60 years, the greatest driver of corporate growth is productivity gains… We need to change the narrative. It’s not purely cost savings, or maybe it is, but what do we do with those cost savings? We reinvest them in the business and people.

Speaker

Carl Eschenbach


Reason

This comment is thought-provoking because it reframes the entire business case for AI from a zero-sum replacement model to a growth-oriented augmentation model. It challenges the dominant narrative that AI adoption is primarily about reducing costs and headcount.


Impact

This comment created a pivotal moment in the discussion, shifting from identifying problems to proposing solutions. It prompted Hague to express skepticism about whether businesses would actually follow this approach, leading to a deeper exploration of business incentives and the need to ‘change the narrative’ rather than assume it will happen naturally.


In this race between the organic brain and the inorganic brain, is there percentage in asking the people who are developing the inorganic brain to invest in ensuring that the original brain, at the age of five, when it’s most vulnerable, continues to be built up through sleep, through proper nutrition, through interaction, through exercise?

Speaker

Mohammed Jalfar (audience member)


Reason

This comment is remarkably insightful because it proposes a novel solution: making AI developers financially responsible for protecting human cognitive development. It reframes the issue as a collective responsibility rather than just an educational or parental one.


Impact

This question introduced an entirely new dimension to the discussion – the concept of a ‘global fund’ or taxation mechanism. It moved the conversation from theoretical concerns to concrete policy proposals and sparked discussion about regulation and corporate responsibility.


Social media came into childhood, and always-on phones came into childhood, and the result is the greatest destruction of human capital in human history… When you hack the attachment system, when you have chatbots, when kids are developing relationships with AI before they’re 16 or 18, the results are likely to be devastating.

Speaker

Jonathan Haidt


Reason

This comment is profoundly thought-provoking because it provides historical precedent and warns of an even greater potential catastrophe. Haidt draws on his research to argue that we’re about to repeat and amplify the social media disaster with AI, moving from hacking attention to hacking human attachment systems.


Impact

This comment created the most dramatic shift in the discussion’s tone, moving from cautious optimism to urgent warning. It forced the more optimistic panelists to defend their positions more rigorously and introduced the concept of ‘hacking attachment systems’ as a new category of risk. The discussion became more polarized and intense after this intervention.


An employee is not competing against AI. They’re now going to be competing against their peers who are leveraging AI.

Speaker

Carl Eschenbach


Reason

This comment is insightful because it reframes workplace anxiety about AI in a completely different way. Instead of humans vs. machines, it’s humans with AI vs. humans without AI, which has profound implications for training and adoption strategies.


Impact

This reframing helped resolve some of the tension created by Haidt’s warnings by offering a practical perspective on workplace dynamics. It shifted the discussion toward practical implementation strategies and highlighted the inevitability of AI adoption in professional contexts.


The child that doesn’t have the iPad before the age of seven, I guarantee that’s the child you’re going to want to hire. That child is the child developing the connection between the hand and the brain. That’s the child taking the physical risks.

Speaker

Anonymous audience member


Reason

This comment is thought-provoking because it directly challenges the assumption that early tech exposure is necessary for future success. It argues for the value of analog childhood experiences in developing crucial cognitive and physical capabilities.


Impact

This comment forced the panelists to be more specific about age-appropriate AI use and highlighted the tension between preparing children for a digital future while preserving essential developmental experiences. It led to more nuanced discussion about developmental stages and individual differences.


Overall assessment

These key comments transformed what could have been a superficial discussion about AI in education into a profound exploration of human development, evolutionary biology, corporate responsibility, and societal values. The discussion evolved through several distinct phases: Anna’s biological framing established the scientific foundation, Carl’s business narrative reframing introduced solution-oriented thinking, the audience questions about corporate responsibility and childhood protection raised policy implications, and Haidt’s intervention created urgency and moral stakes. The interplay between optimistic and cautionary voices created a dynamic tension that prevented the discussion from becoming either naively enthusiastic or paralyzingly pessimistic. Instead, it produced a nuanced framework for thinking about AI implementation that balances innovation with human development needs, ultimately concluding that ‘there’s no positive future AI scenario without a strong focus on human development.’


Follow-up questions

How do we ensure people retain the ability to question, reason and judge in a world where machines will increasingly be thinking for us?

Speaker

William J. Hague


Explanation

This is the central question of the discussion that requires ongoing exploration as AI adoption accelerates


What are the specific age-appropriate guidelines for AI use in education, particularly for children under 16?

Speaker

Anna Frances Griffiths (Vignoles) and Jonathan Haidt


Explanation

There’s urgent need for research on developmental impacts of AI on young brains and establishing safety protocols


How can we design AI systems in education that prompt more effort rather than reduce it?

Speaker

Anna Frances Griffiths (Vignoles)


Explanation

Critical for ensuring children develop necessary neural pathways and cognitive skills during formative years


What are the long-term effects of AI tutors and chatbots on children’s attachment systems and social development?

Speaker

Jonathan Haidt


Explanation

Concerns about potential devastating effects similar to social media’s impact on mental health and cognitive development


How can employers effectively signal future skill demands to educational institutions?

Speaker

Omar Abbosh


Explanation

Need for better coordination between business and education to prepare students for AI-enabled workforce


What specific regulations and safety measures should govern AI implementation in schools, particularly state-funded systems?

Speaker

Anna Frances Griffiths (Vignoles)


Explanation

Preventing equity gaps and ensuring responsible deployment of AI technology in educational settings


How can the ‘flip classroom’ model be scaled across entire educational systems effectively?

Speaker

Omar Abbosh


Explanation

Need to understand implementation challenges and success factors for widespread adoption


What evidence exists for AI’s ability to help students who find cognitive effort particularly challenging?

Speaker

Anna Frances Griffiths (Vignoles)


Explanation

Critical gap in research on AI’s effectiveness for struggling learners in real-world settings


Should there be a global fund or tax on AI companies to invest in protecting young brain development?

Speaker

Mohammed Jalfar (Audience member)


Explanation

Exploring funding mechanisms to ensure AI development considers impacts on childhood cognitive development


How do we change the business narrative from AI cost savings to growth and human augmentation?

Speaker

Carl Eschenbach


Explanation

Need strategies to shift corporate mindset toward investing in human skills alongside AI implementation


What are the optimal transition strategies for adults who need to reskill in an AI-enabled economy?

Speaker

Anna Frances Griffiths (Vignoles)


Explanation

Understanding how to manage the significant transition costs and support displaced workers


How can we restore social skills in the enterprise workplace that have been eroded by technology?

Speaker

Carl Eschenbach


Explanation

Need research on rebuilding human-centric skills like collaboration, empathy, and networking in business settings


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.