The Intelligent Co-Worker
21 Jan 2026 12:15h - 13:00h
Session at a glance
Summary
This World Economic Forum panel discussion explored the evolving role of AI in the workplace, examining the transition from AI as a tool to AI as a “coworker” or intelligent agent. The panelists, including CEOs from HP, BCG, and several AI companies, debated whether AI should be considered a true coworker or remain classified as an advanced tool. While some participants preferred terms like “agentic workflows” over “AI coworkers,” others described scenarios where AI functions more like a collaborative partner than a traditional tool.
The discussion highlighted the importance of safety and reliability in AI deployment, with examples from healthcare applications showing how rigorous testing and human oversight remain crucial. Panelists emphasized that successful AI implementation requires comprehensive organizational change, including process redesign, employee training, and cultural adaptation, rather than simply deploying new technology. They noted that companies achieving the best results treat AI adoption as a CEO-level priority requiring investment in skills development and change management.
A significant theme emerged around AI creating abundance rather than scarcity, with examples of AI enabling previously impossible tasks like calling thousands of patients during emergencies. However, the panel also addressed concerns about data inequality and the need for AI sovereignty in developing nations, where infrastructure limitations create barriers to adoption. The discussion touched on potential job displacement, particularly for entry-level positions, while suggesting that AI might fundamentally reshape organizational structures and career paths. Overall, the panelists presented an optimistic view of AI’s workplace integration, emphasizing the need for thoughtful implementation that enhances rather than replaces human capabilities.
Key points
Major Discussion Points:
– Defining AI as “Coworker” vs “Tool” – Panelists debated whether AI should be conceptualized as a coworker or remain viewed as a tool, with varying perspectives on terminology like “agents” versus “agentic workflows.” The consensus leaned toward AI being more sophisticated than traditional tools but not quite human coworkers.
– Safety, Reliability, and Testing of AI Systems – Extensive discussion on ensuring AI safety through output testing rather than just input criteria, with examples of hiring thousands of nurses to test healthcare AI and implementing redundant safety systems and human escalation protocols.
– Data Challenges and Global Equity – Significant focus on data scarcity in the Global South, the need for AI sovereignty, and addressing the digital divide that could become an “AI jobs divide.” Discussion included the infrastructure requirements and compute capacity needed for equitable AI adoption.
– Organizational Change Management – Strong emphasis that technology isn’t the bottleneck – the real challenge is changing how people work, requiring transformation of processes, skills, leadership, and culture. Panelists stressed this should be a CEO-level priority rather than delegated down.
– Workforce Impact and Career Evolution – Discussion of how AI will reshape job structures, eliminate traditional entry-level learning paths, and require rethinking of career progression. The conversation explored both job displacement concerns and the potential for “infinite abundance” creating new types of work.
Overall Purpose:
The discussion aimed to explore how AI is evolving from a simple tool to something more akin to a workplace coworker, examining the practical implications for organizations, workers, and society as they integrate AI agents into their operations.
Overall Tone:
The discussion maintained a notably optimistic tone throughout, with panelists emphasizing AI’s potential benefits for worker satisfaction and productivity. While acknowledging challenges like safety concerns, data equity, and organizational change management, the conversation consistently framed these as solvable problems rather than insurmountable barriers. The tone remained collaborative and forward-looking, with panelists building on each other’s points rather than expressing significant disagreement.
Speakers
Speakers from the provided list:
– Mat Honan – Moderator/Host of the World Economic Forum panel on “The Intelligent Coworker”
– Kian Katanforoosh – Founder and Chief Executive Officer of Workera (Note: The moderator initially mispronounced the name as “Kian Canton-Ferouche”)
– Munjal Shah – Founder and Chief Executive Officer of Hippocratic AI
– Kate Kallot – Founder and Chief Executive Officer of Amini AI
– Enrique Lores – President and CEO of HP
– Christoph Schweizer – Chief Executive Officer of BCG (Boston Consulting Group)
– Audience – Audience member who asked a question about ROI on AI investments
Additional speakers:
None identified beyond those in the provided speaker names list.
Full session report
The Intelligent Coworker: AI’s Evolution in the Workplace
World Economic Forum Panel Discussion – Comprehensive Report
Introduction and Context
This World Economic Forum panel discussion, moderated by Mat Honan, brought together five industry leaders to explore the evolving role of artificial intelligence in the workplace, specifically examining the transition from AI as a tool to AI as an “intelligent coworker.” The panel included Kian Katanforoosh (Founder and CEO of Workera), Munjal Shah (Founder and CEO of Hippocratic AI), Kate Kallot (Founder and CEO of Amini AI), Enrique Lores (President and CEO of HP), and Christoph Schweizer (CEO of Boston Consulting Group).
The discussion aimed to address fundamental questions about how organisations should conceptualise, implement, and manage AI systems that are becoming increasingly autonomous and capable of performing complex workplace functions. The conversation maintained a notably optimistic tone throughout, with panellists emphasising AI’s potential benefits for worker satisfaction and productivity whilst acknowledging significant challenges around safety, equity, and organisational transformation.
Defining AI in the Workplace: Tool, Coworker, or Something Else?
The panel’s most fundamental disagreement centred on how to conceptualise AI’s role in the workplace. This definitional debate revealed deeper philosophical differences about AI’s capabilities and appropriate deployment strategies.
Kian Katanforoosh immediately challenged the panel’s premise, stating: “I’m not a fan of calling AI agents, agents, neither I’m a fan of calling it a coworker… AI being good at a set of tasks is very different than AI taking on an entire job.” He advocated for viewing AI as “agentic workflows” rather than coworkers, emphasising that whilst AI excels at specific tasks, transforming entire jobs requires broader organisational change. His perspective grounded the discussion in empirical reality, noting that “most of the predictions on job XYZ is going away over the last three years has been wrong.”
Munjal Shah offered a more nuanced framework, describing AI as functioning in three capacities: “copilots, autopilots, and infinite pilots.” He explained that infinite pilots enable previously impossible tasks at scale, citing an example where his healthcare AI system made 16,000 calls to patients from one Medicare Advantage plan during a heat wave—directing people to cooling centers and providing personalised care that would have been impossible with human resources alone. This concept of “infinite pilots” introduced the transformative idea of abundance economics, suggesting AI enables entirely new categories of work rather than simply replacing existing functions.
Kate Kallot maintained that AI remains fundamentally a tool, albeit one that makes professions more attractive rather than replacing workers. She referenced radiology as an example, noting that despite predictions (later confirmed by moderator Mat Honan to be from Geoffrey Hinton) of AI displacement, the field has become more appealing to medical professionals.
In contrast, Enrique Lores embraced the coworker concept more fully, describing scenarios where AI transitions from tool to coworker when performing complex, autonomous functions like supply chain planning. He noted that at HP, AI systems are increasingly handling sophisticated decision-making processes that previously required human intervention.
Christoph Schweizer similarly endorsed the coworker framework, describing AI as becoming coworker-like when it can “dynamically query data, identify patterns, and create deliverables autonomously.” He emphasised that AI coworkers require the same treatment as human colleagues: “onboarding, training, feedback.” He provided examples of AI systems at BCG that can analyse manufacturing data across multiple facilities to identify unusual patterns that human analysts might miss.
Safety, Reliability, and Quality Standards
The discussion of AI safety revealed both consensus on the importance of rigorous testing and different approaches to implementation standards.
Munjal Shah presented the most comprehensive approach to AI safety, challenging a fundamental assumption about AI development: “There’s a misnomer out there that if you train your AI on the right data, then it’s safe… what we realized was the first thing you have to do to ensure safety and kind of responsible AI is you have to do something that almost no horizontal model does, which is output testing.” His company hired 7,500 U.S. licensed nurses to test AI outputs, implementing “triply redundant systems” with “models that check models that check models” and human escalation protocols where “one out of every 50 times” cases are escalated to human oversight.
Shah also introduced the concept of “artificial jagged intelligence” (AJI), a phrase he noted was coined by Andrej Karpathy, explaining that AI systems can excel at complex tasks whilst failing at seemingly simple ones. He illustrated this with an example where AI could perform sophisticated medical consultations but struggled with basic scheduling tasks.
Enrique Lores offered a more pragmatic perspective on AI standards, arguing: “We need to be careful not to be more demanding with the AI co-workers or with the models than what we are with our own employees because nobody’s perfect, everybody makes mistakes… they are more accurate than the humans that we have had for many years providing that work.” His approach emphasised comparative rather than absolute standards, suggesting that AI systems should be evaluated against human performance benchmarks rather than theoretical perfection.
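The safety architecture Shah describes, red-flag triage before any model output, chained checker models, and escalation to a standby human, can be sketched in a few lines. This is an illustrative sketch only: the panel describes the approach in broad strokes, and every function name below is hypothetical, not part of any real Hippocratic AI system.

```python
# Hypothetical sketch of "models that check models" with human escalation,
# based only on the behaviours described in the panel discussion.

RED_FLAGS = {"chest pain", "shortness of breath"}  # triggers mentioned by Shah

def generate_reply(patient_utterance: str) -> str:
    # Stand-in for the primary care-management model.
    return f"Noted: {patient_utterance}. Let's review your care plan."

def checker_approves(reply: str) -> bool:
    # Stand-in for one checker model; a real system would call another model.
    return "diagnose" not in reply.lower()

def handle_turn(patient_utterance: str) -> str:
    # Run triage first: safety red flags escalate before any model replies.
    if any(flag in patient_utterance.lower() for flag in RED_FLAGS):
        return "ESCALATE_TO_HUMAN"
    reply = generate_reply(patient_utterance)
    # Triply redundant checking: every checker in the chain must approve,
    # otherwise the call is handed off to a standby human nurse.
    checkers = [checker_approves, checker_approves, checker_approves]
    if all(check(reply) for check in checkers):
        return reply
    return "ESCALATE_TO_HUMAN"
```

The point of the sketch is the ordering: escalation logic sits outside and ahead of the generative model, so an unsafe scenario reaches a human even if every model in the chain is confident.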
Data Challenges and Global Equity
Kate Kallot brought crucial global perspective to the discussion, highlighting how AI adoption challenges vary dramatically between developed and developing nations. She identified a cascading series of digital divides that threaten to exacerbate global inequalities: “If I don’t actually provide them the infrastructure, the compute capacity, if I don’t provide them access, if I let a digital divide, which is compounded by a data divide, become a compute divide, it’s going to run right towards becoming an AI jobs divide.”
The Global South faces particular challenges with data scarcity, as many systems remain analogue and paper-based, limiting AI adoption opportunities. Kallot warned that without addressing these infrastructure gaps, AI could perpetuate extractive cycles where developing nations provide raw materials (data) whilst developed nations capture the value-added benefits of AI processing and insights.
She discussed the importance of governments serving as “first customers” for AI systems, helping to build domestic capabilities whilst addressing local needs. Kallot referenced work with USAID and economists studying project performance data to understand these challenges better.
The demographic challenges are staggering: Kallot forecast that emerging markets face a massive job gap, with 1.2 billion people entering the workforce against only 400 million projected jobs. The resulting 800 million job shortfall represents one of the most significant challenges discussed during the panel.
Organisational Change and Implementation Strategies
Strong consensus emerged among panellists around the critical importance of organisational transformation in successful AI implementation. Christoph Schweizer emphasised: “The technology is not the bottleneck. The models work… They will succeed if they really change how their people work. And to do that, you need to change processes, organization, incentives, skills, leadership, culture… make this a CEO problem and don’t delegate it somewhere down in the organization.”
Schweizer traced the evolution of AI applications over the past 12-18 months from basic call centre automation to sophisticated functions in “pharmaceutical, R&D, regulatory, clinical trial management,” “coding and software maintenance,” “sophisticated underwriting,” and “marketing, content production.”
Enrique Lores reinforced this theme, noting that companies must redesign entire processes rather than simply using AI to assist existing workflows to achieve significant impact. He observed that organisations achieving the best results treat AI adoption as a CEO-level priority requiring substantial investment in change management and skills development.
Kian Katanforoosh advocated for top-down approaches with clear requirements and rewards rather than treating AI learning as optional benefits. He emphasised the importance of mandatory skill development programmes, using a Stanford classroom example to illustrate how mentorship and structured learning approaches are more effective than voluntary initiatives.
Kate Kallot highlighted a common implementation mistake: companies expecting immediate results without proper workforce preparation and reskilling to manage AI systems effectively.
Workforce Impact and Skills Development
The discussion of workforce impact revealed optimistic projections about job enhancement. Kate Kallot noted that AI makes professions more attractive, citing radiology as an example where AI adoption has enhanced rather than diminished the field’s appeal. Enrique Lores supported this with data from HP’s Workforce Relationship Index, reporting that workers using AI regularly report higher job satisfaction and better ability to meet personal and professional goals.
Christoph Schweizer provided concrete evidence from BCG, citing the "77% of BCG [staff] who say thank God I have AI" as a demonstration of high satisfaction rates among AI users.
The panel addressed career development challenges when traditional entry-level positions are automated. Kian Katanforoosh emphasised the importance of distinguishing between “durable skills” (problem-solving, coding) and “perishable skills” that change every six months. Notably, he maintained that coding remains a crucial skill despite widespread predictions of AI making programming obsolete, leading to a brief exchange with moderator Mat Honan about whether this view was becoming unfashionable.
Enrique Lores suggested that future enterprise structures will be “radically different from current pyramid organizations” as AI transforms functions and processes, though the exact form of these new structures remains unpredictable.
Measuring Success and ROI Challenges
The panel addressed significant challenges in measuring AI’s impact using traditional productivity metrics. An audience member raised concerns about companies struggling to achieve ROI on AI investments, citing examples of “six minute productivity gain” and “36 minutes” of time savings that are difficult to capture and monetise.
Munjal Shah provided concrete ROI examples to counter this concern: “$200 returning $1.3 million, $10,000 returning $1.5 million, $2,000 returning $2 million.” He advocated for experimental approaches: “Organizations should focus on experimentation with multiple parallel use cases rather than deploying AI like traditional software.”
Christoph Schweizer advocated for new measurement approaches, emphasising “adoption and usage,” “employee satisfaction surveys,” and the importance of tracking daily versus weekly usage patterns rather than traditional productivity metrics. He noted that success should be measured through these qualitative indicators rather than solely quantitative productivity measures.
Enrique Lores reinforced this theme, noting that AI provides significant time savings that improve employee satisfaction even when ROI is difficult to quantify using conventional metrics.
The Abundance Paradigm and Future Implications
Munjal Shah introduced one of the discussion’s most thought-provoking concepts: the transition from scarcity-based to abundance-based thinking about work and services. He projected: “There will be eight billion humans or whatever number we’ll have then, but there will be 80 billion AIs… I think by and large, we’re gonna find all of these use cases of things we never thought to do until we had an infinite supply at low cost.”
This abundance paradigm suggests that AI enables previously impossible one-to-one ratios in sectors like healthcare and education. The heat wave example illustrates how AI can provide personalised attention at scales that would be impossible with human resources alone, fundamentally challenging traditional economic assumptions about resource constraints.
Mat Honan added perspective on generational adaptation, noting how his children take autonomous vehicles like Waymo for granted, suggesting that AI integration may become similarly normalised for future generations.
Key Challenges and Future Considerations
Several significant challenges emerged from the discussion. The 800 million job gap in emerging markets represents a pressing demographic and economic challenge requiring innovative approaches to job creation and economic development.
The question of career progression in an AI-transformed workplace remains complex, particularly regarding how professionals develop expertise when traditional entry-level work is automated and what career paths look like when AI handles foundational training work.
The challenge of capturing ROI from fragmented productivity gains—saving minutes rather than hours across many tasks—represents a practical implementation hurdle that requires new measurement frameworks and business metrics.
Questions about equitable access to AI benefits remain pressing, particularly for developing nations that risk being left behind in the AI transformation due to infrastructure and resource constraints.
Conclusion
This World Economic Forum panel demonstrated sophisticated understanding of AI’s workplace integration challenges extending far beyond technology deployment. The discussion revealed consensus on key principles: successful AI adoption requires comprehensive organisational transformation, measurement approaches must evolve beyond traditional productivity metrics, and AI generally enhances rather than replaces human capabilities.
The most significant insight was recognising that organisational change management, rather than technology, represents the primary bottleneck in successful AI adoption. This suggests companies should focus AI investments on change management, skills development, and cultural transformation rather than simply acquiring new technologies.
While maintaining an optimistic tone, the panel realistically acknowledged significant challenges around global equity, career development, and measurement frameworks. The discussion ultimately presented AI workplace integration as a complex transformation requiring thoughtful leadership, comprehensive change management, and continued attention to human factors and global equity concerns.
Session transcript
Hello, and welcome to the World Economic Forum’s panel on the Intelligent Coworker. AI today is evolving from a tool to a coworker, transforming how individuals work, how organizations operate, and how societies adapt. The big question we’re going to try and get at today is what happens when AI takes a seat at the table in the workforce.
We’ve got a great panel for you today. I’m glad everybody can be here to join us. Sitting to my left, we have Enrique Lores, President and CEO of HP, Christoph Schweizer, who’s the Chief Executive Officer of BCG, Kate Kallot, who’s the Founder and Chief Executive Officer of Amini AI, Kian, whoops, I’m sorry, Munjal Shah, the Founder and Chief Executive Officer of Hippocratic AI, and Kian Canton-Ferouche, who’s the Founder and Chief Executive Officer of Workera.
Please welcome our esteemed guest. To get started, I’d like to basically ask this question, which may be an obvious question given the topic of the panel, but when we’re talking about AI as a coworker, what does that mean versus AI as a tool?
When we’re getting agents in the workforce, when we’re starting to turn over autonomy to them, where’s the line? Maybe I’ll start with you, Kian, and then we can come back this way. What’s the difference between a coworker and a tool for an AI?
Personally, I’m not a fan of calling AI agents, agents, neither I’m a fan of calling it a coworker. Fair enough. I can explain.
I think that right now, if you look at the publications that come out of the Frontier Labs, they all concern tasks. AI is good at these tasks, is better at that task. Humans have jobs that can make up hundreds of tasks at times.
It turns out that AI being good at a set of tasks is very different than AI taking on an entire job. In fact, most of the predictions on job XYZ is going away over the last three years has been wrong. We still have translators.
We still have customer support managers. We still have a lot of careers that we thought will go away way faster than they actually did. In fact, for a job to change, you need people to change, organizations to change, workforce to rewire.
We know from Cloud transformation that that can take decades. There are still businesses operating on pen and paper, 20 years after digitalization. That’s why I like to call it agentic workflows.
Professor Ng also talks about that rather than coworker.
Fair enough, fair enough. Please, Manjal, jump in.
So again, we break the coworker framework down into three buckets. Copilots, autopilots, and infinite pilots. And copilots are kind of working with the human in the loop.
Autopilots do the task autonomously, but typically a task you do today. And then infinite pilots is this idea of things you could never do until you had an infinite supply or you had a very, very low cost. And I think that while AI is going to augment our abilities, if you fast forward five, 10 years, there will be eight billion humans or whatever number we’ll have then, but there will be 80 billion AIs.
And meaning they won’t be doing the things we do today. I think only a little bit of that will be impacted. I think by and large, we’re gonna find all of these use cases of things we never thought to do until we had an infinite supply at low cost.
So one example is we recently called 16,000 people for this one Medicare Advantage plan in the U.S. during a heat wave. And we called them at the hottest time of the day and educated them on what to do, told them where the cooling center was.
I think even in some cases, helped them get there. And that was something you would have never done because you would have had to find thousands of people to call, educate, do an assessment. And these are the types of things I think we’re gonna see a lot more of.
Kate?
I would also tend to agree. For me, AI is always a tool, not a co-worker. And the reason is because, like Kian mentioned, we were forecasting that radiologists at some point will disappear because computer vision and AI started to surpass recognizing certain diseases and reading radiographies better than the human eye, right?
But actually today, there are more radiologists actually getting into the profession. And what happened is that the tool has made that profession sexy again, right? And I’m thinking that I do believe that we’re gonna see this happening everywhere.
We are seeing this happening in governments we work with. As we start deploying AI systems and AI tools across governments, government’s workers, civil servants are finding a renewed interest of doing their works differently with a different interface. But that interface is a tool, not their co-worker, because that tool cannot actually make value-based decisions, cannot say for a specific government or for specific citizens, this is the best outcome, because that tool doesn’t have that context just yet.
I see, yeah. I believe that was Geoffrey Hinton, right? Who said that?
Yeah. Then it was, how long ago was that now? It was.
Like 10 years? Eight years?
Eight years ago.
Eight years? Yeah. Okay.
Christoph.
So honestly, I’m not going to get into definitions. What is a co-worker or a tool? But I’m going to give you a practical example from my own firm.
So at BCG we have 34,000 people, we've been around for 62 years, and for decades we've been collecting our expertise and benchmarks and data about all sorts of things in the business world at our clients that we serve.
Now we have also collected a lot of information in our knowledge management systems over many years. And now with the advent of AI you could use these knowledge management systems in a very different way. You could say well we are about to meet that pharma company that has 27 factories around the world.
Can you please pull all the benchmarks from our gazillions of files and tell us what the best manufacturing sites look like and tell us how does this pharma company compare to that? That’s when AI really started to kick in and you got a lot more out of it than through kind of the traditional queries. Where we are now is you can further augment that and I do think you are now starting to be in a reality where it does feel like a co-worker, whether you call it that or not.
So what happens you get the things from our own database. You can then say please also pull all the information that’s publicly available from analyst reports and recent kind of expert kind of assessments etc. And please also after looking at the data tell me what are the 10 most unusual and remarkable patterns in the data of those 27 factories for that pharma company.
And you can even take it one step further and say well please now put that on a coherent set of eight slides that we could use for an initial workshop. So it goes from a relatively static query into a very dynamic query and now it goes into augmented information. It goes into getting stuff done and then it does feel like a co-worker.
And I think we are at that point. It’s a reality. We very actively and happily build our own agents and we embrace it in many, many things that we do.
And I think we are at this point that AI is becoming much more than a smart tool.
I want to come back and find out what some of those things you’re using for are. But first, Enrique, please.
Yeah, I’ll use also a couple of real examples. When I use AI to prepare for earnings and to ask AI to help me to prepare questions, I’m using it as a tool. When we use AI to replace and to change, how do we do supply chain planning in the company?
It feels more like I’m a co-worker than a tool. Or if I go into a Waymo car and I ask the car to drive me to San Francisco, it is a co-worker or a co-driver or a co-something. It is not a tool.
So this is clearly something that we see is going to continue to grow more and more. Differently from other panels, I think these are going to be really integrated into our companies.
It’s amazing to me how normalized Waymos have already become, how normalized the autonomous cars are. Like my kids ride in them all the time. And it’s just, it’s a bizarre world.
They take it for granted. But, Manjal, I want to come back to something you said. You talked about this heat wave where you did 16,000 automated calls.
Obviously, when you’re doing something like that, you have to make sure that whatever you want to call them that are making the calls, are safe and reliable, can be trusted. Talk to me a little bit about that. How does that happen?
So, you know, there’s a misnomer out there that if you train your AI on the right data, then it’s safe. I don’t know if you guys remember, at the very beginning, there was a thing called PubMed GPT, trained only on PubMed, a nice, high-quality data source. It still made up stuff.
And so what we realized was the first thing you have to do to ensure safety and kind of responsible AI is you have to do something that almost no horizontal model does, which is output testing. So we literally hired 7,500 U.S. licensed nurses.
because we operate largely within the scope of an RN. RNs in the U.S. don’t prescribe or diagnose but they do a ton of care management.
And so we hired 7,500 of them. We had them call the AI. We had them act like a patient.
We had said, hey you’re a nurse. You know what mistakes it made. Tell us all the mistakes.
And then one out of every 50 times they got a human nurse on the other line and they marked all the mistakes they made. And we said, all right, we're not going to ship this until we are as safe as a human. And actually we're now much safer than that.
But we created an empirical output testing model, I call it, rather than an input criteria for whether or not your model is safe. And I think more and more people can do it. You can’t really do it on a horizontal model because it’s just an infinite number of outputs.
But you can do it on a vertical model. You can do it if you roll out one use case at a time and you can test it. It’s expensive but I think ultimately you have to do some of these responsible things to do it.
We also have triply redundant systems. People keep asking me, who builds a safety system that doesn't have redundancy? So this idea of, you know, one model to rule them all, you know, this one ring to rule them all concept.
I mean, this is not how any safety system is developed anywhere in any industrial application. So we have models that check models that check models. And so these are, you know, some of the techniques you have to do.
And then the last part I’ll just say is, in a health care call, if I say I’m having chest pains or shortness of breath, you know, you need to run something like a triage protocol and assess is this an immediate red transfer to a human being.
And so we actually built in an automatic ability to run triage and transfer the call immediately to a human. And so there are humans standing by. They're not in the loop.
So it’s not a human in the loop. But we will kick out to a human if there’s an unsafe scenario because it can happen on any patient call.
Can I ask how often that happens? Like how often is it?
It varies widely. I mean, if it's a congestive heart failure post-discharge from a hospital, you know, and this is day two, day one, it can be one out of every ten calls. But if it's a blood pressure check-in of a remote patient monitoring that the guy didn't upload in days and we're calling to kind of get the data, it can be one in a thousand, one in ten thousand.
So it widely varies. It’s just really use case dependent
Kate, I want to come to you. When we're talking about data, it's really important that this data not be biased, especially if it's in the workplace, and maybe it's somewhere data's been scarce or even extractive. How does that play out in places where there aren't such rich data sets?
Yeah, so from a Global South standpoint, and double-clicking on Africa and other emerging economies: we are still living in a reality where most of our systems operate on analog, paper-based, very unstructured data, PDFs that are fragmented and scattered everywhere. And even when you look at government systems, most of our governments still operate in paper format today. So while there is an appetite to adopt a lot of these tools and intelligent co-workers and adopt AI at scale, there are still a lot of limitations that we have to solve before we can actually get there. Because if we are not adapting and grounding those models in our realities, we actually risk perpetuating extractive cycles where the model will make a confident decision on very incomplete records. So for example, if your civil registries are not digitized, if your land registries are not digitized, how are you going to be able to manage your country efficiently and make decisions with regard to your citizens?
Are you going to listen the model that seems very intelligent and confidently tell you a decision or will you actually bring in? Civil servants and continue having the human in the loop for for for that purpose So that’s why we focused on really supporting government to build knowledge base Which are specific to each ministries each governments each entities that really help them Digitize and transform their data because for us is the first step before they are able to adopt some of those tools
Kian, we've been talking mostly here so far about basically training AI, but I think there's also a world where AI is training people and helping people become upskilled in some ways. Can you tell me a little bit about that, and how that works in the workforce?
Yeah. So if you look at the industry, and I'll give the perspective of the enterprise, a summary across 44 enterprises: what you want as the outcome is the ideal scenario where whatever skill the business needs, everybody has it internally. You have it there.
It's a zero-skills-gap situation. That's what you want at all times. If the skills gap grows, you want to close it.
You want to do that with an approach that involves everyone, that doesn’t leave people with a fear factor, that gives them the confidence that they can make it, and they’re part of this. So that’s where the question of AI mentorship comes in. How do you design a good AI mentor that can perform these outcomes?
I think a lot of people narrow the mentorship problem to learning, because we’ve all seen the last decade of we take a course, and we finish a course, the adoption is not great, it’s very one-size-fits-all.
So the problem of mentorship is actually not that much about learning, it’s about something else. I’ll give you a story from our class at Stanford. One of the things that Stanford does very generously is they put our classes on YouTube once in a while, and on campus we might have 1,000 students, online we have hundreds of thousands, millions sometimes.
If you were to call those students online and ask them, what’s the difference between a Stanford student and how you feel? They would almost never tell you it’s the material. They would tell you that they don’t understand the bar for excellence, and they don’t know how far they are from that bar.
When the Stanford student has friends at OpenAI, at Meta, at Google, they have a constant feedback loop. They know the rewards that they need to attain and they know what an opportunity opening means. So what a mentor needs to be is really able to set good goals for a worker.
to set good rewards for a worker and to assess them effectively so that they understand their gaps. If you actually do that well, the learning problem is not an issue. People with a big enough carrot are going to do their best in order to achieve that carrot.
Enrique, you wanted to weigh in there, please.
I wanted to go back to the conversation about data and accuracy, because I think it's very important. We need to be careful not to be more demanding with the AI co-workers or the models than we are with our own employees, because nobody's perfect; everybody makes mistakes.
And what we have learned, for example, we have been using AI models in our call centers. They are not perfect. Many times they give the wrong answer, but they are more accurate than the humans that we have had for many years providing that work.
And the satisfaction with our clients when they call is much higher. So I think we need to balance how demanding we are just because we as humans make many mistakes regularly.
So just to pile on to that, look, I mean, we work with thousands of clients in their AI adoption and scaling and getting to real impact. And exactly in line with what you were saying, Enrique, I mean, the technology is not the bottleneck. The models work.
Also the agents, which are still early days, work or they are going to work. That does not make a company successful or fail when it comes to their AI journey. They will succeed if they really change how their people work.
And to do that, you need to change processes, organization, incentives, skills, leadership, culture. And frankly, that is a much, much, much bigger factor. And I mean, our number one conviction as we work with leading companies around the world when it comes to AI is make this a CEO problem and don’t delegate it somewhere down in the organization.
I mean, this needs to bring together all these important aspects. And if CEOs take ownership, if they upskill themselves, if they make bold investments and upskill their organization,
then you get into this positive flywheel where people all of a sudden see: oh wow, this really helps me. Oh wow, this is a great experience. I love this.
And then you get into a kind of self-reinforcing circle. And I feel at this point in time, the whole world still talks way too much about the technology question. I think we should talk a whole lot more about the skills and the positive sentiment and the change management around it.
I think that’ll make it or break it.
I saw everybody nodding while he was saying that. Does anyone want to weigh in on that?
I think one thing on the skills topic: Karpathy has a very good phrase he coined called AJI, artificial jagged intelligence. And that's what we're finding all over the place. You'd be so surprised at the things that AI does amazingly, and then you'd be like, are you an idiot?
How come you couldn’t do that? So I’ll give you an example on both sides. We also deployed our AI to convince patients to do follow-up tests.
Because a lot of times, if you have a lung nodule, you’re supposed to do a CT scan. We got 1,700 patients that basically the health system had tried to get to do it and they had refused. And they had sent them a letter, text, everything.
And we called them. And the AI was so persuasive, it convinced 250 of them to do it. And these have been quote, lost to follow-up, which is the healthcare way of just saying, we gave up.
But there was actually a cancer in there. So it’s like, we saved a life. But now think about, but on the other hand, they’re like, oh great, but now you can also use this to do scheduling.
Sure, straight up, there’s times available. Okay, but the AI, it doesn’t have common sense. So give me three times.
Sure, Mr. Shaw, would you like 7:00, 7:05 or 7:10? Whereas we would space them out.
Like we would try out different things. Or, oh, I want two times for me and my wife, but back to back as we drive in together. Oh God.
I mean, GPT-4o, we benchmarked it on scheduling accuracy: 23% of the time it'll hallucinate. And you literally need models checking models checking models to actually get that sub-1%, because you can't have a clinic where a quarter of the people walk in and say, I'm here for my appointment.
And it's like, what appointment? So you would have thought scheduling is an easy skill, and you would have thought persuasion is a hard skill, and yet it's exactly the opposite.
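The "models checking models" idea Shah describes can be sketched numerically. The 23% base rate is the figure cited on the panel; the per-verifier catch rate and the independence of the verification passes are assumptions for illustration only:

```python
def residual_error_rate(base_error: float, catch_rate: float, n_verifiers: int) -> float:
    """Error rate left after n independent verification passes,
    assuming each pass catches the same fraction of remaining errors."""
    return base_error * (1 - catch_rate) ** n_verifiers

# 23% base hallucination rate (cited on the panel); a 90% catch rate is assumed.
print(residual_error_rate(0.23, 0.90, 0))  # 0.23 (no verifiers)
print(residual_error_rate(0.23, 0.90, 2))  # ~0.0023, i.e. sub-1%
```

Under these (assumed) numbers, two stacked verifiers are enough to push a quarter-of-all-calls error rate below one in a hundred, which is why chaining checks pays off so quickly.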
Christoph, I’m gonna come back to you because what you’re talking about a little bit there I believe is helping boost productivity in the workplace. How can you tell if that’s happening? How can you tell if you’ve set up agents or again, co-workers, whatever we’re calling these AIs we have operating internally in a successful way?
Well, the dream of every CEO is that you can perfectly measure productivity. I can tell you the vast majority of us actually find it pretty elusive. You have a lot of fun talking to your IT function or to your marketing function trying to get an explanation how productivity changed.
Yeah, good luck with that. So what do you really measure? And we found that incredibly instructive and predictive of success.
You measure, first of all: does a certain organization, do the employees in that organization, habitually use the toolkit? What we see is that if people use an agentic AI solution, or any AI solution, only once a week, it's a distraction. It makes you less productive.
Once people start using it multiple times every day or at least once a day, it becomes a norm. You learn faster, you get experience, you get a whole lot more out of it. So you do track adoption and usage.
The second thing is you do what all large companies do: employee satisfaction surveys. Do you actually enjoy this?
And look, we have seen that at BCG ourselves. We now over the course of 2025 had an exponential increase in the number of people who said this is a real help and I’m happier because of the AI tools I have. We are now at 77% of BCG who say thank God I have AI.
And it was much lower. And so I do think you have to manage some of these adoption, happiness, appreciation factors, and then you will get to productivity and eventually you can also measure that. But I do feel, again, as I said earlier, the whole business world talks about parameters as if they were God-given and perfectly precise and measurable.
I think there’s a lot of kind of measuring what your workforce does, how they work, how they feel. That is going to be very predictive for success.
Let me share a data point that kind of supports what Christoph was saying. Every year we conduct a survey that we call the Workforce Relationship Index, and we basically ask people all over the world, multiple companies and countries, how do they feel about their work? Are they able to meet their own personal goals and professional goals?
And something we have learned is that those using AI regularly in their work have a higher satisfaction level than those that don't. So this shows that people really are starting to see the value that these tools bring, and they really feel not only more productive, but also better about what they do.
The AI is having a very positive 360 review, is that right? Exactly.
By the way, you say that jokingly, it’s actually, I mean, if you follow the title of this panel that it’s a co-worker, I mean, any co-worker that’s a human, you spend time to recruit it, you find it, then onboard it, instruct it, ramp it up, give it feedback, train it, and then promote it for bigger use, or sometimes also let it go.
You have to handle any agentic AI or co-worker AI solution the same way, and you should not be negligent about that whole early part of it. There is onboarding, there is training, and you can measure the contribution as you measure it for any human being. And I mean, if you take that whole co-worker thesis seriously, and we do, then treat it as such.
I was joking, but I agree with you. Kate, I’m gonna come back to you here. To your earlier points about data scarcity, I’m curious if you have thoughts about where the line is in terms of having AI, like, augment human judgment, and where we should stick to humans.
So, I’ve been sitting on the Global Leadership Council for Reimagining Aid, and I’m no aid expert, right? But the reason why they decided to have someone focused on AI in the council was because everybody believes that today we need to rethink, deconstruct and reconstruct how we’re thinking about development and how we’re thinking about having technology embedded into the ways we’re making decisions.
Right now, when you think about development, we see countries in the Global North making decisions on where to invest in problems in the Global South, but without really integrating the data that’s on the ground.
One of the biggest examples of that has been USAID. So, when USAID was dismantled, there is an economist who started working on kind of like building a knowledge base of all the projects that USAID had done in the past and kind of building a knowledge management system to say, listen, we’ve been doing this for decades and decades and decades.
There is a treasure trove of data of how those projects have performed on the ground, where did we invest the money and how can we use these insights to be able to make better decisions and reconstruct that system in a better way.
So that, for me, is a great use of AI. Where I think we shouldn't cross the line is in having AI make decisions over which citizens, for example, or which community in your country deserves more to receive that capital than any other.
Because there’s still uniqueness and opportunities and challenges that the model is not able to understand. And this is where all our uniqueness as different cultures, communities, countries is coming into play. But for us to get there, I want to come back to a bit of the elephant in the room, which for me is compute capacity.
Because I feel like we've been talking from a place of abundance, but in the countries I operate in, we're not just talking about upskilling the workforce. We're looking at the fact that we're entering a new digital economy. And we forecast that over the next 10 years, 1.2 billion people in emerging markets will reach the age of entering the workforce, but only 400 million jobs are forecast to be created.
That’s a big gap. It’s a big gap, 800 million, most of them in Africa. What am I going to do with them, right?
If I don’t actually provide them the infrastructure, the compute capacity, if I don’t provide them access, if I let a digital divide, which is compounded by a data divide, become a compute divide, it’s going to run right towards becoming an AI jobs divide.
And we can’t let that happen. So I think we need to kind of have mixed emotions when we’re addressing that question and also understand that there are realities on the grounds that need to be addressed. And that’s infrastructure, that’s sovereignty, that’s bringing GPUs.
We were talking about this right before: you've been struggling to get GPUs in Africa. Bringing in GPUs, and upskilling our people to be able to use those systems, because it's becoming the default interface for all of your companies. We also need to be able to benefit from that.
How do you make AI sovereignty happen, if you have an opinion on this?
So that’s what we focus on in my company, right? We build sovereign AI for countries in the global south. We work with government to transform their data.
We support them to understand that you don't have to build a gigawatt factory to actually have your own AI system. There is a different way to do it, and it's to understand the minimum viable compute infrastructure you need to keep your critical data. I'm not talking about the entirety of the country's data, but just your critical data as it relates to your citizens, to healthcare, sovereign in your country and within your borders.
And then there is another layer to that, which is being able to access some of the models, to be able to fine-tune, to be able to integrate some of those localized data pipelines in those models so they are reflective of your countries and your realities.
And then the first customer is the government, because governments are still driving a lot of the innovation in a lot of the economies in the global south. And governments have to understand that they need to be the first ones to offtake and show the rest of the ecosystem how it’s done, and then be able to support developers, startups and innovators with access to compute.
Very good. Kian, I’m going to change the subject a little bit and come to you now, because what we’re talking about here is making sure that humans have agency, I think. How do you make sure, how do companies make sure that their workers have agency, that the AI systems that we’re bringing into the workforce are benefiting the workers themselves?
Yeah, agency is trending. It’s an important topic. You can hire for it, but most organizations today have to build internally.
There's just not enough AI-native talent out there, and they're all sucked up by the hyperscalers and the AI startups, so you have no choice but to build internally. Now, to build those types of behaviors, you sort of need to reinvent HR, reinvent L&D. Last decade, L&D and HR were seen as a benefit.
It’s learning as a benefit. It’s self-directed learning. Here’s the gym, become a bodybuilder.
We know it’s not as easy as it seems. What I’ve seen companies do that worked is be much more top-down. The companies that make learning a business imperative trigger certain changes internally.
Oftentimes, companies are scared to do that: oh, I'm going to require you to have an AI driver's license. It turns out the response I've seen from workers is that we would rather be required to do something and be rewarded for it. If the company is doing that, it's probably because they're not going to fire us.
They’re going to, in fact, invest in us for the future. I think the companies that are top-down, that are very clear about rewards, are the ones who manage to get agency out of their employees. Additionally, going back to Kate’s point, we don’t fully know the skills that are going to be useful for the future, but we can separate those skills into two categories.
The durable skills, the one that we know better, are going to be useful 10 years from now. Problem-solving, critical thinking, communication, coding, in my opinion. We also know there’s perishable skills that change every six months, and so companies have to understand what is their strategy in durable and perishable skills, and how you handle these differently, because they’re so different.
And that’s also an important consideration.
It’s interesting you’re a coding bull still. People are… it’s falling out of fashion, I’ve heard.
No, I think everybody needs to be able to code. I’m not saying write code. I’m saying, you know, work with one of those tools and build your own stuff.
And Andrej has been talking about that a lot, I fully agree.
Enrique, I want to come to you on something. There’s… I think there’s another elephant in the room, which is that you’re starting to hear about job losses in terms of entry-level jobs.
And a lot of the… you know, a lot of the way that you grow as a person within an organization is by doing these entry-level jobs. You learn from them, you get somewhere else.
My first job was… I was, you know, a fact checker. And it’s, you know, it helped me learn how to be a better reporter, how to be an editor.
So if we’re assigning some of this, you know, for lack of a better term, grunt work to, you know, to AI agents. We’re having them make calls, we’re having them, you know, organize tasks. That may be the ways that people have traditionally learned.
How can we make sure that the, you know, that early career workers are able to learn enough about the professions that they can grow in them?
I think the question is even broader than that. Because what is very hard to predict today is what is going to be the structure of an enterprise 5, 10, 15 years from now. If you think about how any of our companies is organized, it’s almost a pyramid.
And you have entry-level workers and then they… you start making progress as you spend more time or you become more proficient. What is going to be the structure of a company that has been fully transformed by AI is going to be radically different.
Today we are organized by functions that are driven by the processes we have been using for the last 30, 40 years. As Christoph was saying, AI is going to start by transforming these processes.
When you transform them, you’re not going to need these functions anymore. You may need different type of functions. And what is the impact on the overall organization is something that we will be learning all together during the next years.
I think the only thing we can know is going to be different from what we have seen until now.
Nick, can I jump in on this? I think we’re getting a glimpse into what that looks like, but I completely agree with you that you can always see the first order impacts of a new tech, but it’s really hard to see the second order impact.
But one of the things we’re starting to see is this abundance thesis. Let’s take health care. What’s the ideal health care staffing?
It’s really one-to-one. Education has that same property, right? One teacher to one student is the best way to learn.
But we only have, in the U.S., 5 million nurses for 360 million people. And I remember my mom: she was diagnosed with high blood pressure six years ago, and she came home from the doctor and told me she was fine.
She didn’t tell any of us, and she didn’t take her blood pressure med. And so then, fast forward six years, and we’re in the hospital one night, because she had 220 blood pressure and had the beginnings of congestive heart failure. And I felt horrible as a child.
I said, oh my god, I should have been on her, but I didn't know, because she came in and said, oh, the doctor said I'm the healthiest 78-year-old he knows.
Okay, great, mom. But nobody from the hospital called her. Nobody from the doctor's office said, hey, Mrs. Shaw, you didn't refill your prescription for your high blood pressure meds. In fact, you haven't refilled it for... oh, what's your blood pressure, ma'am? Can you take it right now?
And so I think there’s an infinite abundance we can absorb in health care. And so the big idea starts to—I mean, I think we have to rethink and say, what’s the ideal staffing level that gives us the best health outcomes in that case, in this vertical? And maybe it’s so large that that’s what the AI is doing, rather than us.
And then what are the humans doing? Maybe the humans are supervising all the AIs. Maybe the humans are the escalation point for all the AIs.
Maybe the humans are helping to come up with novel use cases. Like, I think for all of human history, we have assumed scarcity in almost everything. We go to war over scarcities, pretty much.
You know, we fight over islands that are not called the name of the thing they are, over scarcity. But ultimately, for the first time in human history, we might have infinite abundance.
And I think we haven’t even begun to think that way, because our entire civilizations and our ways of thinking are built off of scarcity. And for the first time, we have to move to this new framework, and we have to bring it to all parts of the world.
I appreciate that. And we’re getting close on time, and I’m going to come to audience questions in just a moment. But before we do, Christoph, I want to ask you, because you’ve got a really broad view of a lot of clients.
Again, you run a giant enterprise yourself, and then you see all these other organizations. What trends are you seeing people in the workplace take up in terms of bringing in agents, bringing in these co-workers or not-co-workers? What are you seeing start to take off across the industry?
Well, what is fascinating for me is how much the functional mix of our AI work for clients has evolved over the past 12, 18 months. Initially, when Gen AI came to the world, there was a lot of use case around the call center, the customer contact center, automate that, make the agent, the human agent, more productive, better scripted, etc.
Very plausible, very intuitive, it’s happening. Over the last 12, 18 months, we do now see that the use of AI and agentic AI is going into the deep technical value-added areas of the most sophisticated companies. So what do I mean by that?
It goes into pharmaceutical, R&D, regulatory, clinical trial management. It goes into, at the tech companies, the coding and software maintenance documentation testing. It goes into sophisticated underwriting at major insurance companies.
It goes into the marketing, content production, and campaign design at the world's best consumer goods companies. So, I mean, with all due respect to call centers, that's really important, but we are now seeing it go into the functions that make you either a winner or a loser in your industry. That's a drastic, massive change.
The second change is what I said earlier. I mean, people are realizing, okay, I have a technology problem to solve, but I do really have an organizational and workforce problem to solve. And then the third thing is this whole question, Enrique, that you are also teeing up.
I mean, an organization is not something static that is an org chart. I mean, there are human beings in every one of these boxes who do things that qualify them to also eventually do other things. There’s learning, there’s kind of progression, there’s move into other roles, et cetera.
And I feel this whole question, how many people do you need and in what role? Okay, sure. I mean, we all have a job to manage that.
I think we all have much bigger questions. What is the qualitative career path of a person in healthcare, of a nurse? What’s the career path of a person in HR, in consulting, in sales and marketing, in technology going to look like?
How are my coders of the year 2035 getting trained today so that they are world class then? I think this qualitative question is becoming bigger than the quantitative one, even though from a societal perspective, of course, there will be lots of talk about this is the quantitative.
This has been a very optimistic panel and I appreciate it. I want to turn to the room now. If you have a question, please raise your hand.
We can bring a mic around. Yes, right here.
So compared to an 800-million-person jobs gap, this is a very prosaic question, excuse me, but we started with AI doesn't do jobs, it does tasks. And then we heard that you need to completely rethink skills and career pathing. A small question, but I think it's critical.
A lot of companies are not getting ROI on their AI investments because people don't know what to do with the six-minute productivity gain, or the 36 minutes, right? We don't save three months. We save bits and pieces of time here and there.
What do we do about that soon?
I think it starts from really redesigning the processes. I think when this happens, it's because you're using AI to help you on your daily job. To really see the big impact, redesign your process, and once you have redesigned it, see how you will be using AI and technology; then the impact will be very different.
We're seeing a lot of impact on the... oh, sorry, go ahead. To give a couple of numbers, we're seeing a lot of impact on these emergent use cases. So one of our health systems spent $200 and got back $1.3 million.
Another one spent $10,000 convincing patients to move back to their in-house pharmacy for their specialty meds. Got back $1.5 million a year. Another use case that we saw was $2,000 in the one I mentioned earlier about the lung nodule screening.
They got back $2 million. So maybe we have to think differently because you’re right, what I call fragmentation of ROI doesn’t allow you to capture it. But again, when you go to the emergent, then it’s very different because nobody’s doing it, so you’re not getting fragments.
At the same time, these three, five, 15 minutes have a big impact in how your team member feels about what he or she does. And this is very valuable as well. So you may not be able to improve or to reduce the cost by 15%, but that person feeling better is gonna have an impact in the company also.
So I’m not gonna talk about how to measure ROI. I think it really depends on the use case. But what I can say is a lot of companies are in echo chambers and the difference between top tech AI companies and the rest is becoming increasingly drastic.
I'll give you an example. Prompt engineering has a very different definition in the top tech AI companies versus others. Some people think of it as: how do I guide the model?
Others think of it as zero-shot and few-shot prompts, chain-of-thought prompts, agentic workflows, retrieval-augmented generation, reasoning. It's just that those six minutes, my guess is, maybe in this organization they looked like six minutes a day. In other organizations, it would look like the agent constantly working behind the scenes, because it's just been built differently.
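The prompting techniques listed here differ mainly in how much structure the prompt itself carries. A rough sketch of the first three, using a hypothetical sentiment task (the review texts and labels are invented for illustration, and no model is actually called):

```python
question = "Classify the sentiment of: 'The clinic ran late but the staff were kind.'"

# Zero-shot: the bare task, no examples.
zero_shot = question

# Few-shot: a couple of worked examples prepended so the model can infer the format.
few_shot = (
    "Q: Classify the sentiment of: 'Great visit, in and out in ten minutes.'\nA: positive\n"
    "Q: Classify the sentiment of: 'Waited two hours past my appointment.'\nA: negative\n"
    f"Q: {question}\nA:"
)

# Chain-of-thought: explicitly request intermediate reasoning before the answer.
chain_of_thought = question + " Think step by step, then give a one-word label."

for name, prompt in [("zero-shot", zero_shot),
                     ("few-shot", few_shot),
                     ("chain-of-thought", chain_of_thought)]:
    print(f"--- {name} ---\n{prompt}\n")
```

The point of the panel remark is that the same "six minutes saved" can look very different depending on which of these layers, plus retrieval and agentic orchestration, an organization has actually built.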
I think we have time for one more question if anyone has one. No? Okay.
Well, I’m going to actually leave you guys with a question, which we’ve talked a lot about everything that’s going right. What are the mistakes that organizations are making right now when they’re trying to bring agents into the workforce? If we can just get a quick answer from each of you because we’re very close to being out of time.
So, I look a lot into talent decisions. Talent decisions are shots in the dark. They’re almost all wrong today.
We make really poor talent decisions: hiring, firing, performance management, upskilling, reskilling, internal mobility. It's very wrong. I think in the next few years, we're going to start seeing AI be way better than the best humans, the least biased humans, at making talent decisions.
Going back to what Christoph said, the context that’s going to go into all the agents, you need some context around the skills of the person, if you want to help them effectively. That will come from AI, not humans. I would imagine that in a few years, it might even be illegal to not use an AI for interviewing.
I wouldn’t be surprised because we know it’s worse.
So, I think that you have to focus on experimentation. Too many times, we're deploying AI the way we deployed software, but software was a specific intelligence that did one thing. So the pilot with 10 people in your organization would extrapolate to the experience everybody else has with the software.
But if you put a prompt into ChatGPT and get a bad answer, do you throw away ChatGPT? No, you try a different prompt. And so you don't want to accidentally give up on the entire initiative because your first beachhead was the wrong beachhead.
And so, really, we’re recommending you do 10 use cases in parallel because actually some work, some don’t work, and it’s kind of hard to predict at this stage which ones will and won’t. So, that would be my advice to folks on what we’ve seen.
Okay.
On my side: thinking that all of this is just a magic wand, and that as soon as you deploy, everything will just roll. I think we also need to rethink how we reskill some of our workforce, especially if we consider that agents are going to be co-workers: you have to move some of those individual contributors into learning how to manage some of those, right?
So, how do you help your workforce get to that next layer is going to be my advice for everybody.
So many things can go wrong, but I think there are two that stand out. The first one is that you handle this as a tech challenge. You just use a technology and try to get a tech solution. You will get a tech solution, but you're not going to change much for the better. The second thing that can go wrong is that you lose sight of what makes humans special. And I do feel, exactly in line with what you said earlier, there are things that humans do that are unique and will remain unique for a long period of time: an ability to deal with ambiguity, judgment, empathy, care, relatability, a willingness to motivate others in a way that they do things differently going forward. Those won't disappear, in my mind.
They will only get more important. If you miss that in your organization, I think you're in real trouble.
Enrique, take us home. Yes, I think when we talk about AI, many times we think about the cost implications this is going to have. It will have them, and they will be positive. But it will also help companies move faster, innovate better, and provide a better customer experience. And we need to make sure we look at all these elements, not just the cost impact.
Well, we are out of time. Thank you all so much. This has been a great discussion and I really appreciate it.
Kian Katanforoosh
Speech speed
185 words per minute
Speech length
1214 words
Speech time
391 seconds
AI should be viewed as “agentic workflows” rather than coworkers, as AI excels at tasks but jobs require organizational change
Explanation
Katanforoosh argues that AI is good at specific tasks but jobs consist of hundreds of tasks, and taking on entire jobs requires people and organizations to change, which can take decades. He prefers the term “agentic workflows” over “coworker” because most predictions about jobs disappearing have been wrong.
Evidence
Examples include translators and customer support managers still existing despite predictions, and businesses still operating on pen and paper 20 years after digitalization, similar to how cloud transformation took decades
Major discussion point
Definition and Nature of AI in the Workplace
Topics
Future of work
Disagreed with
– Munjal Shah
– Kate Kallot
– Christoph Schweizer
– Enrique Lores
Disagreed on
Terminology and conceptual framing of AI in the workplace
AI mentorship should focus on setting goals, rewards, and assessments rather than just learning, helping workers understand excellence standards
Explanation
Katanforoosh explains that effective AI mentorship is not primarily about learning content, but about helping workers understand the bar for excellence and how far they are from achieving it. He argues that when people have clear goals and rewards, the learning problem becomes manageable.
Evidence
Stanford students online vs on-campus example – online students don’t lack material access but lack understanding of excellence standards and feedback loops that campus students get through connections at major tech companies
Major discussion point
Workforce Impact and Skills Development
Topics
Future of work | Online education
Organizations need top-down approaches with clear requirements and rewards rather than treating AI learning as optional benefits
Explanation
Katanforoosh argues that companies should make AI learning a business imperative rather than a self-directed benefit. He suggests that workers actually prefer being required to develop AI skills because it signals job security and investment in their future.
Evidence
Comparison to gym membership approach vs requiring an “AI driver license” – workers respond better to requirements with rewards than optional self-directed learning
Major discussion point
Organizational Change and Implementation
Topics
Future of work | Capacity development
Companies must distinguish between durable skills (problem-solving, coding) and perishable skills that change every six months
Explanation
Katanforoosh emphasizes that organizations need different strategies for skills that will remain valuable long-term versus those that become obsolete quickly. He believes coding remains a durable skill that everyone should learn to work with AI tools.
Evidence
Examples of durable skills include problem-solving, critical thinking, communication, and coding, while perishable skills change every six months
Major discussion point
Workforce Impact and Skills Development
Topics
Future of work | Capacity development
Poor talent decisions in hiring and performance management will be improved by AI systems that are less biased than humans
Explanation
Katanforoosh argues that current talent decisions including hiring, firing, performance management, and internal mobility are mostly wrong and poorly executed. He predicts AI will become significantly better than humans at making these decisions and providing context for skills assessment.
Evidence
Claims that talent decisions are “shots in the dark” and “almost all wrong today” across hiring, firing, performance management, upskilling, re-skilling, and internal mobility
Major discussion point
Implementation Mistakes and Best Practices
Topics
Future of work
Munjal Shah
Speech speed
198 words per minute
Speech length
1939 words
Speech time
585 seconds
AI functions as copilots, autopilots, and infinite pilots, with infinite pilots enabling previously impossible tasks at scale
Explanation
Shah categorizes AI into three types: copilots work with humans in the loop, autopilots do tasks autonomously that humans do today, and infinite pilots enable tasks that were never possible before due to cost or scale constraints. He predicts there will be 80 billion AIs serving 8 billion humans in the future.
Evidence
Example of calling 16,000 Medicare Advantage members during a heat wave to provide education and assistance – something impossible without AI due to the scale of human resources required
Major discussion point
Definition and Nature of AI in the Workplace
Topics
Future of work
Disagreed with
– Kian Katanforoosh
– Kate Kallot
– Christoph Schweizer
– Enrique Lores
Disagreed on
Terminology and conceptual framing of AI in the workplace
AI safety requires empirical output testing with human experts rather than just training on quality data
Explanation
Shah argues that training AI on high-quality data alone doesn’t ensure safety, as even models trained on medical literature still make mistakes. He advocates for extensive output testing with domain experts and multiple redundant safety systems rather than relying solely on input data quality.
Evidence
PubMed GPT example – trained only on high-quality medical data but still made up information; hired 7,500 U.S. licensed nurses to test AI outputs and compare with human nurse performance; triply redundant systems and automatic triage protocols
Major discussion point
Safety, Reliability, and Quality Standards
Topics
Future of work
Agreed with
– Enrique Lores
Agreed on
AI safety and reliability require rigorous testing and quality assurance measures
Disagreed with
– Enrique Lores
Disagreed on
Approach to AI safety and reliability standards
AI demonstrates “artificial jagged intelligence” – excelling at complex tasks while failing at seemingly simple ones
Explanation
Shah explains that AI can be surprisingly good at complex tasks like patient persuasion while failing at simple tasks like scheduling. This unpredictable performance pattern requires multiple checking systems to achieve reliability in practical applications.
Evidence
AI successfully convinced 250 out of 1,700 patients to do follow-up cancer screenings after health systems had given up, but GPT-4 has 23% hallucination rate on scheduling tasks and lacks common sense for basic scheduling requests
Major discussion point
Safety, Reliability, and Quality Standards
Topics
Future of work
AI enables abundance thinking, potentially providing one-to-one ratios in healthcare and education previously impossible due to scarcity
Explanation
Shah argues that AI allows us to move from scarcity-based thinking to abundance, enabling ideal staffing ratios like one-to-one in healthcare and education. He suggests this fundamental shift requires rethinking human civilization’s scarcity-based frameworks.
Evidence
Personal story about his mother’s high blood pressure medication non-compliance leading to hospitalization – illustrates how continuous AI monitoring could prevent such outcomes; ideal healthcare is one-to-one staffing but only 5 million nurses for 360 million Americans
Major discussion point
Workforce Impact and Skills Development
Topics
Future of work | Inclusive finance
Organizations should focus on experimentation with multiple parallel use cases rather than deploying AI like traditional software
Explanation
Shah warns against treating AI deployment like traditional software implementation, where a small pilot would predict broader success. Instead, he recommends running multiple use cases simultaneously because AI’s unpredictable performance makes it difficult to predict which applications will succeed.
Evidence
Analogy that getting a bad answer from ChatGPT doesn’t mean you throw it away, you try different prompts; recommends 10 parallel use cases because some work and some don’t in unpredictable ways
Major discussion point
Evolution of AI Applications
Topics
Future of work
Kate Kallot
Speech speed
178 words per minute
Speech length
1315 words
Speech time
441 seconds
AI remains a tool that makes professions more attractive rather than replacing workers, as seen with radiologists
Explanation
Kallot argues that despite predictions of job displacement, AI actually makes professions more appealing and increases worker interest. She contends that AI cannot make value-based decisions or understand specific contexts that humans can provide.
Evidence
Radiologist example – despite AI surpassing human ability in reading radiographs, there are now more radiologists entering the profession because AI made the field more attractive; government workers finding renewed interest in their work with AI tools
Major discussion point
Definition and Nature of AI in the Workplace
Topics
Future of work
Agreed with
– Enrique Lores
Agreed on
AI enhances rather than replaces human workers, making jobs more attractive
Disagreed with
– Kian Katanforoosh
– Munjal Shah
– Christoph Schweizer
– Enrique Lores
Disagreed on
Terminology and conceptual framing of AI in the workplace
Global South faces data scarcity with analog, paper-based systems limiting AI adoption and risking perpetuation of extractive cycles
Explanation
Kallot highlights that emerging economies still operate on unstructured, paper-based systems, creating barriers to AI adoption. Without proper data infrastructure, AI systems risk making confident decisions based on incomplete information, potentially perpetuating harmful cycles.
Evidence
Examples of non-digitized civil registries and land registries; governments operating on paper formats; risk of AI making decisions on incomplete records about citizens and land management
Major discussion point
Data Challenges and Global Equity
Topics
Digital access | Data governance | Sustainable development
AI sovereignty requires minimum viable compute infrastructure and localized data pipelines within national borders
Explanation
Kallot advocates for countries to maintain control over critical citizen data by building sovereign AI systems. This involves understanding minimum compute requirements and developing localized data pipelines that reflect national realities rather than building massive infrastructure.
Evidence
Focus on keeping critical data related to citizens and healthcare within national borders; fine-tuning models with localized data; government as first customer to demonstrate adoption to the broader ecosystem
Major discussion point
Data Challenges and Global Equity
Topics
Data governance | Digital access | Critical internet resources
Emerging markets face a massive job gap with 1.2 billion people entering workforce but only 400 million jobs forecasted
Explanation
Kallot warns of an impending crisis where 800 million people, mostly in Africa, will enter the workforce over the next decade without corresponding job creation. She argues this gap will be exacerbated by digital divides becoming AI job divides without proper infrastructure investment.
Evidence
Specific forecast numbers: 1.2 billion people reaching workforce age in emerging markets vs 400 million jobs to be created, leaving an 800 million gap, mostly in Africa; concerns about the digital divide becoming a compute divide and then an AI jobs divide
Major discussion point
Data Challenges and Global Equity
Topics
Future of work | Digital access | Sustainable development
Companies err by expecting immediate magic solutions without reskilling workforce to manage AI agents as coworkers
Explanation
Kallot warns against treating AI deployment as an automatic solution that will work immediately upon implementation. She emphasizes the need to retrain individual contributors to manage AI agents effectively, moving them into supervisory roles.
Major discussion point
Implementation Mistakes and Best Practices
Topics
Future of work | Capacity development
Agreed with
– Christoph Schweizer
– Enrique Lores
Agreed on
AI implementation requires comprehensive organizational change beyond just technology adoption
Enrique Lores
Speech speed
182 words per minute
Speech length
752 words
Speech time
246 seconds
AI transitions from tool to coworker when it performs complex, autonomous functions like supply chain planning
Explanation
Lores distinguishes between AI as a tool (like helping prepare earnings questions) versus AI as a coworker (like autonomous supply chain planning or self-driving cars). He sees this transition as inevitable and believes AI will become deeply integrated into companies.
Evidence
Personal examples: using AI to prepare earnings questions (tool) vs AI handling supply chain planning (coworker) vs Waymo autonomous driving (co-driver)
Major discussion point
Definition and Nature of AI in the Workplace
Topics
Future of work
Disagreed with
– Kian Katanforoosh
– Munjal Shah
– Kate Kallot
– Christoph Schweizer
Disagreed on
Terminology and conceptual framing of AI in the workplace
AI systems should not be held to higher standards than human workers, as AI often outperforms humans in accuracy
Explanation
Lores argues that organizations shouldn’t demand perfection from AI when human employees also make mistakes regularly. He points out that AI systems in call centers, while imperfect, actually outperform human accuracy and achieve higher customer satisfaction.
Evidence
Call center implementation where AI makes mistakes but is more accurate than human agents and achieves higher customer satisfaction ratings
Major discussion point
Safety, Reliability, and Quality Standards
Topics
Future of work | Consumer protection
Agreed with
– Munjal Shah
Agreed on
AI safety and reliability require rigorous testing and quality assurance measures
Disagreed with
– Munjal Shah
Disagreed on
Approach to AI safety and reliability standards
Future enterprise structure will be radically different from current pyramid organization as AI transforms functions and processes
Explanation
Lores predicts that AI will fundamentally reshape organizational structures beyond the traditional pyramid model. As AI transforms processes, companies will need different types of functions, making the future structure unpredictable but certainly different from current models.
Evidence
Current pyramid structure based on 30-40 year old processes; transformation of processes will eliminate need for current functions and create need for different types of functions
Major discussion point
Workforce Impact and Skills Development
Topics
Future of work
Companies must redesign processes rather than just using AI to assist existing workflows to see significant impact
Explanation
Lores emphasizes that meaningful AI impact requires fundamental process redesign rather than simply adding AI assistance to current workflows. This approach addresses the common problem of only achieving small productivity gains that are difficult to capture.
Major discussion point
Organizational Change and Implementation
Topics
Future of work
Agreed with
– Christoph Schweizer
– Kate Kallot
Agreed on
AI implementation requires comprehensive organizational change beyond just technology adoption
Workers using AI regularly report higher job satisfaction and better ability to meet personal and professional goals
Explanation
Lores shares data from HP’s annual Workforce Relationship Index showing that employees who use AI regularly in their work report higher satisfaction levels than those who don’t. This demonstrates AI’s positive impact on worker experience beyond just productivity metrics.
Evidence
Annual Workforce Relationship Index survey across multiple companies and countries showing higher satisfaction among regular AI users
Major discussion point
Measuring Success and Productivity
Topics
Future of work
Agreed with
– Kate Kallot
Agreed on
AI enhances rather than replaces human workers, making jobs more attractive
AI provides significant time savings that improve employee satisfaction even when ROI is difficult to quantify
Explanation
Lores acknowledges that small time savings from AI (3, 5, or 15 minutes) may not translate to measurable cost reductions, but these improvements significantly impact how employees feel about their work, which provides valuable organizational benefits.
Major discussion point
Measuring Success and Productivity
Topics
Future of work
Agreed with
– Christoph Schweizer
Agreed on
Measuring AI success requires new metrics beyond traditional productivity measures
Organizations should focus on AI’s benefits for speed, innovation, and customer experience, not just cost reduction
Explanation
Lores warns against viewing AI implementation solely through a cost-reduction lens. He argues that AI’s value extends to helping companies move faster, innovate better, and provide superior customer experiences, which are equally important outcomes.
Major discussion point
Implementation Mistakes and Best Practices
Topics
Future of work | Digital business models
Christoph Schweizer
Speech speed
172 words per minute
Speech length
1695 words
Speech time
590 seconds
AI becomes coworker-like when it can dynamically query data, identify patterns, and create deliverables autonomously
Explanation
Schweizer describes how AI evolved from simple database queries to dynamic, augmented information processing that can pull from multiple sources, identify unusual patterns, and create presentation materials. This autonomous, multi-step capability makes it feel like working with a coworker rather than using a tool.
Evidence
BCG example: AI pulling benchmarks from internal databases for pharma company with 27 factories, then adding public analyst reports, identifying 10 unusual patterns, and creating 8-slide presentation for workshop
Major discussion point
Definition and Nature of AI in the Workplace
Topics
Future of work
Disagreed with
– Kian Katanforoosh
– Munjal Shah
– Kate Kallot
– Enrique Lores
Disagreed on
Terminology and conceptual framing of AI in the workplace
Technology is not the bottleneck; success requires changing processes, organization, incentives, skills, and culture with CEO leadership
Explanation
Schweizer argues that AI technology itself works, but organizational success depends on comprehensive change management including processes, incentives, skills, and culture. He emphasizes that this transformation requires CEO-level ownership rather than delegation to lower levels.
Evidence
Experience working with thousands of clients on AI adoption; observation that companies succeed when CEOs take ownership, upskill themselves, make bold investments, and create positive feedback loops
Major discussion point
Organizational Change and Implementation
Topics
Future of work | Capacity development
Agreed with
– Enrique Lores
– Kate Kallot
Agreed on
AI implementation requires comprehensive organizational change beyond just technology adoption
Success should be measured through adoption rates, employee satisfaction, and daily usage patterns rather than traditional productivity metrics
Explanation
Schweizer explains that productivity is difficult to measure precisely, so organizations should focus on whether employees use AI tools habitually (daily vs weekly), and whether they report satisfaction and appreciation for the tools. These metrics predict eventual productivity gains.
Evidence
BCG’s own experience: 77% of employees now say they’re grateful for AI tools, up from much lower levels; observation that weekly usage is distracting but daily usage becomes productive norm
Major discussion point
Measuring Success and Productivity
Topics
Future of work
Agreed with
– Enrique Lores
Agreed on
Measuring AI success requires new metrics beyond traditional productivity measures
AI applications have evolved from basic call center automation to sophisticated functions like pharmaceutical R&D and technical coding
Explanation
Schweizer observes a dramatic shift in AI applications over 12-18 months, moving from simple customer service automation to core value-added functions that determine competitive advantage in industries. This represents a fundamental change in AI’s role within organizations.
Evidence
Examples include pharmaceutical R&D, regulatory and clinical trial management, coding and software maintenance at tech companies, sophisticated underwriting at insurance companies, and marketing content production at consumer goods companies
Major discussion point
Evolution of AI Applications
Topics
Future of work
Major mistakes include treating AI as purely technical challenge and losing sight of unique human capabilities like empathy and judgment
Explanation
Schweizer identifies two critical errors: approaching AI implementation as only a technology problem rather than an organizational transformation, and failing to recognize and preserve uniquely human capabilities that become more important as AI advances.
Evidence
Human capabilities that remain unique and important: dealing with ambiguity, judgment, empathy, care, relatability, and ability to motivate others to change behavior
Major discussion point
Implementation Mistakes and Best Practices
Topics
Future of work | Human rights principles
Mat Honan
Speech speed
207 words per minute
Speech length
1223 words
Speech time
354 seconds
AI normalization is happening rapidly, especially among younger generations who take autonomous systems for granted
Explanation
Honan observes how quickly autonomous vehicles like Waymos have become normalized in society, with his children regularly using them and treating the technology as routine. This demonstrates how rapidly AI systems are being integrated into daily life and accepted as normal.
Evidence
Personal example of his kids riding in Waymo autonomous cars regularly and taking it for granted, describing it as a ‘bizarre world’ where the technology has become normalized
Major discussion point
Definition and Nature of AI in the Workplace
Topics
Future of work
Entry-level job displacement by AI threatens traditional career development pathways
Explanation
Honan raises concerns that AI taking over entry-level tasks and ‘grunt work’ could eliminate the traditional learning opportunities that help people grow within professions. He worries this could disrupt how early career workers develop the skills and experience needed to advance in their fields.
Evidence
Personal example of starting as a fact checker, which helped him learn to be a better reporter and editor; mentions AI making calls and organizing tasks that traditionally served as learning opportunities
Major discussion point
Workforce Impact and Skills Development
Topics
Future of work
Organizations struggle to capture ROI from AI due to fragmented productivity gains rather than substantial time savings
Explanation
Honan highlights a critical implementation challenge where companies achieve small productivity improvements (6 minutes, 36 minutes) that are difficult to monetize or reorganize around. This fragmentation prevents organizations from realizing meaningful returns on their AI investments.
Evidence
Specific mention of ‘6 minute productivity gain or 36 minutes’ versus not saving ‘three months’ – emphasizing the challenge of small, fragmented time savings
Major discussion point
Measuring Success and Productivity
Topics
Future of work
Audience
Speech speed
182 words per minute
Speech length
99 words
Speech time
32 seconds
Companies are not achieving ROI on AI investments because small productivity gains are difficult to capture and monetize
Explanation
The audience member points out a practical challenge where AI provides fragmented time savings that don’t translate into meaningful business value. They emphasize that organizations save ‘bits and pieces of time’ rather than substantial blocks that can be reorganized or monetized effectively.
Evidence
Specific examples of saving 6 minutes or 36 minutes rather than three months; describes this as a ‘prosaic but critical’ question affecting AI adoption
Major discussion point
Measuring Success and Productivity
Topics
Future of work
Agreements
Agreement points
AI implementation requires comprehensive organizational change beyond just technology adoption
Speakers
– Christoph Schweizer
– Enrique Lores
– Kate Kallot
Arguments
Technology is not the bottleneck; success requires changing processes, organization, incentives, skills, and culture with CEO leadership
Companies must redesign processes rather than just using AI to assist existing workflows to see significant impact
Companies err by expecting immediate magic solutions without reskilling workforce to manage AI agents as coworkers
Summary
All three speakers emphasize that successful AI adoption requires fundamental organizational transformation including process redesign, cultural change, and workforce reskilling, rather than simply implementing new technology
Topics
Future of work | Capacity development
AI safety and reliability require rigorous testing and quality assurance measures
Speakers
– Munjal Shah
– Enrique Lores
Arguments
AI safety requires empirical output testing with human experts rather than just training on quality data
AI systems should not be held to higher standards than human workers, as AI often outperforms humans in accuracy
Summary
Both speakers agree that AI systems need proper testing and quality measures, though they approach it from different angles – Shah emphasizes extensive testing protocols while Lores argues for realistic performance expectations
Topics
Future of work | Consumer protection
AI enhances rather than replaces human workers, making jobs more attractive
Speakers
– Kate Kallot
– Enrique Lores
Arguments
AI remains a tool that makes professions more attractive rather than replacing workers, as seen with radiologists
Workers using AI regularly report higher job satisfaction and better ability to meet personal and professional goals
Summary
Both speakers provide evidence that AI implementation leads to increased job satisfaction and professional attractiveness rather than job displacement
Topics
Future of work
Measuring AI success requires new metrics beyond traditional productivity measures
Speakers
– Christoph Schweizer
– Enrique Lores
Arguments
Success should be measured through adoption rates, employee satisfaction, and daily usage patterns rather than traditional productivity metrics
AI provides significant time savings that improve employee satisfaction even when ROI is difficult to quantify
Summary
Both speakers agree that traditional productivity metrics are insufficient for measuring AI success and that employee satisfaction and adoption patterns are better indicators
Topics
Future of work
Similar viewpoints
Both speakers are skeptical of the ‘AI as coworker’ framing and prefer to view AI as tools or workflows that augment human capabilities rather than replace human roles
Speakers
– Kian Katanforoosh
– Kate Kallot
Arguments
AI should be viewed as ‘agentic workflows’ rather than coworkers, as AI excels at tasks but jobs require organizational change
AI remains a tool that makes professions more attractive rather than replacing workers, as seen with radiologists
Topics
Future of work
Both speakers emphasize the need for experimental approaches to AI implementation and recognize that AI applications are becoming more sophisticated and varied
Speakers
– Munjal Shah
– Christoph Schweizer
Arguments
Organizations should focus on experimentation with multiple parallel use cases rather than deploying AI like traditional software
AI applications have evolved from basic call center automation to sophisticated functions like pharmaceutical R&D and technical coding
Topics
Future of work
Both speakers emphasize the critical importance of structured workforce development and training programs for successful AI adoption
Speakers
– Kian Katanforoosh
– Kate Kallot
Arguments
Organizations need top-down approaches with clear requirements and rewards rather than treating AI learning as optional benefits
Companies err by expecting immediate magic solutions without reskilling workforce to manage AI agents as coworkers
Topics
Future of work | Capacity development
Unexpected consensus
AI systems don’t need to be perfect to be valuable
Speakers
– Enrique Lores
– Munjal Shah
Arguments
AI systems should not be held to higher standards than human workers, as AI often outperforms humans in accuracy
AI demonstrates ‘artificial jagged intelligence’ – excelling at complex tasks while failing at seemingly simple ones
Explanation
Despite coming from different perspectives, both speakers agree that AI’s imperfections don’t disqualify its value, which is unexpected given common concerns about AI reliability. This consensus suggests a mature understanding of AI limitations while recognizing practical benefits
Topics
Future of work | Consumer protection
Coding remains a crucial skill in the AI era
Speakers
– Kian Katanforoosh
– Mat Honan
Arguments
Companies must distinguish between durable skills (problem-solving, coding) and perishable skills that change every six months
It’s interesting you’re a coding bull still. People are… it’s falling out of fashion, I’ve heard
Explanation
Despite widespread predictions that AI would make coding obsolete, there’s unexpected consensus that coding remains a durable, essential skill. This challenges common assumptions about AI replacing programming jobs
Topics
Future of work | Capacity development
Small productivity gains from AI are difficult to monetize
Speakers
– Audience
– Mat Honan
– Enrique Lores
Arguments
Companies are not achieving ROI on AI investments because small productivity gains are difficult to capture and monetize
Organizations struggle to capture ROI from AI due to fragmented productivity gains rather than substantial time savings
AI provides significant time savings that improve employee satisfaction even when ROI is difficult to quantify
Explanation
There’s unexpected consensus across speakers and audience that fragmented productivity gains are a real challenge, but also agreement that these small improvements still provide value through employee satisfaction, suggesting a nuanced view of AI ROI
Topics
Future of work
Overall assessment
Summary
The panel showed remarkable consensus on key issues: AI requires comprehensive organizational change beyond technology implementation, success should be measured through employee satisfaction and adoption rather than traditional metrics, AI enhances rather than replaces human workers, and implementation requires experimental approaches with proper training programs
Consensus level
High level of consensus with complementary rather than conflicting viewpoints. The speakers approached AI implementation from different angles but converged on similar conclusions about the need for organizational transformation, human-centered approaches, and realistic expectations. This strong consensus suggests mature thinking about AI adoption challenges and validates practical approaches to workplace AI integration
Differences
Different viewpoints
Terminology and conceptual framing of AI in the workplace
Speakers
– Kian Katanforoosh
– Munjal Shah
– Kate Kallot
– Christoph Schweizer
– Enrique Lores
Arguments
AI should be viewed as “agentic workflows” rather than coworkers, as AI excels at tasks but jobs require organizational change
AI functions as copilots, autopilots, and infinite pilots, with infinite pilots enabling previously impossible tasks at scale
AI remains a tool that makes professions more attractive rather than replacing workers, as seen with radiologists
AI becomes coworker-like when it can dynamically query data, identify patterns, and create deliverables autonomously
AI transitions from tool to coworker when it performs complex, autonomous functions like supply chain planning
Summary
Speakers fundamentally disagree on whether AI should be conceptualized as a ‘coworker’ or remain classified as a ‘tool.’ Katanforoosh and Kallot argue against the coworker terminology, preferring ‘agentic workflows’ and ‘tool’ respectively, while Shah, Schweizer, and Lores embrace the coworker concept with varying frameworks for understanding AI’s autonomous capabilities.
Topics
Future of work
Approach to AI safety and reliability standards
Speakers
– Munjal Shah
– Enrique Lores
Arguments
AI safety requires empirical output testing with human experts rather than just training on quality data
AI systems should not be held to higher standards than human workers, as AI often outperforms humans in accuracy
Summary
Shah advocates for rigorous, expensive safety protocols with multiple redundant systems and extensive human expert testing, while Lores argues for more pragmatic standards that don’t exceed human performance expectations, accepting AI imperfection as long as it outperforms humans.
Topics
Future of work | Consumer protection
Unexpected differences
Coding as a durable skill in the AI era
Speakers
– Kian Katanforoosh
– Mat Honan
Arguments
Companies must distinguish between durable skills (problem-solving, coding) and perishable skills that change every six months
It’s interesting you’re a coding bull still. People are… it’s falling out of fashion, I’ve heard
Explanation
This represents an unexpected disagreement where Katanforoosh strongly advocates for coding as a durable skill everyone should learn, while Honan suggests this view is becoming unfashionable. This disagreement is significant because it touches on fundamental questions about which skills will remain valuable as AI capabilities expand, with implications for workforce development strategies.
Topics
Future of work | Capacity development
Overall assessment
Summary
The panel shows moderate disagreement primarily on conceptual frameworks and implementation approaches rather than fundamental goals. Main areas of disagreement include terminology (coworker vs tool vs agentic workflows), safety standards (rigorous vs pragmatic), and specific skill development priorities.
Disagreement level
The disagreement level is moderate and constructive, with speakers generally building on each other’s ideas rather than directly contradicting them. The disagreements reflect different perspectives and experiences rather than fundamental philosophical divides. This suggests a healthy diversity of approaches to AI implementation in the workplace, which could lead to more robust and varied solutions. The consensus on the need for organizational change and the positive potential of AI provides a strong foundation despite tactical disagreements.
Partial agreements
Similar viewpoints
Both speakers are skeptical of the ‘AI as coworker’ framing and prefer to view AI as tools or workflows that augment human capabilities rather than replace human roles
Speakers
– Kian Katanforoosh
– Kate Kallot
Arguments
AI should be viewed as ‘agentic workflows’ rather than coworkers, as AI excels at tasks but jobs require organizational change
AI remains a tool that makes professions more attractive rather than replacing workers, as seen with radiologists
Topics
Future of work
Both speakers emphasize the need for experimental approaches to AI implementation and recognize that AI applications are becoming more sophisticated and varied
Speakers
– Munjal Shah
– Christoph Schweizer
Arguments
Organizations should focus on experimentation with multiple parallel use cases rather than deploying AI like traditional software
AI applications have evolved from basic call center automation to sophisticated functions like pharmaceutical R&D and technical coding
Topics
Future of work
Both speakers emphasize the critical importance of structured workforce development and training programs for successful AI adoption
Speakers
– Kian Katanforoosh
– Kate Kallot
Arguments
Organizations need top-down approaches with clear requirements and rewards rather than treating AI learning as optional benefits
Companies err by expecting immediate magic solutions without reskilling the workforce to manage AI agents as coworkers
Topics
Future of work | Capacity development
Takeaways
Key takeaways
AI should be viewed as ‘agentic workflows’ rather than coworkers, as it excels at specific tasks but transforming entire jobs requires broader organizational change
AI safety and reliability require empirical output testing with human experts, not just training on quality data, and AI should not be held to higher standards than human workers
Success in AI implementation depends more on organizational change (processes, culture, skills, leadership) than on technology itself, requiring CEO-level ownership
AI adoption should be measured through usage patterns, employee satisfaction, and daily engagement rather than traditional productivity metrics
Global South faces significant challenges with data scarcity, analog systems, and compute infrastructure gaps that risk creating an ‘AI jobs divide’
AI demonstrates ‘artificial jagged intelligence’ – excelling at complex tasks while failing at seemingly simple ones, requiring multiple redundant systems
Future enterprise structures will be radically different from current pyramid organizations as AI transforms functions and processes
AI enables ‘abundance thinking’ – providing previously impossible one-to-one ratios in sectors like healthcare and education
Companies must distinguish between durable skills (problem-solving, coding) and perishable skills that change frequently when reskilling workers
AI applications have evolved from basic automation to sophisticated functions in R&D, technical coding, and strategic decision-making
Resolutions and action items
Organizations should redesign entire processes rather than just using AI to assist existing workflows to achieve significant impact
Companies should implement top-down approaches with clear requirements and rewards for AI adoption rather than treating it as optional
Organizations should experiment with multiple parallel AI use cases rather than deploying AI like traditional software with single pilots
Companies need to reskill the workforce to manage AI agents, moving individual contributors into supervisory roles
CEOs must take personal ownership of AI transformation rather than delegating it down the organization
Global South countries should focus on building minimum viable compute infrastructure and sovereign AI capabilities with government as first customer
Unresolved issues
How to structure future enterprise organizations when AI fundamentally transforms traditional pyramid hierarchies and career progression paths
How to address the 800 million job gap in emerging markets where 1.2 billion people will enter the workforce but only 400 million jobs are forecasted
How to capture ROI from fragmented productivity gains (saving minutes rather than hours) across organizations
What the qualitative career paths will look like for professionals in various fields (healthcare, HR, consulting, technology) in an AI-transformed workplace
How to ensure AI sovereignty and prevent digital divides from becoming compute divides and AI job divides globally
How to maintain human agency and ensure AI systems benefit workers themselves rather than just organizations
How early career workers will gain necessary experience when traditional entry-level ‘grunt work’ is assigned to AI agents
Suggested compromises
Balance AI capabilities with human oversight – use AI for autonomous tasks but maintain human escalation paths for safety-critical situations
Treat AI as both tool and coworker depending on context – tool for simple assistance, coworker for complex autonomous functions
Focus on AI’s multiple benefits (speed, innovation, customer experience) rather than just cost reduction to gain broader organizational acceptance
Implement AI gradually through experimentation while simultaneously investing in workforce reskilling and organizational change
Maintain focus on uniquely human capabilities (empathy, judgment, ambiguity handling) while leveraging AI for tasks it performs better than humans
Thought provoking comments
I’m not a fan of calling AI agents, agents, neither I’m a fan of calling it a coworker… AI being good at a set of tasks is very different than AI taking on an entire job. In fact, most of the predictions on job XYZ is going away over the last three years has been wrong.
Speaker
Kian Katanforoosh
Reason
This comment immediately challenged the fundamental premise of the panel discussion by questioning the terminology and assumptions everyone was using. It introduced the crucial distinction between task-level competence and job-level replacement, grounding the discussion in empirical reality rather than theoretical speculation.
Impact
This opening comment set a more nuanced, realistic tone for the entire discussion. It forced other panelists to be more precise in their language and examples, moving away from broad generalizations about AI ‘coworkers’ to specific use cases and implementations.
There will be eight billion humans or whatever number we’ll have then, but there will be 80 billion AIs… I think by and large, we’re gonna find all of these use cases of things we never thought to do until we had an infinite supply at low cost.
Speaker
Munjal Shah
Reason
This comment introduced the transformative concept of abundance economics – moving from scarcity-based thinking to abundance-based applications. The 16,000 heat wave calls example demonstrated how AI enables entirely new categories of work rather than just replacing existing work.
Impact
This shifted the conversation from a zero-sum replacement narrative to an expansive creation narrative. It influenced later discussions about rethinking organizational structures and career paths, and provided a framework for understanding AI’s potential beyond current job categories.
There’s a misnomer out there that if you train your AI on the right data, then it’s safe… what we realized was the first thing you have to do to ensure safety and kind of responsible AI is you have to do something that almost no horizontal model does, which is output testing.
Speaker
Munjal Shah
Reason
This comment challenged a widespread assumption about AI safety and introduced a practical, empirical approach to ensuring reliability. The concept of hiring 7,500 nurses to test outputs demonstrated serious commitment to safety validation.
Impact
This comment elevated the discussion’s technical sophistication and introduced concrete methodologies for AI deployment. It influenced Enrique’s later point about not being more demanding of AI than humans, and established a framework for thinking about AI reliability in high-stakes environments.
We need to be careful not to be more demanding with the AI co-workers or with the models than what we are with our own employees because nobody’s perfect, everybody makes mistakes… they are more accurate than the humans that we have had for many years providing that work.
Speaker
Enrique Lores
Reason
This comment provided a crucial reality check about performance expectations and introduced the concept of comparative rather than absolute standards for AI performance. It challenged perfectionist thinking about AI deployment.
Impact
This comment helped balance the discussion between safety concerns and practical deployment. It provided a framework for realistic AI adoption and influenced the later discussion about measuring productivity and success in AI implementations.
The technology is not the bottleneck. The models work… They will succeed if they really change how their people work. And to do that, you need to change processes, organization, incentives, skills, leadership, culture… make this a CEO problem and don’t delegate it somewhere down in the organization.
Speaker
Christoph Schweizer
Reason
This comment fundamentally reframed the AI adoption challenge from a technical problem to an organizational transformation problem. It identified the real barriers to AI success and provided a strategic framework for implementation.
Impact
This was a major turning point that shifted the entire panel’s focus from technical capabilities to change management. It influenced subsequent discussions about skills, training, and organizational structure, and established the human/organizational dimension as the critical success factor.
If I don’t actually provide them the infrastructure, the compute capacity, if I don’t provide them access, if I let a digital divide, which is compounded by a data divide, become a compute divide, it’s going to run right towards becoming an AI jobs divide.
Speaker
Kate Kallot
Reason
This comment introduced global equity concerns and the concept of cascading digital divides. It challenged the panel’s implicit assumption of universal access and highlighted how AI could exacerbate global inequalities if not thoughtfully deployed.
Impact
This comment brought a crucial global perspective to what had been a largely developed-world discussion. It introduced the concept of AI sovereignty and influenced the conversation about infrastructure requirements and equitable access to AI benefits.
Karpathy has a very good phrase he coined called AJI, artificial jagged intelligence… You’d be so surprised at the things that AI does amazing and you’d be like, are you an idiot? How come you couldn’t do that?
Speaker
Munjal Shah
Reason
This comment introduced a crucial concept about AI’s unpredictable capability profile – being simultaneously superhuman and subhuman at different tasks. The scheduling vs. persuasion example perfectly illustrated this paradox.
Impact
This comment provided a framework for understanding AI limitations and capabilities that influenced the discussion about training, deployment strategies, and realistic expectations. It helped explain why AI adoption is more complex than simple task replacement.
Overall assessment
These key comments fundamentally shaped the discussion by challenging initial assumptions, introducing new frameworks for thinking about AI in the workplace, and elevating the conversation’s sophistication. The discussion evolved from simple tool vs. coworker distinctions to nuanced considerations of organizational transformation, global equity, safety methodologies, and abundance economics. The most impactful comments consistently moved the conversation away from theoretical speculation toward practical implementation challenges and real-world complexities. The interplay between these insights created a comprehensive view of AI workplace integration that balanced optimism with realism, technical capabilities with human factors, and local implementation with global implications.
Follow-up questions
What are the 10 most unusual and remarkable patterns in manufacturing data across multiple facilities?
Speaker
Christoph Schweizer
Explanation
This represents a specific AI-driven analytical capability that BCG is exploring to identify non-obvious insights from large datasets that human analysts might miss
How do we measure productivity improvements from AI implementation in knowledge work?
Speaker
Mat Honan and Christoph Schweizer
Explanation
Both acknowledged that measuring productivity in functions like IT and marketing is elusive, and traditional metrics may not capture the full impact of AI tools
What will be the organizational structure of companies that have been fully transformed by AI in 5-15 years?
Speaker
Enrique Lores
Explanation
He noted that current pyramid-like organizational structures based on traditional processes will likely be radically different, but the exact form is unpredictable
How do we solve the infrastructure and compute capacity challenges in emerging markets before AI adoption can be meaningful?
Speaker
Kate Kallot
Explanation
She highlighted that many Global South countries still operate on paper-based systems and lack the digital infrastructure necessary for effective AI implementation
What is the minimum viable compute infrastructure needed for AI sovereignty in developing countries?
Speaker
Kate Kallot
Explanation
She suggested there are alternatives to building massive compute facilities, but the specific requirements and approaches need further exploration
How do we address the 800 million job gap in emerging markets where 1.2 billion people will enter the workforce but only 400 million jobs are forecasted?
Speaker
Kate Kallot
Explanation
This represents a massive demographic and economic challenge that requires research into how AI and technology can create new opportunities rather than just displace existing ones
How do we design effective career progression paths when entry-level jobs are being automated?
Speaker
Mat Honan (discussed by multiple panelists)
Explanation
Traditional career ladders may be disrupted, requiring new models for how people develop skills and advance professionally
How do companies capture ROI from fragmented productivity gains (6 minutes here, 36 minutes there) rather than large time savings?
Speaker
Audience member
Explanation
Many organizations struggle to realize meaningful returns when AI provides small incremental improvements rather than dramatic time savings
What are the qualitative career paths for professionals in various fields (healthcare, HR, consulting, sales, technology) in an AI-transformed world?
Speaker
Christoph Schweizer
Explanation
He emphasized this as a bigger question than quantitative job displacement, focusing on how professional development and career trajectories will evolve
How do we prevent AI from perpetuating extractive cycles and making confident decisions on incomplete data in developing countries?
Speaker
Kate Kallot
Explanation
This addresses the risk of AI systems making authoritative-seeming decisions based on inadequate or biased data from regions with limited digital infrastructure
Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.
Related event

World Economic Forum Annual Meeting 2026 at Davos
19 Jan 2026 08:00h - 23 Jan 2026 18:00h
Davos, Switzerland
