AI for Social Empowerment: Driving Change and Inclusion

20 Feb 2026 11:00h - 12:00h


Session at a glance: Summary, keypoints, and speakers overview

Summary

The panel examined how artificial intelligence is reshaping labour markets and whether societies can afford to wait for clearer evidence before acting [1-2][14-16]. Sabina argued that firms publicly deny AI-driven job disruptions but privately acknowledge 30-40 % productivity gains that translate into workforce cuts [5-8]. She added that AI intensifies inequality and concentrates capital, citing the massive market cap of firms like NVIDIA while labour’s share of income shrinks [10-12]. Anurag questioned the source of AI investment returns, suggesting they will come either from productivity-driven labour reductions or from new products and services [32-38]. Sandhya responded that AI is prompting a redesign of roles, with coding a minor component; junior developers are becoming “AI managers” who oversee design, architecture and security rather than being eliminated [74-88]. She noted that sectors such as marketing, finance and healthcare still require human strategic oversight, and AI can boost efficiency and decision-making in these areas [93-106]. Julie emphasized that effective governance depends on strong institutions, labour research and human-centred co-creation, pointing to AI4D’s work collecting household and firm data to track AI’s real-world impacts [129-138]. She introduced the Global Index on Responsible AI, a rights-based dataset covering 138 countries that helps policymakers assess labour-related risks and design evidence-based regulations [233-242]. Sabina warned that layoffs are already occurring, that focusing solely on job counts ignores quality issues and gig-economy algorithmic management, and that broader precarity is rising [152-166]. She called for urgent policy measures spanning competition, antitrust, tax, labour law, social protection and skill development, especially in India where formal employment is under 10 % of the workforce [197-205][220-222].
Anurag disclosed a conflict of interest, noting his foundation’s 70 % stake in Wipro, which highlights the tension between tech growth and protecting vulnerable populations [254-262]. Sandhya argued that waiting is not an option; proactive policies, platform regulation and workforce retraining are needed, though panic must be avoided [355-363][366-367]. The discussion concluded that AI poses risks comparable to nuclear technology, yet roles requiring human wisdom, empathy and care are likely to persist, making coordinated, human-centred governance essential [406-409].


Keypoints

Major discussion points


AI will generate large productivity gains that are likely to translate into significant workforce reductions, and the scale of these impacts is already visible.


Sabina notes that companies privately admit “30 % to 40 % time-saving… which then translates into significant workforce cuts” [8-9] and points to “plenty of empirical evidence” of AI-driven surveillance and inequality [10-12]. She later stresses that “companies are laying off thousands of workers already” [152-154] and that efficiency gains “always lead to layoffs” [148-149].


The technology sector argues that AI will reshape rather than eliminate many jobs, creating new roles that focus on oversight, creativity, and human-centric skills.


Sandhya explains that coding can be handed to an AI agent, but “the success of this code… depends on a human to oversee design, architecture, security” [85-87]; junior developers become “managers of AI” [86-88]. She also highlights that in marketing, finance, healthcare, etc., AI handles routine processing while “strategic thinking… remains with humans” [94-99][104-106].


Effective governance requires strong institutions, data-driven research, and a human-centred, rights-based approach to AI.


Julie emphasizes that without “strong regulatory institutions, labor institutions, strong research ecosystems” governments cannot protect workers [130-132]. She describes the AI4D program’s work on “co-creating… with workers, communities, employers” [133-136] and the Global Index on Responsible AI that provides “country-level comparable data” on labor protection [237-242].


India (and the broader Global South) faces acute vulnerability because formal jobs are scarce and the informal sector is expanding, making AI-driven disruption especially risky.


Sabina corrects the notion that most Indian workers hold formal jobs, noting that more than 90 % of India’s workforce is in informal employment [197-199], and warns that “the precaritisation of the labor market… formal jobs are being gotten rid of” [208-213]. She calls for urgent action on competition policy, tax, labor law, and universal social protection [320-327].


AI is already affecting education, with concerns about cognitive decline and the need to redesign assessment and learning.


Anurag and Sabina discuss emerging “cognitive decline” among youth [313-316] and the shift back to “paper-and-pencil… in-class tests” as a response to AI-driven outsourcing of thinking [384-390]. This underscores the broader societal implications beyond the labor market.


Overall purpose / goal of the discussion


The panel convened to assess how the rapid diffusion of generative AI is reshaping labour markets, to contrast divergent views from the tech industry, labour researchers, and policy experts, and to identify concrete policy, institutional, and educational actions needed to mitigate risks, protect workers, and harness AI’s benefits, especially for vulnerable economies such as India’s.


Overall tone and its evolution


– The conversation opens with a cautious-alarmist tone, highlighting unknown impacts and urgent risks (Sabina’s “the impact is still unfolding” [1-2]; “we need to act now” [16-21]).


– It shifts to a more optimistic, industry-focused tone when Sandhya describes how AI creates new roles and augments existing work (e.g., “junior developer becomes a manager of AI” [86-88]; “strategic thinking remains with humans” [94-99]).


– The tone then becomes balanced and solution-oriented, as Julie stresses the need for strong institutions, evidence-based regulation, and collaborative governance (e.g., “without strong institutions… difficult” [130-132]; “Global Index… helps policymakers” [237-242]).


– Finally, the discussion adopts an urgent, call-to-action tone, with Sabina and the others urging immediate policy reforms, social protection, and education redesign (e.g., “we don’t have the luxury to wait” [173-176]; “we must act now” [355-363]).


Overall, the dialogue moves from warning, through optimism, to a pragmatic consensus that immediate, coordinated action is essential to steer AI’s labour impact toward inclusive outcomes.


Speakers

Julie Delahanty


– Expertise: Development research, AI policy, labor market impacts of AI


– Role/Title: President, IDRC Canada (International Development Research Centre) [S1][S2]


Sandhya Ramachandran Arun


– Expertise: Technology and AI implementation, digital transformation, consulting services


– Role/Title: Chief Technology Officer, Wipro Limited [S3][S4]


Anurag Behar


– Expertise: Philanthropy, education, social impact, AI governance


– Role/Title: Chief Executive Officer, Azim Premji Foundation; Moderator/Chair of the panel [S5][S6]


Sabina Dewan


– Expertise: Labor market research, AI’s impact on jobs and social equity


– Role/Title: Researcher, Just Jobs Network (labor market expert) [S7][S8]


Additional speakers:


– None


Full session report: Comprehensive analysis and detailed insights

Opening – Sabina Dewan – The panel began with Sabina warning that the impact of artificial intelligence on employment is “still unfolding” and that societies cannot wait for clearer evidence before acting [1-2][15-21]. She cited private firm reports from India of “30 % to 40 % time-saving… which then translates into significant workforce cuts” [8-9] and linked these efficiency gains to broader harms such as AI-enabled surveillance, biased decision-making and AI systems that are “grossly exacerbating inequality” [10-11]. The concentration of capital in a few tech giants – exemplified by NVIDIA’s “$5 trillion market cap” [11-12] – is shrinking the labour share of income and raising the risk of large-scale job losses. Sabina also highlighted recent big-tech lay-offs, noting that while firms cite macro-economic shocks, AI represents “a really big disruption that comes on top of all the other disruptions” [152-158].


Panel introduction – Anurag Behar – Anurag, CEO of the Azim Premji Foundation, framed the discussion around the economics of AI investment, describing the AI summit as “the 42nd kilometre of a marathon” and stressing that massive capital flowing into AI must be justified by monetisation [32-34]. He identified two possible sources of return – productivity-driven labour reduction or the creation of new products and services – and asked Sandhya to explain which direction the technology is heading and which jobs are likely to be displaced versus created [35-44].


Sandhya Ramachandran Arun’s view of the technology trajectory – Sandhya described AI as a “very huge impact… as a disruptor” and explained that firms are revisiting role design, hiring criteria (learnability, communication, adaptability) and reskilling programmes [48-51]. She illustrated the evolution of technology with a horse-carriage-to-motor-vehicle analogy, arguing that just as societies governed the transition from horse-drawn carriages to automobiles, they must now govern AI’s rapid evolution [350-352]. Wipro’s experience shows that most work remains consultative, limiting large-scale displacement, and that AI solutions have been in use internally for over a year [60-63].


Coding and IT jobs – Sandhya noted that coding is only a small slice of software engineering; while AI can generate code, “the success of this code… depends on a human to oversee design, architecture, security” [85-86]. Consequently, junior developers become “managers of AI” [86-88]. Similar patterns appear in other sectors: marketing (AI creates content, humans retain strategy) [93-96]; finance (AI processes data, humans provide wisdom) [97-98]; healthcare (AI augments clinicians) [104-106].


Julie Delahanty on governance – Julie argued that effective AI-labour governance requires “strong regulatory institutions, labour institutions, strong research ecosystems” [130-132]. She highlighted the AI4D programme’s co-creation model with workers, communities and employers [133-136] and its data-collection effort in sub-Saharan Africa that gathers household, firm-level and worker information to inform skill-development, social-protection and labour-rights policies [137-138]. She also introduced the Global Index on Responsible AI, a rights-based dataset covering 138 countries with a dedicated focus on “labour protection and the right to work” [237-242], which provides evidence for concrete labour-market interventions despite the current lack of standardised regulation [241-246].


Sabina on empirical evidence – Sabina returned to the data, noting that companies are already laying off thousands of workers [152-154] and that AI adds a layer of disruption to existing macro-economic shocks [155-158]. She warned that the gig-economy’s algorithmic management leaves workers with “no mechanism for redressal” [161-164] and corrected the misconception that most Indian workers hold formal jobs, noting that more than 90 % of India’s workforce is in informal employment [197-199]. She emphasized that loss of even a small share of the scarce formal jobs would have “cascading effects across the economy” [214-215] and highlighted the growing “precaritisation” of the labour market, with many classified as self-employed contractors lacking health insurance or other safety nets [208-210][212-213].


Policy recommendations (Sabina) – Sabina called for urgent, coordinated action on competition policy, antitrust, transaction taxes, wealth tax, corporate tax, labour-law reform, universal social protection and massive investment in skill systems [173-176][320-334]. She noted that only 4.1 % of workers report having received formal skill training, underscoring the need for rapid upskilling [350-351].


Anurag’s conflict-of-interest disclosure – He disclosed that the Azim Premji Foundation owns about 70 % of Wipro [255-256] and reiterated his mandate to “take care of the most vulnerable people in the country” [258-260].


Sandhya on human wisdom and platform-embedded policy – Sandhya stressed that technology cycles (investment → scaling → sailing) demand efficiency but also creativity, vision and foresight. She reiterated that regulation must be built directly into digital platforms, not only at the national level [361-364]. She also noted that a portion of Wipro’s profits funds the KMG Foundation’s welfare work [90-92].


Anurag on education – Drawing on his foundation’s role in three universities and more than 100 000 teachers [300-302], Anurag warned that AI is “attacking the very foundation of education” by encouraging both teachers and students to outsource thinking [380-382]. He cited emerging research showing increases in depression, anxiety and cognitive decline among youth, which could reduce work capacity and make them more replaceable by AI [200-202][313-316]. In response, his institutions have reverted to “paper-and-pencil, in-class tests” to preserve assessment integrity [384-390]. He likened AI’s societal reach to nuclear technology, arguing it may be even more consequential because it permeates everyday life [393-396].


Julie on the Future of Work project – Julie highlighted IDRC’s separate “Future of Work” project, which studies how work itself is being redesigned rather than merely counting job losses [341-346].


Consolidated urgency – Across the panel, Sabina, Sandhya and Julie repeatedly emphasized that “watching and waiting is certainly not an option” [355-358] and that immediate, evidence-based policy action is required at all levels [359-364][355-363].


Consensus & closing – The discussion concluded that AI will both displace and create jobs, but human-centred skills such as creativity, empathy and strategic oversight will remain essential. Coordinated, multi-level governance, encompassing competition, tax, labour-law reform, universal protection, platform-embedded regulation and robust skill-development programmes, is needed to steer AI’s benefits toward inclusive outcomes while mitigating its risks [355-363][320-334].


Session transcript: Complete transcript of the session
Sabina Dewan

say, you know, it’s yet to unfold. We don’t know what the impact is and it’s yet to unfold. I believe that that contention is actually largely untrue. And let me tell you why. When you talk to companies privately, publicly they will not own up to the potential job disruptions as a result of AI. And partly that is because many of the big companies actually are known to be formal job creators, right? And that is a very important part of their image and their contribution to economies and societies. But when you talk to them privately, in India especially, our research shows that they will own up to anywhere between 30 % to 40 % time saving, right, productivity gains, which then translates into significant workforce cuts.

We already have plenty of empirical evidence that suggests that… that AI systems are enabling surveillance, they’re influencing decisions about who gets work, when, and what entitlements people have access to. We also know that AI systems are grossly exacerbating inequality. If you just look at the market caps of some of the top technology companies, you know, NVIDIA’s $5 trillion market cap, right? So there’s a massive accumulation of capital that really, you know, capital share is growing and labor share of income is getting smaller and smaller. So I guess, you know, this discussion that talks about social empowerment, a key question in that is the question of the impact on jobs. And the question that I, you know, put out there is, so if you even buy the idea that we don’t know, that we don’t know what the impact is, what the impact is going to be.

Can we afford to just wait, right? Or do we need to take every action possible in terms of regulations, in terms of building social institutions, in terms of really working to build systems that can manage this inevitable evolution of AI, whether we like it or not. The last thing I’ll say is just, you know, yes, there have been technologies before. Yes, they’ve had their own forms of inclusion and exclusion. But at the end of the day, this is the first time where you have the very pioneers of that technology, Geoffrey Hinton, Stuart Russell, Dario Amodei, the very pioneers of the technology themselves are ringing alarm bells. And would we not be wise to heed them?

So with that, I hope, provocative context setting, I am really grateful, on behalf of the Just Jobs Network, again, with support from IDRC, to welcome our really esteemed panelists. Mr. Anurag Behar, who is the chief executive officer of the Azim Premji Foundation, has very graciously agreed to chair this conversation, moderate the discussion. We have Dr. Julie Delahanty, who is the president of IDRC Canada. Thank you, Julie. And Ms. Sandhya Ramachandran Arun, who is the chief technology officer of Wipro Limited. Thank you so much for being here, Sandhya. So, Anurag, over to you.

Anurag Behar

Thank you. Thank you, Sabina. Good evening, everybody. Thank you. There’s so much investment going into AI. Why is so much investment going into AI? We are in the fifth day of the AI summit. So this is like the 42nd kilometer of a marathon. Right? At this stage, such investment has to be justified by some monetization. And where is that monetization going to come from? It’s either going to come from productivity, which comes from labor reduction, or it is going to come from new products and services, or a combination of both. That’s where it’s going to come from. Right? We will talk more about that. At this moment, my job is easy.

I’m going to just ask Sandhya, because she’s the representative of the technology world here really, that which way is this technology headed? And in very simple terms, what is she seeing its implications on jobs? I mean, what kind of jobs are going to get displaced, destroyed? And what kind of jobs are going to get created? and what’s the underlying dynamic because of which these jobs will be created and the jobs will be destroyed. So how does she see it in the world of technology? Let’s start with that.

Sandhya Ramachandran Arun

Sure, thank you so much. Thanks, Anurag, for the question. So as far as the tech industry is concerned, we are really witnessing a very huge impact of the AI evolution as a disruptor. We’ve had to revisit how job roles are created. We’ve had to revisit how talent has to be reskilled. And we have also revisited the responsibility, not just in terms of security, safety, but also in terms of what does it mean to our colleagues and our hiring. I think initially there was a huge amount of fear that we would not hire from colleges, which has now been dispelled, because Wipro continues to hire from colleges, and so do our competitors. But the criteria for hiring have shifted to a more nuanced, a more calibrated way of looking at learnability, looking at whether a person communicates technical ideas well, looking at whether a person is adaptable.

Because AI is a technology that is changing as we speak. So no one can claim to be an expert in AI and remain that way for the next five days, possibly, because things are changing every day. With regard to our own talent, we have created role personas, and we have created very specific learning modules on how the role changes with AI. And everybody from the board to the CEO down to the youngest employee is going through a very calibrated learning process. And there is also a very… calibrated way in which services and ways of working are changing. So to that extent, we see a change. We are not seeing a displacement because most of the work that we do is consultative in nature, in spite of the market valuation erosion that we saw some time back because of news from Anthropic and Palantir.

The insiders in the technology world were already aware of the transformative nature of these solutions coming up. And we have already been using these solutions significantly for over a year. So from a market sentiment point of view, possibly there was an erosion, but from a technology impact perspective, we have been bracing ourselves for the change and our journey of transformation continues.

Anurag Behar

I just have a follow-up on that, and then I’ll move to Julie. I’ll put it very, I mean, let’s say, a very, very simple, commonsensical question. Which is that, we are hearing about these tools where coding has become so much easier, right? So, and this is not just about Wipro, it’s about the IT industry in general. So if coding is becoming so much easier, and 50 % or 70 % of coding can be done by these AI tools, then isn’t it inevitable that IT sector jobs will be lost? Or if there’s business or volume growth, much less hiring will happen. So that’s part one to my question. Part two is, if you move away from the IT world, and if you go to let’s say design and marketing, or, I mean, let’s say my world of the academy, the world of research, so many of research assistants and those of you who have used research assistants or work with research assistants, so much of that job is being done easily by AI.

So part one of my question, if coding is becoming so much more efficient, isn’t it inevitable jobs will be lost, or so much hiring will not happen, whichever way? And aside from that, in the outside world, in other industries, what is it that you’re seeing?

Sandhya Ramachandran Arun

Sure. Let me just address the coding part of it. I think for over 15 years, the industry has been trying to explain to the outside world and as well as to the talent aspiring for careers with us that we do not have coding roles primarily. Coding is a very small task in what a software engineer does or a software developer does. There is the need to understand business outcomes. There’s a need to understand customer experience. There’s a need to understand architecture and what is a well-engineered code, right? So this is not new today. This has been in existence. I mean, I’ve been doing digital transformation for the last 15 years, and we’ve been trying to change how the world thinks about these roles.

Yes, the day is here when coding can be completely handed off to an AI agent. And that is indeed a fact, right? But the fact that supports the success of this code in business is really the ability to have a human oversee the design, the engineering, the architecture, the security, as well as delegating the coding work to an agent. So the role of a junior developer really becomes that of a little manager of AI, as opposed to saying, you’re displacing my job. The person’s actually going up if the person really is aware and aligns to what the organization needs in terms of figuring out what is required. And those are the trainings that are happening.

That’s what’s happening in terms of selection. We now have COEs inside engineering colleges where we are talking to universities about this as well. And what about other industries, what are you seeing? So, other industries we work with, there is a variation. So if you think about it, marketing, there’s a lot of work that gets offloaded. The strategy, the planning, the oversight on execution, the ROI on marketing still remains a strategic thinking job that remains with humans. But you can generate a lot of good quality visual, audio, and video content using AI today. And probably it’s making marketing a whole lot more efficient. Now, if you take finance, for example, again, a lot of processing gets taken over by AI, but it still needs a human to bring in wisdom in terms of how the data gets interpreted, how decisions are being made, and also to make sure that the AI aligns to human values in some sense.

So those kind of changes are happening in these functions. Industry-wise, there is a lot happening that is positive, I would say, in, say, healthcare, for example, even in banking, for example, where we are able to fight financial crimes a whole lot better. In healthcare, we are augmenting technicians, clinicians, and doctors with more intelligent input for decision-making. And while AI can make the decision, you don’t allow it to make

Anurag Behar

So, Sandhya, just put a pin on something that you said, and I’ll come back in the second round. You used the word human and wisdom. So just put a pin on that, and I’m going to come back to that in my second round. Julie, if Sandhya were less optimistic than she is, she wouldn’t be representing the tech world, you know. So one should expect that she’s as optimistic as she is. But what I wanted to ask you was that, you know, eventually, and, you know, from your vantage point, you know, you’re seeing how governments are dealing with this evolving situation, and not just on AI safety and, you know, all the other things, but particularly on labor markets.

So how can governments and institutions govern AI responsibly, such that any disruption in labor markets is sort of minimized or handled well, or the transition happens well? So let’s assume this picture that Sandhya has painted, that, of course, there is some disruption going on, like she talked about in marketing and advertising. So some people are going to lose jobs there. So what should government institutions do? How does one govern this situation, such that the benefits are maximized? And I’m talking particularly about labor markets, not the other stuff, while harms are minimized.

Julie Delahanty

Yeah, thank you so much. I’m going to answer that question, but the last question just made me think about two things. One was, you know, I’m old enough to remember when computers first came around in the 70s and, you know, what we thought would happen with computers and the job losses that we anticipated. And, of course, we did lose jobs. There was a lot of labor disruption related to, you know, typing pools and different kinds of ways. But at the time, even home computers, nobody could even fathom what you would do with a home computer. The conversation then was that home computers would be used to develop recipes and that you’d have recipes because homes were only where homemakers were.

People couldn’t even, there’s such gendered ideas that people just could not understand what you would do with a home computer. So I think in the same way, some of what… is going to happen with AI in the labor market, we may not be able to anticipate just yet. So just as a reminder of where we came from with other important technologies. But when it comes to governance, I think the important issue is that it’s not really only about the technology, it’s really about institutions, it’s about workers, and it’s also about research. So when it comes to institutions, really without the kind of strong institutions in countries, regulatory institutions, labor institutions, strong research ecosystems that are able to really understand what’s happening in the labor market, I think it’s very difficult to end up having a strong regulation of what’s happening in the labor market.

So just those institutions are incredibly important to understanding where job losses might be, where biases might happen, and really investing in people and institutions is something that has to go hand in hand with our thinking around technologies. Another area is around making sure that when we’re thinking about new technologies, that we’re making it very human-centric. And one of the things that the AI4D program does when we think, what do we mean by human-centric? It’s really about making sure that we’re co-creating new technologies with the co-creation of workers, of communities, of employers, so that we can understand how to enhance job quality, how to enhance productivity, rather than increasing inequalities or changing who benefits.

So really understanding who benefits, who’s going to face the kinds of disruptions is really important so that we’re not thinking about that as an afterthought. That we’re really shaping AI systems using that knowledge. And similarly… I think the importance of research in, and I’ll just give an example from our AI4D work is we’ve done a big research program with partners in sub-Saharan Africa that’s looking at, that’s collecting household data, firm level data, worker data, to understand what the real world impacts of AI are on labor markets. And it’s that kind of tracking, who’s going to benefit, understanding who’s going to be displaced, and how the tasks and skills are really changing that’s going to allow governments to better design and think about what kind of skills development they need, what kind of social protections they need, and how to support labor rights.

So really, I think growing AI responsibly doesn’t mean avoiding innovation or avoiding change, but it’s really about shaping AI so that it, it does strengthen labor markets and supports workers and creates more opportunities.

Anurag Behar

Thanks, Julie. Thank you so much. I’ll move to Sabina. Sabina, I mean, since you are the labor market expert here amongst us and the researcher, what is it that you see? I mean, there’s so much news, we have had these five days of this grand summit. What is really going on? What do we understand, and what don’t we understand, in the context of the impact of AI on jobs? How do you stack up?

Sabina Dewan

So, just a little tongue-in-cheek: if we went back to the 1600s and asked ChatGPT then if Galileo was correct, it would have said no way, right? So this technology, you know, for all the possibilities that it brings notwithstanding, it is not just a technology. We can’t just look at AI as machine learning, large language models. It is a system, it is an instrument that is being utilized for social, political, and economic engineering. And my job is to look at the impact of that in labor markets. So if we limit ourselves just to the question of how many jobs will be lost, how many jobs will be gained, that’s A, not even an appropriate question.

Two, I agree with my fellow panelists that we don’t necessarily know what sort of new possibilities there might be. But what we do know, what we already see, is also something that Sandhya talked about, which is the efficiency gains. And any time there are efficiency gains, there are layoffs. And please, you do the research, right? Like, I do my job. But look at the newspapers. Companies are laying off thousands of workers already. All the big tech companies have in recent years been laying off workers. Now, sure, they can say that this is a confluence of many factors. It’s not just AI, and most of them will not just ascribe it to AI. They might ascribe it to macroeconomic conditions, to the confluence of various other forces like the pandemic or trade shocks, all of which is true.

But AI is one really big disruption that comes on top of all the other disruptions, and there’s already plenty of evidence that is suggesting that these disruptions are not just changing the quantity of jobs in terms of how many companies are already laying off workers. Again, I mean, we’ve also heard projections from the tech companies themselves, right, of what the possible disruptions and layoffs are going to be. But we also already have evidence of people being laid off. But then on top of that, I would say let’s look beyond just how many jobs are lost and how many jobs are gained to actually look at, I mean, take the gig economy, for example, and algorithmic management of gig workers.

That is a labor market issue. If a gig worker is wronged, they just get kicked off the platform. There's no mechanism for redressal because it's an algorithm that's managing the worker. So who do you talk to? I mean, I can go on and on. Now, we might be separating out platforms from AI, but actually the algorithms are AI, and they're embedded in a platform economy that is increasingly becoming the architecture for transactions, and it's deeply troubling. And then the last thing I'll say is: in terms of quantity of jobs, we are already seeing evidence of layoffs, right?

It’s just that people aren’t necessarily able to pinpoint and ascribe it to AI. That’s point number one. Two, we need to go beyond the question of quantity of jobs and also look at the impact of this technology on quality of jobs. And third, we need to really deeply think about, again, to Julie’s point, the architectures that can help mitigate some of the potential adverse effects of this technology, both on the quantity and the quality of jobs. And we don’t have the luxury to sit and wait and say, hey, let’s get the empirical evidence and then we’ll figure out what to do. That will be way too late, right? So what do we need? We need countries to think about competition policy.

We need to look very closely at tax policy. We need to look very closely at how labor laws need to change. We need to look at social protection systems. We need to look at skill systems, everything that Julie just mentioned, right? But we have to start from an urgency that this is having a huge impact already. It is likely to be even bigger, and we don't have the luxury of time to just sit back and say, hey, we need more empirical evidence before we figure out how to mitigate the negative or potentially negative circumstances. So that is what I think is really, really urgent: that everyone get on that bandwagon and say we need to create these systems, and ask for them, and do it in our work and in our advocacy.

Anurag Behar

Yeah, thank you. I'll just follow up on that. And Julie, please pardon me for saying this. I'm saying this tongue in cheek, and all my friends and colleagues here who are not from India, please pardon me for what I'm going to say. So, you know, we Indians, why should we care about all this? And the reason I'm saying that is because only about 9 or 10 % of our employment is in the formal sector. So even if there is huge disruption in labour markets, maybe 2 % of these people are going to lose their jobs, right? So why should we care about all this stuff? Do you have any comments?

Sabina Dewan

I do. You can be sure I do. So if you look at the numbers, we are more than 90 % in India in informal employment. So Anurag's exactly right; he knows his numbers. Essentially what you're saying is one out of every ten people stands to be potentially affected, right? That's one way of looking at it. The other way of looking at it is that we have so few good jobs, so few jobs in the formal labor market. Only one in ten people gets to have a formal sector job. And now you're taking that away as well, right? That stands to be disrupted. So again, we're moving to a world of work that is much more precarious, much more insecure, much more uncertain, where workers aren't even called workers anymore.

We call them self-employed contractors. They have no health insurance. This is the precaritization of the labor market. So not only do you have the pandemic, climate change, the energy transition, trade shocks, and AI disruption, a world of work that is much more precarious, disrupting everything, but you are also now moving to a place where work is becoming more and more informal. Formal jobs are being gotten rid of in the name of, pardon me, in the name of efficiency gains, right? And so that's why in India we should be really scared, because we have so few formal jobs. And then imagine if these jobs in the IT sector in Bangalore start disappearing: all the workers that used to go to bars and restaurants and get loans to buy houses and cars, that starts to disappear, and it has cascading effects across the economy.

So the impact of this is definitely in the global south. It is definitely beyond the few formal sector jobs. And it's deeply disturbing. We need to actually work to understand from technologists very clearly how these efficiency gains are going to happen, and what different governments, and public architecture, can do to manage some of these changes. So we do need to care. Definitely need to care. We need to care urgently.

Anurag Behar

All right. So I'm going to come to Julie on this and come back to you, Sandhya, because I put a pin on something that you said, right? So, Julie, let's assume that the alarm that Sabina is raising is at least half true; it's more than half. You know, I have a deep conflict of interest, and I'll tell you once I'm sort of done with this. So, Julie, what are the lessons that you're seeing across countries? You're seeing the vast landscape, right, and IDRC has a view across the continents. So what lessons can be learned from across the continents, such that AI is able to create opportunities, part of what Sandhya talked about, and doesn't deepen inequality, or at least minimizes it?

What are you seeing across the countries? Something, some good stuff.

Julie Delahanty

What is that regulation? We have the Global Index on Responsible AI that some of you may have heard about. It's been talked about a lot during the conference, or at least in some of the sessions that I've been to. What it is, really, is the largest global rights-based data set on responsible AI. And what is distinctive about it is that it includes a dedicated focus on labor protection and the right to work. It looks at 138 countries, and by providing that country-level, comparable data, it's helping governments to understand what they might need to do better, what some of the issues are, and how they can improve.

So really it's about using that information to support governments in understanding what the regulation is, what the solution is that they need; it has to be based on some evidence. And I think the third big thing, which won't be a surprise to anybody here, is that we really need to have good evidence, and evidence really matters when it comes to these issues. So tools like the Global Index on Responsible AI allow policymakers to move beyond abstract must-fix regulation to assess how governance of AI actually affects people's rights, their jobs, and their working conditions, and to support more proactive policymaking on labor regulations, skills, social protections, et cetera.

And I think equally important is that we’re still learning. There is no standardized, here is the regulation that you need codified. Through the kind of work that we’re doing, I think we’re learning what’s the balance between… supporting innovation… and still supporting regulation and safety. And I think working together across many countries to share that kind of information is what’s going to support us in finding the right tools.

Anurag Behar

Thanks, Julie. I'm going to come to you, Sandhya. But I just want to disclose something to all of you; that's my conflict of interest. You know, Sabina is a labor market researcher, and naturally I would think she's saying what she's saying. Julie represents IDRC, and therefore she's saying what she's saying. Sandhya is the tech person here, so she's saying what she's saying. My problem is I'm responsible for this organization, the Azim Premji Foundation. And my problem is the following: the foundation owns about 70 % of Wipro. Okay. So whatever is good for a tech company is good for us, right? On the other hand, my job is not to take care of the technology and this world.

My job is to take care of the most vulnerable people in the country, right? The very poorest, the most marginalized, those who have no recourse to social protection. That’s my job. So I am a deeply conflicted person, right? Very deeply conflicted person. And I wanted to disclose that because I’m going to come to that towards the end. And it has a specific bearing on the question that I’m going to ask Sandhya, which is, you said something fascinating. And I want to put a pin on that. And I’m pulling your leg, you know, which is that rarely do you hear such words from a tech person. She talked about human care and wisdom, right? Didn’t she?

Okay. So, you know, really, my takeaway from what you were saying is that the tech stuff, the coding and that kind of thing, can get automated. But the things that are human, understanding people, understanding desires, how you work with people, that's what is hard to do. And that's something that you're already seeing, right? So would you want to comment on that?

Sandhya Ramachandran Arun

Yeah, so the stereotype that techies aren't human is a little unfair, I think, so don't anchor it in your heads. But then, where do I start? At the end of the day, what do technology consulting and technology services try to do? They try to help our client businesses become more successful. And our client businesses in turn become more successful when they are innovative, when they are creative, when they are growing, and when they are doing their business profitably. Or, if they have already reached a state of maturity, they are trying to bring in a whole lot of efficiencies as well, right?

So it's the S-curve, where you have an idea, you nail it, then you scale it, and then you start sailing. And when you're sailing, that's when you become a big battleship, and you have to focus on discipline and efficiency and ensure that you're making profits just the same even while you're running this big ship. But the cycle doesn't end there. It keeps going: you keep coming up with new ideas, you keep scaling them, and you keep sailing them. And so profitability starts with an investment, it grows, and then you have to become super efficient to remain profitable. And I'm saying this to my boss, because every dollar that we earn funds, to the tune of about 66 cents, whatever efforts the KMG Foundation uses for welfare, right?

And I think it's a beautiful model, and I don't think an AI could have thought of it. So I do believe very strongly that creativity, wisdom, vision, foresight, and human centricity are core to any technology disruptor that comes about. Imagine the days when there were horse carriages: all the horses would have been crowding the roads, people would have been going from place to place, and at the end of the day you would have had a whole lot of methane, which would have kind of ended the world a long time back because of global warming. But yes, vehicles did come, you did have carbon fuel, and the evolution continues.

So I don’t think technology is going to stop. So human ingenuity is going to keep bringing technology disruptors. These technology disruptors are going to be more and more exponential in terms of what they can do. And it is up to humans to figure out how to create policy, how to create a governance mechanism, and how to ensure that we derive benefits, mitigate the risks, and at the same time ensure that humanity is at the center of all of this. Right? Now, this is easier said than done, but we’ve done it with nuclear energy. Despite the disasters, the fact that you and I are still alive today and thriving and living a better life than we ever lived in the last 100 years is an example that, yes, you could have accidents that are preventable, but accidents are created by humans.

And it's up to the leadership to ensure that they put in the required guardrails. It could be policy. It could be governance. It could be guidelines, whatever you call it. And you can even hire a leader.

Anurag Behar

Yeah, it’s good to hear that, you know. I’m just going to come to one round and then perhaps have the last word, if I may. Yeah, okay. So, Sabina, what’s your take? What should we do? What should we do, really?

Sabina Dewan

So I've already kind of said what we should do. But first, Sandhya, everything you said really resonated with me, and I fully agree that humans have to take responsibility. I can think of a few very worrying scenarios where there are leaders in the world that have access to nuclear weapons who perhaps shouldn't have access to nuclear weapons, right? So how much confidence do we have in people, particularly when you look at the overall trend of growing precarity? Again, take India alone: fifty-eight percent of our employment is now self-employment. And these are workers that have no health insurance coverage or any kind of safety net.

Add to that the fact that there are all these different forces coming that we don't know about: AI disrupts jobs, or pandemics happen. We all saw what happened with migrant workers walking back to their villages, hundreds of thousands of migrant workers, right? There is a lot more precarity in the labor market than there ever has been in modern history. And the problem is that labor market regulations across the globe are getting weaker and weaker in this respect. And then we don't have precedent, as Julie said. We're still trying to figure out exactly what we should do, right?

But I will say, and I've said it many times, that in the meantime, AI is different, because this is also the first time that research is showing the current generation of young people experiencing cognitive decline, right? Rates of depression, rates of anxiety, cognitive decline. How does cognitive decline affect your ability to operate at work, and then to be replaced by machines that are more efficient because you're getting stupider? Sorry, but this is a really worrying scenario. So what should we do? I think I've said this multiple times: regulation and the building of social institutions. But I'll take Julie's challenge and go a level deeper.

I think we need to look at competition policy very closely. We need to look at antitrust. We need to look at tax, and within tax we need to look at the full gamut of tools we have at our disposal: certain kinds of transaction taxes, a wealth tax, corporate tax rates, the whole gamut. We certainly, in an area that I know well, need to look at labor regulations, right? There's a lot of discussion now about what should happen in the gig economy. But if two people have lost their jobs, how do you distinguish between them?

You can't say, okay, this person lost their job to AI, so we're going to give them health care and other kinds of support, but that person, we're not, right? You need to have universal systems of support for workers, of health care, of other forms of social security, that enable consumption smoothing as well, so that economies keep functioning. We need to invest heavily in our skill systems. I can talk about Indian numbers till I'm blue in the face: for all the investment and talk about skills training in India, only 4.1 percent of respondents in our labor force survey identify as having any kind of formal skills. Only 4.1 percent, despite us saying Skill India and talking about investments in skills for well over a decade and a half.

There's also well-documented research about how poor the quality of education is. So how do you take a young person in a remote part of India who can barely read and write, who might say, I've graduated, I've done eighth class, tenth class, even twelfth class, but can barely do foundational reading or math? How do you take them and say, I'm going to train you for AI? It doesn't work. So we need to fundamentally think about regulations. We need to very urgently work on education and skill systems that meet people where they are.

We definitely need to think about universal social protection systems that enable workers to transition between occupations, from one sector to another, from one occupation to another. And I can go into much more detail, because this is something that my organization has worked a great deal on: what kind of systems we need to enable workers to be better protected.

Anurag Behar

Thanks, Sabina. We've got, I think, five minutes or so, so I'm going to try and wrap up. Julie, would you want to comment?

Julie Delahanty

Yeah, I just want to make a fairly random point, I think. And that is, in addition to the Artificial Intelligence for Development program that we have, we also have a Future of Work project. And one of the interesting things there, which we don't talk about as much, is that everybody is very worried about job loss; that's the big thing. But actually, one of the bigger issues is rethinking how to work and ways of working, and the disruption that's happening within jobs, within the workplace, within institutions and organizations. That's not necessarily about job losses. It's about a complete shift in the way that we do our work, and how workers are going to adapt to that fundamental shift in the way that they work.

So it was just a random thought.

Anurag Behar

I don’t think it’s a random thought at all. I think it’s a salient foundational thought, you know, for this discussion. You want to comment on that one line? Because that’s such an important point.

Sabina Dewan

Yeah, no, I mean, just to say that, you know, the Future Works Collective is a global consortium of researchers that IDRC funds that JustJobs is part of that focuses exactly on that. So I agree 100 % that that is a foundational and very important issue.

Anurag Behar

Sandhya, what about you? How would you want to respond to everything Sabina has said?

Sandhya Ramachandran Arun

Look, I think… Watching and waiting is certainly not an option. I mean, we don’t want to be in a Game of Thrones situation when you’re saying winter is coming for some 22 seasons and then it comes. Nobody’s going to wait for it. So we know what’s coming, and we know what’s coming is also capable of evolving and changing tremendously. So we need to learn to change. And yes, we do need to elect good leaders. We do need to have policy at all levels. We need to have policy embedded in platforms. And of course, we need to have a lot of reimagining work and training of workforce. So yes, I think to some extent, painting doom and gloom is good.

Then we start acting, right? But to some extent, it also shouldn't make you so paranoid that you become a deer in headlights. So yes, we should act, and we should move forward on all of that, which all of us agree on.

Anurag Behar

It seems so. It seems so, absolutely. You know, I think what you just said is, in some senses, a very good summary. What I wanted to say was that there's this phrase that's used: boomer and doomer. In a sense, my head is the boomer and my heart is the doomer, given my role. I want to take you, just for a minute, to my own area, which is education. We run three universities. At any point in time, we are working with more than 100,000 teachers, right? So I'm an education person; I'm not the labor market or the tech person here. And I am deeply concerned by the effect of AI on education. Deeply, deeply concerned.

In fact, I feel that AI is attacking the very foundation of education. What AI is doing, as the phrase artificial intelligence itself suggests, is that you essentially outsource your thinking. So teachers are outsourcing their thinking and students are outsourcing their thinking. And that's what Sabina was referring to, though she was referring to it in the context of social media: for the first time in this round of assessments, we are seeing cognitive declines, or on test measures we are seeing declines in student performance. I cannot tell you how serious the issue is. And it's impossible to regulate this, because it's everywhere.

So the only way we are able to deal with this, in the universities at least, is that all assessment, all examination, is now returning to the old-world paper-and-pencil, in-class test. No home assignments, no project work, nothing. Just come here, sit, and write the examination. It is truly serious. We don't know how to tackle this right now. And the reason I talk about that is I want to go back to the analogy that Sandhya used, and I'm so glad that she did: that this is as serious as nuclear technology. And in one very deep way, it is far more serious than nuclear technology, because nuclear technology did not reach out and affect every individual human being.

The possibilities for policies and governance to circumscribe it, to put boundaries, to manage it, were far greater there. Here, perhaps the most disruptive of technologies is in retail form, right? This is retail transformation of humanity. It is so hard to do this. But I'm really glad that with the three of you here, we have this reasonable conclusion, if I may say so, that we are really facing something as serious as nuclear technology. And you can't run away from it. It's happening. Job losses will happen. We've got to figure a way out of it. And I would want to close on this human note: that eventually, perhaps, the jobs that require wisdom, empathy, care, and human understanding are going to be the hardest to replace, if at all.

And they will stay. And that’s what one can see in the tech world. So with that, I want to thank all three of you. Thank you so much. I want to thank all of you for coming here. Thank you very much. Thank you.

Related Resources: Knowledge base sources related to the discussion topics (37)
Factual Notes: Claims verified against the Diplo knowledge base (5)
Correction (high)

“NVIDIA’s “$5 trillion market cap””

The knowledge base reports Nvidia’s market capitalisation at about $2.915 trillion in 2024, not $5 trillion [S102].

Confirmed (medium)

“AI is “a really big disruption that comes on top of all the other disruptions””

The same phrasing appears in the source, confirming AI is described as a major additional disruption [S1].

Additional Context (medium)

“The horse‑carriage/motor‑vehicle analogy for governing AI’s rapid evolution”

A comparable analogy is given in the knowledge base, which likens the transition to the historical issue of horse-drawn carriages before automobiles [S111].

Additional Context (medium)

“The impact of artificial intelligence on employment is “still unfolding” and societies cannot wait for clearer evidence before acting”

The source notes that long-term employment impacts of AI remain uncertain despite current hiring stability, supporting the claim that impacts are still unfolding [S97].

Correction (low)

“Private firm reports from India of “30 % to 40 % time‑saving… which then translates into significant workforce cuts””

The cited source describes a 35 % productive time loss and revenue leakages for an Indian firm, which contradicts the claim of a 30-40 % time-saving [S99].

External Sources (111)
S1
https://dig.watch/event/india-ai-impact-summit-2026/ai-for-social-empowerment_-driving-change-and-inclusion — And I think equally important is that we’re still learning. There is no standardized, here is the regulation that you ne…
S2
AI for Social Empowerment_ Driving Change and Inclusion — – Julie Delahanty- Sandhya Ramachandran Arun – Sabina Dewan- Julie Delahanty
S3
https://dig.watch/event/india-ai-impact-summit-2026/ai-for-social-empowerment_-driving-change-and-inclusion — And I think it’s a beautiful thing. It’s a beautiful thing. It’s a beautiful thing. model and I don’t think an AI could …
S4
AI for Social Empowerment_ Driving Change and Inclusion — – Anurag Behar- Sandhya Ramachandran Arun – Sabina Dewan- Sandhya Ramachandran Arun
S5
https://dig.watch/event/india-ai-impact-summit-2026/ai-for-social-empowerment_-driving-change-and-inclusion — All right. So I’m going to come to Julie on this and come back to you, Sandhya, because I put a pin on something that yo…
S6
AI for Social Empowerment_ Driving Change and Inclusion — – Anurag Behar- Sandhya Ramachandran Arun
S7
https://dig.watch/event/india-ai-impact-summit-2026/ai-for-social-empowerment_-driving-change-and-inclusion — Add to that the fact that, like, there’s all these different forces coming that we don’t know, you know, if AI disrupts …
S8
AI for Social Empowerment_ Driving Change and Inclusion — – Sabina Dewan- Sandhya Ramachandran Arun
S9
Comprehensive Report: Preventing Jobless Growth in the Age of AI — Economic | Future of work Study of LLMs in call centers showing 14% average increase in productivity, up to 35%. Studie…
S10
AI drives productivity surge in certain industries, report shows — A recent PwC (PricewaterhouseCoopers International Limited) report highlights that sectors of the global economy with high…
S11
Building Inclusive Societies with AI — “The people who are absolutely at the lower quartile, they actually need help.”[81]. “The bottom quartile is not yet plu…
S12
Discussion Report: AI Implementation and Global Accessibility — This comment shifted the conversation from discussing current disruptions to future-oriented thinking. It led Sarah to f…
S13
Upskilling for the AI era: Education’s next revolution — This comment is thought-provoking because it shifts the focus from technological capabilities to equity and access issue…
S14
How AI Drives Innovation and Economic Growth — It’s the aspirational job that created Gurgaon’s and Noida’s and Mohali’s of this country. And those people are going to…
S15
AI Impact Summit 2026: Global Ministerial Discussions on Inclusive AI Development — And this requires proactive and coherent policy responses. First, people must be at the center of AI strategy, as we hea…
S16
Optimism for AI – Leading with empathy — Regulators and government officials have responsibility to balance innovation protection with human welfare
S17
Anthropic report shows AI is reshaping work instead of replacing jobs — A new report by Anthropic suggests fears that AI will replace jobs remain overstated, with current use showing AI supporti…
S18
(Interactive Dialogue 2) Summit of the Future – General Assembly, 79th session — The Republic of Korea calls for developing governance frameworks for AI and emerging technologies. This is seen as neces…
S19
From principles to practice: Governing advanced AI in action — – Balancing rapid technological advancement with necessary governance frameworks across different regional approaches B…
S20
AI Governance Dialogue: Steering the future of AI — Martin used a maritime metaphor to explain current governance limitations, stating that while frameworks like the UN’s P…
S21
AI/Gen AI for the Global Goals — Shea Gopaul: So thank you, Sanda. And like Sandra, I’d like to thank the African Union, as well as Global Compact. i…
S22
Engineering Accountable AI Agents in a Global Arms Race: A Panel Discussion Report — Rees-Jones takes an optimistic view that AI can provide personalized tutoring for reskilling in areas like coding, while…
S23
AI and human creativity: Who should hold the brush? — Economic structures that value human creativity:If AI can flood the market with ‘good enough’ content at minimal cost, w…
S24
Harnessing Collective AI for India’s Social and Economic Development — Kushe Bahl believes that AI will fundamentally reshape jobs rather than just replacing them outright. He suggests this t…
S25
AI (and) education: Convergences between Chinese and European pedagogical practices — **Norman Sze** (former Chair of Deloitte China) provided industry perspective on AI’s impact on professional work, notin…
S26
Closing Ceremony — Human rights | Legal and regulatory This argument advocates for a human rights-based approach to data governance and ar…
S27
AI for Good Technology That Empowers People — Ambassador Reintam Saar from Estonia outlined the structure and objectives of the first Global Dialogue on AI Governance…
S28
A Global Human Rights Approach to Responsible AI Governance | IGF 2023 WS #288 — Stakeholder engagement is identified as a vital component of responsible AI governance. The path towards effective gover…
S29
DC-CIV & DC-NN: From Internet Openness to AI Openness — Wanda Muñoz argues for a human rights-based approach to AI governance, going beyond ethics and principles. She emphasize…
S30
Building Trustworthy AI Foundations and Practical Pathways — “India has scale, India has linguistic diversity, but India also has a lot of different things.”[63]. “In many regions o…
S31
Addressing the gender divide in the e-commerce marketplace – a policy playbook for the global South (IT for Change) — In India, around 80% of the female workforce operates within the informal sector. These informal workers face numerous c…
S32
High Level Session 3: AI & the Future of Work — #### Education and Cognitive Concerns
S33
Education meets AI — Lastly, the significance of critical information and critical thinking in education was recognized, emphasising their po…
S34
Defying Cognitive Atrophy in the Age of AI: A World Economic Forum Stakeholder Dialogue — And then there is the resourcing, the possible divide in education. There could be the highly resourced private schools …
S35
The Intelligent Coworker: AI’s Evolution in the Workplace — -Workforce Impact and Career Evolution- Discussion of how AI will reshape job structures, eliminate traditional entry-le…
S36
Comprehensive Discussion Report: AI’s Transformative Potential for Global Economic Growth — Fink acknowledged that while some jobs may be displaced, new opportunities are simultaneously created. Both speakers agr…
S37
How AI Drives Innovation and Economic Growth — It’s the aspirational job that created Gurgaon’s and Noida’s and Mohali’s of this country. And those people are going to…
S38
Why science metters in global AI governance — “But if your potential or probable outcome is the end of jobs, then you need to think about universal basicism.”[113]. “…
S39
The Impact of Digitalisation and AI on Employment Quality – Challenges and Opportunities — – Employment policies should be interwoven with education, addressing both labour market demand and supply. – The impera…
S40
AI Impact Summit 2026: Global Ministerial Discussions on Inclusive AI Development — And this requires proactive and coherent policy responses. First, people must be at the center of AI strategy, as we hea…
S41
Empowering Workers in the Age of AI — This discussion featured four representatives from the International Labour Organization (ILO) presenting comprehensive …
S42
Comprehensive Report: AI’s Impact on the Future of Work – Davos 2026 Panel Discussion — Bhan argues that AI’s impact on jobs cannot be viewed in isolation but must be considered alongside broader economic dis…
S43
Building Trustworthy AI Foundations and Practical Pathways — “But similarly now, econ of maybe writing novels is gone.”[20]. “The movie industry is worried.”[21]. “That entire econo…
S44
Comprehensive Summary: AI Governance and Societal Transformation – A Keynote Discussion — These technological disparities will coincide with massive job displacement and economic disruption across all sectors s…
S45
AI job displacement: Malaysia’s strategy unveiled — The rise of AI and digitalisation could displace up to 600,000 workers in Malaysia over the next five years, according to …
S46
AI for Social Empowerment_ Driving Change and Inclusion — Sabina Dewan, Sandhya Ramachandran Arun, Julie Delahanty. Urgent need for comprehensive policy responses including com…
S47
Trade regulations in the digital environment: Is there a gender component? (UNCTAD) — In conclusion, the analysis reinforces the potential of digitalisation and emerging technologies, such as artificial int…
S48
Contents — JD is a Chinese retailer with significant e-commerce logistics. The operation of infrastructure networks, logistics, sou…
S49
How AI Is Transforming India’s Workforce for Global Competitivene — It could mean the old world for Chennai and another enterprise, right? So, there are many reasons why adoption, I think,…
S50
Main Topic 2 – Keynotes  — Effective data collection and analysis are crucial. Transparently executed policies must consider infrastructure, digita…
S51
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — Evidence-based policymaking is crucial but challenging when regulating emerging technologies, requiring sandbox environm…
S52
OECD releases AI Incidents Monitor to address AI challenges with evidence-based policies — The OECD.AI Observatoryreleaseda beta version of the AI Incidents Monitor (AIM). Designed by the OECD.AI Observatory, th…
S53
Skilling and Education in AI — The Professor took a notably realistic turn in acknowledging that AI will inevitably create new forms of inequality, des…
S54
UN High Commissioner urges human rights-centric approach to mitigate risks in AI development — While AI holds transformative potential for solving critical issues like curing cancer and addressing global warming, it…
S55
A Global Human Rights Approach to Responsible AI Governance | IGF 2023 WS #288 — These efforts are crucial to prevent the exacerbation of inequality and the marginalization of vulnerable groups. Stakeh…
S56
AI for Democracy_ Reimagining Governance in the Age of Intelligence — The main areas of disagreement centered on governance mechanisms (binding vs. voluntary frameworks), institutional respo…
S57
WS #98 Towards a global, risk-adaptive AI governance framework — 3. The recognition that cultural differences play a significant role in risk perception and governance approaches. Audi…
S58
Figure I: The Global Risks Landscape 2019 — Beyond the economic risks, there are potential political and societal implications. For example, a world of increasingly…
S59
Agentic AI in Focus Opportunities Risks and Governance — Governance responses should include standards, global norms, and risk procedures, not just regulation. Policy should foc…
S60
AI for Social Empowerment_ Driving Change and Inclusion — Sabina points out that AI is causing major disruptions that are already leading companies to lay off workers. Private re…
S61
The mismatch between public fear of AI and its measured impact — These cases demonstrate that AI affects different workplaces in different ways. Gains are clear in specific tasks or wor…
S62
Reinventing Digital Inclusion / DAVOS 2025 — Generative AI is expected to create exponential returns in productivity, particularly in enterprise systems. However, th…
S63
AI/Gen AI for the Global Goals — Shea Gopaul: So thank you, Sanda. And like Sandra, I’d like to thank the African Union, as well as Global Compact. i…
S64
Engineering Accountable AI Agents in a Global Arms Race: A Panel Discussion Report — Rees-Jones takes an optimistic view that AI can provide personalized tutoring for reskilling in areas like coding, while…
S65
Harnessing Collective AI for India’s Social and Economic Development — Kushe Bahl believes that AI will fundamentally reshape jobs rather than just replacing them outright. He suggests this t…
S66
Empowering Workers in the Age of AI — Focus on augmentation and transformation of existing roles rather than wholesale job replacement
S67
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Panel Discussion Moderator Sidharth Madaan — Radhicka Kapoor provided a more nuanced perspective, citing research showing that while most jobs will be exposed to AI …
S68
The Future of Innovation and Entrepreneurship in the AI Era: A World Economic Forum Panel Discussion — It is changing how people get jobs and how they get hired for jobs. So an example of that is entrepreneurs often now are…
S69
Closing Ceremony — Human rights | Legal and regulatory This argument advocates for a human rights-based approach to data governance and ar…
S70
Comprehensive Report: UN General Assembly High-Level Meeting on the 20-Year Review of the World Summit on the Information Society (WSIS) Outcomes — Artificial Intelligence Governance and Ethics Human rights | Legal and regulatory Lithuania called for artificial inte…
S71
Democratizing AI Building Trustworthy Systems for Everyone — Dr. Garg highlights that the biggest challenge is governing the sharing mechanisms, protocols and the talent needed to m…
S72
First round of informal consultations with member states, observers and stakeholders (2024) — On internet governance, Denmark endorses a human rights-centred multi-stakeholder model, advocating its importance to SD…
S73
Keeping AI in check — Societies should not be forgetful of the fact that technology is a product of the human mind and that the most intellige…
S74
A Digital Future for All (morning sessions) — Robert Muggah: There are multiple risks, some of which have been discussed over the last couple of hours. Some of the…
S75
Panel Discussion AI & Cybersecurity _ India AI Impact Summit — “Yet, only countries with AI capabilities can reap actual AI benefits to their fullest potential”[31]. “A collaborative …
S76
AI disruption risk seen as lower for India’s white-collar jobs — India faces a lower risk of AI-driven disruption to white-collar jobs than Western economies, IT Secretary S Krishnan said…
S77
High Level Session 3: AI & the Future of Work — Education and Cognitive Concerns
S78
Education meets AI — Access to devices is a critical challenge faced in disadvantaged parts of the world. The scarcity of devices leads to gr…
S79
Can AI replace the transmission of wisdom? — The world of education is changing radically and rapidly. Generative AI tools are now capable of writing essays, solving…
S80
World Economic Forum Open Forum: Visions for 2050 – Discussion Report — The discussion began with cautious optimism as panelists shared their hopes for 2050, but the tone became increasingly u…
S81
Global Risks 2025 / Davos 2025 — The tone of the discussion was initially quite sobering as the panelists discussed serious global risks and challenges. …
S82
AI and Digital Developments Forecast for 2026 — The tone begins as analytical and educational but becomes increasingly cautionary and urgent throughout the conversation…
S83
Defying Cognitive Atrophy in the Age of AI: A World Economic Forum Stakeholder Dialogue — The discussion began with a cautiously optimistic tone, acknowledging both opportunities and risks. However, the tone be…
S84
Comprehensive Summary: AI Governance and Societal Transformation – A Keynote Discussion — The tone begins confrontational and personal as Hunter-Torricke distances himself from his tech industry past, then shif…
S85
What happens to software careers in the AI era — AI is rapidly reshaping what it means to work as a software developer, and the shift is already visible inside organisat…
S86
Upskilling for the AI era: Education’s next revolution — The tone is consistently optimistic, motivational, and action-oriented throughout. The speaker maintains an enthusiastic…
S87
Towards a Resilient Information Ecosystem: Balancing Platform Governance and Technology — Platform Governance and Regulatory Challenges: Regulatory authorities face significant challenges in governing dig…
S88
Shaping the Future AI Strategies for Jobs and Economic Development — The discussion maintained an optimistic yet pragmatic tone throughout. While acknowledging significant challenges around…
S89
WS #81 Universal Standards for Digital Infrastructure Resiliency — The tone was largely collaborative and solution-oriented. Panelists built on each other’s points and acknowledged the co…
S90
WS #302 Upgrading Digital Governance at the Local Level — The discussion maintained a consistently professional and collaborative tone throughout. It began with formal introducti…
S91
Smaller Footprint Bigger Impact Building Sustainable AI for the Future — The tone was consistently optimistic and collaborative throughout the discussion. Speakers maintained a solution-oriente…
S92
Wrap up — This served as a compelling call to action that elevated the urgency of the entire discussion. It moved the conversation…
S93
(Interactive Dialogue 1) Summit of the Future – General Assembly, 79th session — The overall tone was one of urgency and calls for action, with many speakers emphasizing the need for immediate reforms …
S94
(Plenary segment) Summit of the Future – General Assembly, 4th plenary meeting, 79th session — The tone of the discussion was generally optimistic and forward-looking, with speakers emphasizing the need for urgent a…
S95
New Technologies and the Impact on Human Rights — The discussion maintained a collaborative and constructive tone throughout, despite addressing complex and sometimes con…
S96
Responsible AI for Children Safe Playful and Empowering Learning — The discussion concluded with a strong emphasis on urgency and action. Rather than waiting for perfect solutions, the pa…
S97
Impact & the Role of AI How Artificial Intelligence Is Changing Everything — Long-term employment impacts remain uncertain despite current stability in hiring patterns
S98
OPENING STATEMENTS FROM STAKEHOLDERS — Discussions on artificial intelligence show that technological development is not without risk.
S99
From India to the Global South_ Advancing Social Impact with AI — So good evening. My name is Ashish Pratap Singh. I am the CEO of Prasima AI. My father runs an MSME business in Lucknow….
S100
Strengthening Worker Autonomy in the Modern Workplace | IGF 2023 WS #494 — The analysis explores the impact of technology on various social issues, including labour exploitation, inequality, pove…
S101
5. — – #6. The impact of technology on the quality and quantity of jobs
S102
Tech Diplomacy: Actors, Trends, and Controversies – Full Book — Economic dominance: Tech companies have demonstrated unprecedented economic power, often surpassing the GDPs of entire n…
S103
YCIG & DTC: Future of Education and Work with advancing tech & internet — Marko Paloski highlights the potential risk of job losses due to automation. He points out that a significant portion of…
S104
Big Tech boosts India’s AI ambitions amid concerns over talent flight and limited infrastructure — Major announcements from Microsoft ($17.5bn) and Amazon (over $35bn by 2030) have placed India at the centre of global AI …
S105
Contents — Technological advances – notably in such fields as automation, robotics, artificial intelligence, …
S106
Rights and Permissions — This troubling scenario, however, is on balance unfounded. It is true that in some advanced economies and middle-income …
S107
https://dig.watch/event/india-ai-impact-summit-2026/ai-2-0-reimagining-indian-education-system — India has millions and trillions of problems in each and every corner. You pick up one problem, solve it. You get your d…
S108
The Innovation Beneath AI: The US-India Partnership powering the AI Era — Adding to what just was discussed, we have a tendency to overestimate the next two years and impact and underestimate wh…
S109
Leaders’ Plenary | Global Vision for AI Impact and Governance- Afternoon Session — It is very clear to me that the 2030s will be a chaotic era. There will be disruption. There will be large changes. And …
S110
Future-proofing global tech governance: a bottom-up approach | IGF 2023 Open Forum #44 — Thomas Schneider: Something that always strikes me is when you talk about how does this need to evolve, is that while tec…
S111
test marko — Comparison to New York City’s focus on horse-drawn carriage issues just before the advent of automobiles.
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Sabina Dewan
4 arguments, 146 words per minute, 2588 words, 1062 seconds
Argument 1
AI will drive 30‑40% productivity gains that translate into large workforce cuts, with evidence of layoffs and algorithmic gig‑economy management (Sabina)
EXPLANATION
Sabina argues that AI is expected to generate substantial productivity gains of 30‑40%, which will inevitably lead to significant reductions in workforce size. She highlights that these efficiency gains are already manifesting as large‑scale layoffs and the rise of algorithmic management in the gig economy.
EVIDENCE
She cites private research indicating that companies anticipate 30-40% time-saving and productivity gains that translate into workforce cuts [8]. She points to recent news of major tech firms laying off thousands of workers as concrete evidence of job losses [152-154]. She also references the algorithmic management of gig workers, where platforms can dismiss workers without redress, illustrating a new form of labor control [160-164].
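The arithmetic behind “productivity gains that translate into workforce cuts” can be made concrete. The sketch below is illustrative only (the function and figures are not from the panel): if a firm holds output constant, a productivity gain g lets it shed the share 1 − 1/(1+g) of its workforce.

```python
def implied_headcount_cut(productivity_gain: float) -> float:
    """Share of the workforce no longer needed to produce the same
    output after a given productivity gain.

    With constant output, required labour scales as 1 / (1 + gain),
    so the implied cut is 1 - 1 / (1 + gain).
    """
    return 1 - 1 / (1 + productivity_gain)

# The 30-40 % gains cited imply cuts of roughly 23-29 % if firms
# hold output flat rather than expand it.
for gain in (0.30, 0.40):
    print(f"{gain:.0%} gain -> {implied_headcount_cut(gain):.1%} implied cut")
```

Note the asymmetry this exposes: a 40 % productivity gain does not mean 40 % fewer workers; the implied cut is smaller, and smaller still if firms use the gains to grow output instead of trimming headcount.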
MAJOR DISCUSSION POINT
Job displacement and productivity gains
AGREED WITH
Sandhya Ramachandran Arun, Anurag Behar, Julie Delahanty
DISAGREED WITH
Sandhya Ramachandran Arun
Argument 2
Immediate reforms are required: competition policy, antitrust, tax, labour law, universal social protection, and skill systems to mitigate AI‑driven disruption (Sabina)
EXPLANATION
Sabina calls for swift policy action across multiple domains to counteract the disruptive impact of AI on labour markets. She stresses that competition, antitrust, tax, labour regulations, universal social protection and skill development systems must be re‑engineered to protect workers.
EVIDENCE
She enumerates the need for competition policy, antitrust, tax reforms, labour regulations, universal social protection and robust skill systems as essential levers to address AI-driven disruption [320-334]. She also notes the urgency of acting now rather than waiting for more empirical evidence [176-183].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need for comprehensive policy responses, including competition, tax, labour law reforms and universal social protection, is highlighted in [S2]; proactive, coherent policy frameworks that invest in skills and social protection are advocated in [S15].
MAJOR DISCUSSION POINT
Regulatory reforms for AI disruption
AGREED WITH
Sandhya Ramachandran Arun, Julie Delahanty, Anurag Behar
DISAGREED WITH
Julie Delahanty
Argument 3
Current skill levels are very low; massive upskilling for AI is unrealistic without fundamental education reform (Sabina)
EXPLANATION
Sabina highlights the severe skill gap in the labour force, noting that only a tiny fraction possess formal skills, making large‑scale AI upskilling impractical without deep reforms in education and skill systems. She argues that without addressing basic literacy and numeracy, AI‑focused training cannot succeed.
EVIDENCE
She cites labour-force survey data showing only 4.1% of respondents identify as having formal skills, and points to the poor quality of education that hampers the ability to train workers for AI roles [325-332].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The AI skills gap as an equity and access issue is discussed in [S13]; the importance of upskilling linked to K-12 education is noted in [S12]; and the lack of basic skills among the bottom quartile is highlighted in [S11].
MAJOR DISCUSSION POINT
Skill gaps and education challenges
Argument 4
AI intensifies inequality, surveillance, and even cognitive decline, demanding urgent precautionary measures (Sabina)
EXPLANATION
Sabina warns that AI not only threatens jobs but also deepens existing inequalities, expands surveillance, and may be linked to emerging cognitive decline among youth. She urges immediate precautionary action to mitigate these broader societal harms.
EVIDENCE
She references AI-enabled surveillance influencing work decisions and exacerbating inequality, noting the massive market caps of tech firms that concentrate capital while labour’s share shrinks [10-13]. She also cites emerging research indicating cognitive decline, higher rates of depression and anxiety among the current generation, which could make them more replaceable by machines [313-316].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Inequality concerns and the need to support the lower-quartile workforce are raised in [S11]; warnings about rapid job loss and strain on labour markets appear in [S14]; and the call for human-centred precautionary policies is echoed in [S15].
MAJOR DISCUSSION POINT
Societal risks of AI
AGREED WITH
Julie Delahanty, Sandhya Ramachandran Arun
Anurag Behar
4 arguments, 152 words per minute, 2047 words, 807 seconds
Argument 1
AI’s monetisation will come from labour reduction or new products, so both job destruction and creation are expected (Anurag)
EXPLANATION
Anurag asserts that the economic returns from AI will be derived either from increased productivity that reduces labour demand or from the creation of novel products and services. Consequently, both job losses and new employment opportunities are anticipated.
EVIDENCE
He explains that investment in AI must be justified by monetisation, which will arise either from productivity gains (i.e., labour reduction) or from new products and services, or a combination of both [34-38].
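Anurag’s either/or framing amounts to a simple accounting identity, sketched below with hypothetical numbers (the function name and all figures are illustrative assumptions, not panel data): the return on an AI investment is covered by some mix of labour-cost savings and profit on new products.

```python
def annual_ai_return(labour_cost: float, share_saved: float,
                     new_revenue: float, margin: float) -> float:
    """Decompose the annual return on an AI investment into the two
    channels Anurag describes: productivity-driven labour savings
    plus profit from new products and services."""
    labour_savings = labour_cost * share_saved   # productivity channel
    new_profit = new_revenue * margin            # new-products channel
    return labour_savings + new_profit

# Hypothetical firm: $10m wage bill with 25% saved, plus $4m of new
# revenue at a 25% margin -> $3.5m of annual return.
print(annual_ai_return(10_000_000, 0.25, 4_000_000, 0.25))
```

The limiting cases match his argument: set new_revenue to zero and the return is pure labour reduction; set share_saved to zero and it is pure product creation. In practice he expects a combination of both.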
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Productivity gains that can lead to both displacement and new roles are documented in [S9]; PwC’s findings of wage increases alongside productivity surges suggest creation of value [S10]; policy discussions acknowledging both outcomes are present in [S15].
MAJOR DISCUSSION POINT
Economic drivers of AI impact
AGREED WITH
Sabina Dewan, Sandhya Ramachandran Arun, Julie Delahanty
Argument 2
Governments must design policies that balance innovation with labour‑market protection, using human‑centred AI design (Anurag)
EXPLANATION
Anurag calls on governments and institutions to craft policies that simultaneously foster AI innovation while safeguarding workers. He emphasizes a human‑centred approach to AI governance to minimise labour market disruption.
EVIDENCE
He poses a direct question to Julie about how governments can responsibly govern AI to minimise labour-market disruption, highlighting the need for policies that protect workers while enabling innovation [112-118].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Balanced policy frameworks that protect workers while fostering innovation are outlined in [S15]; the regulator’s role in aligning innovation with human welfare is stressed in [S16]; and multiple governance initiatives are described in [S18], [S19] and [S20].
MAJOR DISCUSSION POINT
Policy design for AI and labour markets
AGREED WITH
Sabina Dewan, Sandhya Ramachandran Arun, Julie Delahanty
Argument 3
Coding efficiency raises concerns for IT employment; broader industry impacts require new training pathways (Anurag)
EXPLANATION
Anurag raises the concern that AI tools making coding easier could lead to substantial job losses in the IT sector and beyond. He asks what new training pathways are needed to address this shift across industries.
EVIDENCE
He asks whether the ability of AI to perform 50-70% of coding tasks will inevitably cause IT job losses and questions the impact on other sectors such as design, marketing, and academic research assistance [66-72].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Studies showing AI can automate 50-70% of coding tasks and boost productivity are reported in [S9]; the Anthropic report indicating AI mainly assists rather than replaces coders provides a counterpoint and broader perspective [S17].
MAJOR DISCUSSION POINT
Impact of AI on coding jobs
DISAGREED WITH
Sandhya Ramachandran Arun
Argument 4
Acknowledges personal conflict of interest and the tension between optimistic tech narratives and doomer warnings (Anurag)
EXPLANATION
Anurag openly discloses his conflict of interest, noting his foundation’s ownership stake in a tech company, and reflects on his internal tension between optimism (boomer) and pessimism (doomer) regarding AI’s societal impact.
EVIDENCE
He reveals that his foundation owns about 70% of Wipro, creating a conflict between tech interests and his mandate to protect vulnerable populations, and describes himself as a “boomer” in the head and a “doomer” in the heart, illustrating the tension between optimism and caution [247-272].
MAJOR DISCUSSION POINT
Conflict of interest and perspective tension
Sandhya Ramachandran Arun
4 arguments, 158 words per minute, 1684 words, 636 seconds
Argument 1
AI reshapes job descriptions; junior developers become “managers of AI” rather than being eliminated, and many functions remain consultative (Sandhya)
EXPLANATION
Sandhya explains that AI changes the nature of technical roles, turning junior developers into overseers who manage AI‑generated code, while many consulting‑type services remain largely unaffected. This shift reduces displacement risk for certain roles.
EVIDENCE
She notes that the role of a junior developer now becomes that of a manager of AI, overseeing design, architecture, and security while delegating coding to AI agents [86-87]. She also points out that most of Wipro’s work is consultative and therefore not experiencing large-scale displacement [60-62].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The Anthropic analysis that AI supports workers and changes role dynamics aligns with this transformation [S17]; coding productivity gains that enable oversight roles are described in [S9].
MAJOR DISCUSSION POINT
Job role transformation with AI
AGREED WITH
Sabina Dewan, Anurag Behar, Julie Delahanty
DISAGREED WITH
Anurag Behar
Argument 2
Policy, leadership, and governance frameworks are essential to steer AI benefits and guard against risks (Sandhya)
EXPLANATION
Sandhya stresses that effective policy, strong leadership, and robust governance structures are crucial to ensure AI delivers benefits while mitigating its risks. She likens the need for guardrails to those applied in nuclear energy.
EVIDENCE
She argues that it is up to humans to create policy, governance mechanisms, and guardrails to derive benefits and mitigate risks of AI, drawing a parallel with nuclear energy governance and emphasizing the role of leadership and guidelines [286-294].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Calls for proactive policy, leadership and governance to harness AI benefits while mitigating risks appear in [S15] and [S16]; international calls for AI governance frameworks are detailed in [S18], [S19] and [S20].
MAJOR DISCUSSION POINT
Governance and leadership for AI
AGREED WITH
Sabina Dewan, Julie Delahanty, Anurag Behar
Argument 3
Companies are overhauling hiring criteria toward learnability and adaptability, and providing calibrated learning modules for AI‑augmented roles (Sandhya)
EXPLANATION
Sandhya describes a shift in recruitment toward assessing candidates’ learnability, communication, and adaptability, and notes that her firm has created role‑personas and specific learning modules to upskill employees for AI‑enhanced responsibilities.
EVIDENCE
She outlines that hiring criteria have moved to focus on learnability, communication, technical ideas, and adaptability [53-55], and that her organization has built role personas and calibrated learning modules to help employees adapt to AI-augmented roles [56-58].
MAJOR DISCUSSION POINT
Reskilling and hiring transformation
Argument 4
Human creativity, wisdom, and foresight are indispensable; AI will be guided by human policy and governance (Sandhya)
EXPLANATION
Sandhya argues that despite AI’s rapid evolution, human creativity, wisdom, and vision remain essential, and that policy and governance must keep humanity at the centre of AI development to ensure beneficial outcomes.
EVIDENCE
She emphasizes that creativity, wisdom, vision and human-centricity are core to any technology disruptor, and that policy, governance, and leadership are needed to keep humanity at the centre of AI’s evolution [280-287].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Emphasis on human-centred AI and the need for governance that keeps humanity at the core is found in [S16]; further support comes from governance discussions in [S18][S20].
MAJOR DISCUSSION POINT
Human‑centric AI
AGREED WITH
Sabina Dewan, Julie Delahanty
Julie Delahanty
4 arguments, 150 words per minute, 1060 words, 422 seconds
Argument 1
Past technological shifts caused both job loss and new opportunities; systematic data collection is needed to understand AI’s labour effects (Julie)
EXPLANATION
Julie reflects on historical technology waves, noting they caused both displacement and new jobs, and stresses the need for systematic, evidence‑based data collection to monitor AI’s impact on labour markets. She points to ongoing research programmes that gather household, firm and worker data.
EVIDENCE
She recalls the computer era’s job losses and the difficulty of predicting outcomes, then describes AI4D’s large research programme that collects household, firm-level and worker data in sub-Saharan Africa to track AI’s real-world labour impacts [127-129][136-138].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The importance of systematic, evidence-based data collection to monitor AI’s labour impact is highlighted in [S15]; historical productivity improvements and their labour implications are discussed in [S9].
MAJOR DISCUSSION POINT
Historical perspective and data‑driven understanding
Argument 2
Strong regulatory and labour institutions, backed by research and tools like the Global Index on Responsible AI, enable evidence‑based policy (Julie)
EXPLANATION
Julie argues that robust regulatory and labour institutions, supported by research tools such as the Global Index on Responsible AI, are essential for crafting evidence‑based policies that protect workers and guide AI development responsibly.
EVIDENCE
She introduces the Global Index on Responsible AI, which covers 138 countries and provides rights-based data on labour protection, enabling governments to design better regulations; she also highlights the importance of evidence and comparative data for policy making [233-242].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Evidence-based policy making supported by robust institutions and tools is advocated in [S15]; the need for guardrails and institutional frameworks is reinforced in [S20].
MAJOR DISCUSSION POINT
Evidence‑based AI governance
AGREED WITH
Sabina Dewan, Sandhya Ramachandran Arun, Anurag Behar
DISAGREED WITH
Sabina Dewan
Argument 3
Ongoing research and country‑level evidence are needed to design effective skill‑development and social‑protection programmes (Julie)
EXPLANATION
Julie emphasizes that continuous research and country‑specific data are crucial for informing skill‑development strategies and social‑protection measures that can help workers transition in an AI‑driven economy.
EVIDENCE
She cites AI4D’s research program that gathers household, firm and worker data to understand who benefits or is displaced, informing governments on skills development, social protections and labour-rights policies [136-139].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Continuous research, country-specific data, and investment in skills and social protection programmes are emphasized in [S15]; productivity and labour impact studies in [S9] provide additional context.
MAJOR DISCUSSION POINT
Research for skill and protection policy
Argument 4
A human‑centred, co‑creation approach is required; while outcomes are uncertain, responsible design can safeguard workers (Julie)
EXPLANATION
Julie advocates for a human‑centred, co‑creation model where workers, communities and employers are involved in shaping AI systems, ensuring that technology enhances job quality and does not exacerbate inequality.
EVIDENCE
She explains that AI4D’s approach involves co-creating technologies with workers, communities and employers to improve job quality, productivity and equity, stressing the need to understand who benefits and to shape AI accordingly [132-138].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Human-centred, co-creation models for AI governance are promoted in [S16]; broader governance frameworks that enable responsible design are discussed in [S18][S20].
MAJOR DISCUSSION POINT
Co‑creation and human‑centred AI design
Agreements
Agreement Points
AI will cause both job displacement and creation, requiring reskilling and new role definitions.
Speakers: Sabina Dewan, Sandhya Ramachandran Arun, Anurag Behar, Julie Delahanty
AI will drive 30‑40% productivity gains that translate into large workforce cuts, with evidence of layoffs and algorithmic gig‑economy management (Sabina)
AI reshapes job descriptions; junior developers become “managers of AI” rather than being eliminated, and many functions remain consultative (Sandhya)
AI’s monetisation will come from labour reduction or new products, so both job destruction and creation are expected (Anurag)
Past technological shifts caused both job loss and new opportunities; systematic data collection is needed to understand AI’s labour effects (Julie)
All four speakers acknowledge that AI will generate efficiency gains that can displace workers while also creating new types of work, making reskilling and redefining roles essential [8][86-87][34-38][127-129][136-138].
POLICY CONTEXT (KNOWLEDGE BASE)
This view mirrors analyses that AI will reshape job structures, eliminating some entry-level paths while creating new opportunities and demanding reskilling, as discussed in the Intelligent Coworker report [S35] and the comprehensive discussion on AI’s transformative potential [S36]; policy briefs also stress training, skilling and reskilling as core responses [S38].
Comprehensive policy, governance and institutional reforms are needed to manage AI’s impact on labour markets.
Speakers: Sabina Dewan, Sandhya Ramachandran Arun, Julie Delahanty, Anurag Behar
Immediate reforms are required: competition policy, antitrust, tax, labour law, universal social protection, and skill systems to mitigate AI‑driven disruption (Sabina)
Policy, leadership, and governance frameworks are essential to steer AI benefits and guard against risks (Sandhya)
Strong regulatory and labour institutions, backed by research and tools like the Global Index on Responsible AI, enable evidence‑based policy (Julie)
Governments must design policies that balance innovation with labour‑market protection, using human‑centred AI design (Anurag)
The panel concurs that waiting is not an option and that coordinated regulatory, tax, competition, labour-law and social-protection measures, together with strong governance, are required to mitigate AI-driven disruption [320-334][176-183][286-294][130-138][112-118].
POLICY CONTEXT (KNOWLEDGE BASE)
Calls for broad reforms echo the AI Impact Summit 2026 which urged proactive, people-centred policy, lifelong learning and social protection [S40]; the ILO’s agenda on AI and work also highlights institutional reforms and inclusive governance [S41]; and the UN High Commissioner emphasizes human-rights-centric frameworks [S54].
Evidence‑based monitoring and data collection are crucial for informed AI labour policies.
Speakers: Julie Delahanty, Sabina Dewan, Anurag Behar
Past technological shifts caused both job loss and new opportunities; systematic data collection is needed to understand AI’s labour effects (Julie)
We do not have the luxury to wait for empirical evidence before acting on AI’s impact (Sabina)
What lessons are being learned across countries to create AI opportunities without deepening inequality? (Anurag)
All three stress the need for robust, country-level data and research to guide policy, warning against inaction while evidence is gathered [233-242][176-183][227-232].
POLICY CONTEXT (KNOWLEDGE BASE)
Evidence-based monitoring is highlighted in the OECD AI Incidents Monitor initiative [S52] and in the AI Policy Research Roadmap which stresses sandbox environments and multi-stakeholder data sharing [S51]; the AI Impact Summit also identified transparent data collection as essential for policy design [S50].
AI intensifies inequality and precarity; a human‑centred approach is required to protect vulnerable workers.
Speakers: Sabina Dewan, Julie Delahanty, Sandhya Ramachandran Arun
AI intensifies inequality, surveillance, and even cognitive decline, demanding urgent precautionary measures (Sabina)
A human‑centred, co‑creation approach is required; responsible design can safeguard workers (Julie)
Human creativity, wisdom, and foresight are indispensable; AI will be guided by human policy and governance (Sandhya)
The speakers agree that AI risks exacerbating existing inequities and that placing humans at the centre of AI design and governance is essential to mitigate those risks [10-13][313-316][132-138][280-287].
POLICY CONTEXT (KNOWLEDGE BASE)
Human-centred approaches are advocated by the UN High Commissioner for Human Rights calling for rights-based AI governance [S54] and by the IGF’s human-rights-focused AI governance framework [S55]; the AI Impact Summit likewise placed people at the centre of AI strategy to mitigate inequality [S40]; and the ILO stresses protecting vulnerable workers in the age of AI [S41].
Similar Viewpoints
Both recognise that AI will generate efficiency gains that reshape jobs, but stress that human oversight and new skill sets will be needed rather than wholesale job loss [8][86-87][53-55][56-58].
Speakers: Sabina Dewan, Sandhya Ramachandran Arun
AI will drive 30‑40% productivity gains that translate into large workforce cuts, with evidence of layoffs and algorithmic gig‑economy management (Sabina)
AI reshapes job descriptions; junior developers become “managers of AI” rather than being eliminated, and many functions remain consultative (Sandhya)
Both stress that AI’s impact is mixed and that systematic evidence is needed to guide policy responses [34-38][127-129][136-138].
Speakers: Anurag Behar, Julie Delahanty
AI’s monetisation will come from labour reduction or new products, so both job destruction and creation are expected (Anurag)
Past technological shifts caused both job loss and new opportunities; systematic data collection is needed to understand AI’s labour effects (Julie)
Both argue that robust governance structures and institutional capacity are prerequisites for responsible AI deployment [286-294][130-138].
Speakers: Sandhya Ramachandran Arun, Julie Delahanty
Policy, leadership, and governance frameworks are essential to steer AI benefits and guard against risks (Sandhya)
Strong regulatory and labour institutions, backed by research and tools like the Global Index on Responsible AI, enable evidence‑based policy (Julie)
Unexpected Consensus
Both a tech‑industry representative (Sandhya) and a labour‑market critic (Sabina) agree that waiting for AI impacts to fully materialise is not an option.
Speakers: Sabina Dewan, Sandhya Ramachandran Arun
We need to act now; we don’t have the luxury to wait for empirical evidence (Sabina)
Watching and waiting is certainly not an option; we must act now (Sandhya)
Despite their differing tones, with Sabina cautionary and Sandhya optimistic about technology, both stress immediate action rather than passive observation [176-183][355-358].
POLICY CONTEXT (KNOWLEDGE BASE)
The urgency expressed aligns with the statement from the AI for Social Empowerment panel that ‘watching and waiting is certainly not an option’ [S46] and with the AI Impact Summit’s call for immediate, coherent policy responses [S40].
Overall Assessment

The panel shows strong convergence on four core themes: (1) AI will both displace and create jobs, necessitating reskilling; (2) comprehensive policy and governance reforms are essential; (3) evidence‑based monitoring is critical; (4) AI risks exacerbating inequality, requiring human‑centred approaches. These shared positions cut across the digital economy, capacity development, AI governance, and human‑rights domains.

High consensus – most speakers align on the need for proactive, evidence‑driven policy and human‑centred governance to manage AI’s labour impacts, indicating a unified call for coordinated action across governments, industry and research communities.

Differences
Different Viewpoints
Magnitude of AI‑driven job displacement
Speakers: Sabina Dewan, Sandhya Ramachandran Arun
AI will drive 30‑40% productivity gains that translate into large workforce cuts, with evidence of layoffs and algorithmic gig‑economy management (Sabina)
AI reshapes job descriptions; junior developers become “managers of AI” rather than being eliminated, and many functions remain consultative (Sandhya)
Sabina cites private research showing 30-40% time savings that translate into workforce cuts [8], points to recent mass layoffs in big-tech firms [152-154], and highlights algorithmic gig-economy management that lacks redress mechanisms [160-164]. Sandhya counters that while coding can be handed to AI, humans must still oversee design, architecture and security, turning junior developers into managers of AI; she adds that most of Wipro’s work is consultative, so large-scale displacement is limited [86-87][60-62].
POLICY CONTEXT (KNOWLEDGE BASE)
National assessments such as Malaysia’s projection of up to 600,000 displaced workers illustrate the scale of potential displacement [S45]; broader analyses warn of cross-cutting massive job losses across sectors [S44].
Impact of coding efficiency on IT employment versus role transformation
Speakers: Anurag Behar, Sandhya Ramachandran Arun
Coding efficiency raises concerns for IT employment; broader industry impacts require new training pathways (Anurag)
AI reshapes job descriptions; junior developers become “managers of AI” rather than being eliminated, and many functions remain consultative (Sandhya)
Anurag asks whether AI tools that can perform 50-70% of coding tasks will inevitably lead to IT job losses [66-70]. Sandhya replies that coding can be fully automated but human oversight of design, architecture and security remains essential, turning junior developers into managers of AI, and that consultative services are largely unaffected [84-87][60-62].
POLICY CONTEXT (KNOWLEDGE BASE)
Discussions on role redesign in India highlight that AI-driven coding tools lead to organizational role changes rather than simple headcount reductions [S49]; the Intelligent Coworker report also notes transformation of IT roles rather than pure displacement [S35].
Urgency of policy reforms versus evidence‑based regulatory approach
Speakers: Sabina Dewan, Julie Delahanty
Immediate reforms are required: competition policy, antitrust, tax, labour law, universal social protection, and skill systems to mitigate AI‑driven disruption (Sabina)
Strong regulatory and labour institutions, backed by research and tools like the Global Index on Responsible AI, enable evidence‑based policy (Julie)
Sabina calls for swift, comprehensive reforms across competition, tax, labour, and social protection, warning that waiting for more empirical evidence will be too late [176-183][320-334]. Julie stresses that robust institutions and systematic data (e.g., the Global Index on Responsible AI covering 138 countries) are needed to craft evidence-based policies, noting that standardized regulation does not yet exist [233-242][244-246].
POLICY CONTEXT (KNOWLEDGE BASE)
While some actors call for urgent, comprehensive reforms [S46], other analyses stress the need for evidence-based, sandbox-tested regulation and multi-stakeholder roadmaps [S51]; the governance debate at the AI for Democracy workshop reflects this tension [S56].
Unexpected Differences
Risk perception: cognitive decline and broader societal harms versus confidence in human‑centric governance
Speakers: Sabina Dewan, Sandhya Ramachandran Arun
AI intensifies inequality, surveillance, and even cognitive decline, demanding urgent precautionary measures (Sabina)
Human creativity, wisdom, and foresight are indispensable; AI will be guided by human policy and governance (Sandhya)
Sabina links AI deployment to emerging cognitive decline among youth, higher rates of depression and anxiety, and calls for urgent precautionary measures [313-316]. Sandhya, while acknowledging AI’s transformative power, stresses that human creativity, wisdom and foresight will remain central and that appropriate policy and governance can keep humanity at the core, without highlighting health-related harms. The contrast between a health-focused alarm and a governance-focused optimism was not anticipated given both speakers’ expertise.
POLICY CONTEXT (KNOWLEDGE BASE)
The Global Risks Landscape identifies declining empathy and cognitive impacts as societal threats [S58]; conversely, the UN High Commissioner and AI governance guidelines emphasize human-centric safeguards to prevent such harms [S54][S59].
Overall Assessment

The discussion revealed clear divergences on how severe AI‑driven job displacement will be, with Sabina emphasizing large‑scale cuts and Sandhya portraying role transformation rather than elimination. There is also tension between calls for immediate, sweeping policy reforms and the need for evidence‑based regulation. While all participants concur on the necessity of governance, they differ on urgency, scope and the evidentiary basis of action. These disagreements highlight the challenge of aligning policy responses with differing risk assessments and stakeholder perspectives.

Moderate to high disagreement: the participants share a common goal of responsible AI governance but diverge substantially on the perceived magnitude of labour impacts and the pace and nature of policy interventions. This suggests that consensus‑building will require bridging gaps between alarmist and optimistic viewpoints, and between urgent reformist and evidence‑driven policy approaches.

Partial Agreements
All three agree that policy, governance and institutional frameworks are needed to manage AI’s impact on labour markets. Sabina pushes for urgent, sweeping reforms; Sandhya emphasizes the need for policy, leadership and guardrails; Julie highlights the role of evidence‑based regulation and data tools. Their shared goal is responsible AI governance, but they differ on the speed and evidentiary basis of action [176-183][320-334][286-294][233-242][244-246].
Speakers: Sabina Dewan, Sandhya Ramachandran Arun, Julie Delahanty
Immediate reforms are required: competition policy, antitrust, tax, labour law, universal social protection, and skill systems to mitigate AI‑driven disruption (Sabina)
Policy, leadership, and governance frameworks are essential to steer AI benefits and guard against risks (Sandhya)
Strong regulatory and labour institutions, backed by research and tools like the Global Index on Responsible AI, enable evidence‑based policy (Julie)
Takeaways
Key takeaways
AI is delivering 30‑40% productivity gains that are already translating into large workforce reductions and layoffs, especially in the tech sector.
Job roles are being reshaped rather than simply eliminated; junior developers become managers of AI tools, and many functions remain consultative, strategic, or supervisory.
Both job destruction and creation are expected; monetisation of AI will come from labour reduction and new products and services.
Urgent governance is required: competition policy, antitrust, tax reforms, stronger labour laws, universal social protection and robust skill‑development systems.
Human‑centred, co‑created AI design is essential to mitigate inequality, surveillance, and algorithmic gig‑economy exploitation.
Evidence‑based policy depends on systematic data collection (e.g., the Global Index on Responsible AI, AI4D research) to track labour‑market impacts.
Current skill levels, particularly in India’s informal sector, are very low; massive upskilling is unrealistic without fundamental education reform.
AI exacerbates existing precarity, inequality, and even cognitive‑health concerns, making immediate action preferable to waiting.
Panelists agree that waiting for perfect evidence is not an option; proactive, coordinated action across government, industry, and academia is needed.
Resolutions and action items
Leverage the Global Index on Responsible AI to provide country‑level evidence for labour‑rights and AI governance reforms.
Governments should strengthen regulatory and labour institutions, invest in research ecosystems, and design policies that balance innovation with worker protection.
Companies (e.g., Wipro) to continue creating calibrated learning modules, role personas, and reskilling pathways that emphasize learnability, adaptability, and AI oversight.
Develop universal social‑protection mechanisms (healthcare, unemployment benefits, consumption smoothing) to support workers displaced by AI.
Implement human‑centred AI co‑creation processes involving workers, communities, and employers to ensure AI augments rather than replaces human labour.
Accelerate data collection on AI’s labour impacts (household, firm‑level, and worker‑level surveys) to inform skill‑development and social‑protection programmes.
Unresolved issues
The precise magnitude, timing, and sector‑specific breakdown of AI‑driven job displacement versus job creation remain unknown.
Effective strategies for upskilling India’s large informal and low‑skill workforce to meet AI‑driven demand are not defined.
The specific design of competition, antitrust, and tax policies to curb capital concentration and protect labour has not been detailed.
Legal frameworks to address algorithmic management and redress in the gig economy are still lacking.
How to mitigate AI‑related cognitive decline, mental‑health impacts, and broader effects on societal well‑being has not been resolved.
Balancing rapid AI innovation with regulation, without stifling growth, remains an open policy question.
Suggested compromises
Acknowledge that AI will cause both job losses and new opportunities; focus on transition policies rather than a purely doom‑or‑optimism narrative.
Use ‘doom‑and‑gloom’ framing to motivate action while avoiding paralysis: encourage proactive policy without fostering panic.
Combine tech‑sector optimism (e.g., new AI‑manager roles) with precautionary labour safeguards (social protection, skill training).
Adopt incremental, evidence‑based regulations (human‑centred design, co‑creation) that allow innovation to continue while protecting workers.
Thought Provoking Comments
When you talk to companies privately, they will own up to anywhere between 30% to 40% time‑saving, which then translates into significant workforce cuts. AI systems are enabling surveillance, influencing who gets work, and grossly exacerbating inequality.
She reveals hidden, empirical data that companies acknowledge large productivity gains that likely lead to layoffs, and links AI to broader societal harms, shifting the debate from speculative to evidence‑based urgency.
Sets a tone of concern and urgency, prompting the panel to focus on regulation and social institutions rather than treating AI as a neutral technological advance.
Speaker: Sabina Dewan
What kind of jobs are going to get displaced, destroyed? And what kind of jobs are going to get created? What is the underlying dynamic because of which these jobs will be created and the jobs will be destroyed?
A direct, commonsense framing that forces the discussion from abstract AI hype to concrete labor market dynamics.
Steers the conversation toward concrete examples (coding, marketing, finance) and elicits detailed responses from Sandhya, opening the floor for nuanced analysis of job transformation versus displacement.
Speaker: Anurag Behar
The role of a junior developer becomes that of a little manager of AI, overseeing design, architecture, security, rather than being displaced.
She reframes the narrative of job loss into role evolution, highlighting how AI can augment rather than replace certain skill sets.
Introduces a nuanced perspective that tempers the alarmist view, prompting other panelists to consider upskilling and role redesign rather than outright job elimination.
Speaker: Sandhya Ramachandran Arun
Strong institutions—regulatory, labor, and research ecosystems—are essential to understand where job losses and biases happen, and to co‑create AI with workers, communities, and employers.
She shifts the focus from technology itself to the institutional capacity needed for responsible governance, emphasizing co‑creation and evidence‑based policy.
Broadens the discussion to include governance frameworks, leading to later mentions of the Global Index on Responsible AI and the need for data‑driven policy making.
Speaker: Julie Delahanty
We need to go beyond the quantity of jobs and look at the impact on the quality of work—e.g., algorithmic management in the gig economy where workers have no redressal mechanism.
Expands the labor conversation to include precarious work and platform economies, highlighting how AI changes power dynamics beyond simple headcount.
Redirects attention to the gig economy and platform governance, prompting Anurag and others to discuss informal sector vulnerabilities, especially in India.
Speaker: Sabina Dewan
In India, more than 90% of employment is informal; losing even a small share of formal jobs has cascading effects on the broader economy and deepens precarity.
She contextualizes the AI impact within the Indian labor market, challenging the assumption that AI effects are limited to the formal sector.
Shifts the conversation to the Global South, highlighting systemic risks and prompting a discussion on universal social protection and skill development.
Speaker: Sabina Dewan
The Global Index on Responsible AI provides country‑level, rights‑based data on labor protection and the right to work for 138 countries, helping governments design evidence‑based policies.
Offers a concrete, actionable tool for policymakers, moving the dialogue from abstract recommendations to tangible resources.
Creates a bridge between research and policy, reinforcing the earlier call for data‑driven governance and influencing later remarks about the need for evidence.
Speaker: Julie Delahanty
AI is attacking the very foundation of education; teachers and students are outsourcing their thinking, leading to cognitive decline and forcing a return to paper‑and‑pencil exams.
Introduces a new domain—education—where AI’s impact may be even more profound, linking cognitive health, assessment integrity, and societal outcomes.
Expands the scope of the discussion beyond labor markets, prompting reflections on the broader societal implications of AI and reinforcing the urgency for comprehensive governance.
Speaker: Anurag Behar (as education lead)
Overall Assessment

The discussion was shaped by a series of pivotal interventions that moved it from a generic hype‑centric talk to a multi‑dimensional analysis of AI’s labor, institutional, and societal impacts. Sabina’s evidence‑based alarm and focus on job quality set a critical backdrop, while Anurag’s probing questions forced concrete examinations of job displacement and creation. Sandhya’s reframing of roles and Julie’s emphasis on strong institutions and data‑driven tools introduced nuance and actionable pathways. Subsequent comments about the informal sector, gig economy, and education broadened the lens to include vulnerable populations and systemic risks. Collectively, these comments redirected the conversation toward urgent, evidence‑based policy design, highlighting the need for governance, upskilling, and protective social architectures across both developed and developing contexts.

Follow-up Questions
What specific policies should governments implement to mitigate AI-induced labor market disruptions and ensure a smooth transition for workers?
Understanding concrete policy measures is crucial for minimizing job losses and protecting workers as AI transforms the labor market.
Speaker: Anurag Behar (to Julie Delahanty)
How can we develop robust, comparable metrics (e.g., a global index) to monitor AI’s impact on labor rights and job quality across countries?
A standardized data set would help policymakers identify gaps, benchmark progress, and design evidence‑based interventions.
Speaker: Julie Delahanty
What mechanisms are needed to provide universal social protection for workers displaced by AI, especially in economies with large informal sectors like India?
Without safety nets, AI‑driven layoffs could exacerbate precarity and inequality among vulnerable populations.
Speaker: Sabina Dewan
In what ways does AI affect job quality—not just quantity—particularly in gig and platform economies where algorithmic management is prevalent?
Beyond headline job counts, AI may alter working conditions, autonomy, and fairness, requiring deeper investigation.
Speaker: Sabina Dewan
What is the relationship between reported cognitive decline among young people and their susceptibility to AI‑driven job displacement?
If AI replaces tasks that require higher cognition, declining cognitive abilities could increase vulnerability, warranting research.
Speaker: Sabina Dewan
How can education and skill development systems be redesigned to equip low‑literacy and remote populations for AI‑augmented work?
Effective upskilling is essential to ensure inclusive participation in the AI economy, especially where formal education is weak.
Speaker: Sabina Dewan
What role should competition policy and antitrust regulation play in preventing excessive concentration of AI benefits among a few large firms?
Concentration could worsen inequality; antitrust tools may be needed to maintain a competitive, inclusive market.
Speaker: Sabina Dewan
How can tax policy be leveraged to fund social protections, skill development, and other interventions needed in the AI era?
Identifying fiscal levers is important for financing the systemic changes required to mitigate AI’s disruptive effects.
Speaker: Sabina Dewan
What best practices exist for co‑creating AI systems with workers, communities, and employers to enhance job quality and protect rights?
Co‑creation can ensure AI aligns with human‑centred values and reduces adverse labor outcomes.
Speaker: Julie Delahanty
What are effective reskilling and upskilling strategies to transform junior developers into ‘managers of AI’ and similar hybrid roles?
Understanding how to redesign roles and training pathways is key for workforce adaptation to AI automation.
Speaker: Sandhya Ramachandran Arun
How does AI adoption differ across sectors (e.g., healthcare, finance, marketing) in terms of job displacement versus augmentation, and what sector‑specific research is needed?
Sector‑level insights can guide targeted policies and interventions tailored to distinct industry dynamics.
Speaker: Sandhya Ramachandran Arun
What governance frameworks are needed to embed human wisdom, empathy, and ethical considerations into AI deployment across industries?
Ensuring AI systems reflect human values is essential to prevent harmful outcomes and maintain public trust.
Speaker: Sandhya Ramachandran Arun
How can we systematically track and evaluate the long‑term effects of AI on workers’ mental health, productivity, and overall well‑being?
AI‑induced stress and cognitive changes could have broad societal impacts; longitudinal studies are required.
Speaker: Sabina Dewan
What lessons can be drawn from countries that have successfully integrated AI while minimizing inequality, and how can these be adapted to other contexts?
Cross‑country learning can inform policy design and avoid repeating mistakes in AI rollout.
Speaker: Julie Delahanty

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.