AI for Social Empowerment: Driving Change and Inclusion
20 Feb 2026 11:00h - 12:00h
Summary
The panel opened by highlighting that the impact of AI on employment is still unfolding and that companies publicly downplay potential job disruptions while privately acknowledging 30-40 % productivity gains that could translate into workforce cuts [1-2][5-8]. They argued that AI is already amplifying inequality, concentrating capital in a few tech giants and shrinking labor’s share of income, which makes the question of job impact central to any discussion of social empowerment [10-13]. Sabina warned that waiting for definitive evidence would be too late and called for immediate regulatory and institutional action to manage the inevitable evolution of AI [15-16][21].
Anurag asked whether AI investment will be monetized through labor reduction or new products and what kinds of jobs will be lost or created [31-38]. Sandhya responded that while coding can be automated, the remaining work requires human oversight of design, architecture and security, turning junior developers into “managers of AI” rather than being displaced [74-88]. She added that in marketing, finance and healthcare AI handles routine processing, but strategic planning, interpretation and decision-making remain human tasks, suggesting a shift rather than wholesale job loss [93-98][104-106].
Julie emphasized that effective AI governance depends on strong labor and regulatory institutions, co-creation with workers, and robust research to track real-world labor impacts [129-138]. She pointed to the Global Index on Responsible AI, which provides country-level data on labor rights and helps policymakers design evidence-based regulations, skills programs and social protections [233-242].
Sabina presented concrete evidence of recent layoffs in large tech firms and warned that efficiency gains are already causing job cuts, especially in the gig economy where algorithmic management lacks redress mechanisms [152-160][166-170]. She argued that in India, where only about 10 % of workers hold formal jobs, AI-driven precarity threatens a large share of the workforce and calls for urgent reforms in competition policy, antitrust, taxation, labor law, social protection and skill development [197-206][320-334]. She stressed that without swift action, the combined pressures of AI, climate change and economic shocks could deepen inequality and destabilize economies [211-218][176-184].
Sandhya concurred that waiting is not an option and called for proactive policy, leadership and continuous reskilling to adapt to AI’s rapid evolution [355-363]. The panel concluded that AI will reshape work profoundly, and coordinated, human-centric regulation and investment in skills and social safety nets are essential to mitigate risks while harnessing benefits [339-346][400-408].
Keypoints
Major discussion points
– AI will generate significant productivity gains that are already translating into workforce reductions and broader inequality.
Sabina notes that companies privately admit “30 % to 40 % time-saving…which then translates into significant workforce cuts” [8] and that AI “enables surveillance…exacerbating inequality” [9-10]. She points to concrete evidence of layoffs in large tech firms [152-154] and highlights the gig-economy’s algorithmic management as a new labor-rights threat [160-164]. Anurag frames the investment-to-productivity link as “productivity…comes from labor reduction or new products” [31-36].
– There is an urgent need for proactive regulation, strong institutions, and evidence-based policy to mitigate labor market disruption.
Julie stresses that “without strong institutions…regulation of what’s happening in the labor market” is impossible [129-131] and that a “human-centric” approach requires co-creation with workers [132-135]. She cites the AI4D research program and the Global Index on Responsible AI as tools that provide the evidence governments need [233-242]. Sabina adds that competition policy, antitrust, tax, labor law reforms, and universal social protection must be acted on now, not after more data [320-326].
– Reskilling and redesign of work are central, but current education systems are ill-prepared.
Sandhya describes how Wipro has built “role personas” and specific learning modules for AI-augmented roles, with “COEs inside engineering colleges” [56-58][90-91]. She also notes that junior developers will become “managers of AI” rather than being displaced [84-87]. Sabina counters that only 4.1 % of India’s labor force reports formal skills, making large-scale AI training unrealistic [326-332].
– The panel reflects divergent perspectives: tech optimism versus labor-market caution, compounded by disclosed conflicts of interest.
Sandhya argues that “we are not seeing a displacement” because most work is consultative and AI merely improves efficiency [60-63]. Sabina challenges this, calling the contention “largely untrue” and warning of “precariat” growth [1-4][300-306]. Anurag reveals his personal conflict: his foundation owns 70 % of Wipro, highlighting the tension between corporate and social-justice agendas [254-257].
– Broader societal implications extend beyond jobs to health, precarity, and democratic governance.
Sabina links AI-driven gig-platform control to loss of redress [162-164] and warns of rising “precariat” (58 % self-employment, no safety nets) [305-307]. She also raises emerging cognitive-decline trends among youth and the risk of “outsourcing thinking” in education [313-317][380-383]. Sandhya echoes the need for human-centric policy to keep humanity at the centre of technological change [281-286].
Overall purpose / goal of the discussion
The panel was convened to assess how rapidly advancing AI will reshape labor markets, to contrast industry optimism with labor-market research, and to identify concrete policy, regulatory, and educational actions that can harness AI’s benefits while preventing widening inequality and job displacement.
Overall tone and its evolution
The conversation begins with a cautious, alarmist tone (Sabina’s warning that the impact “is yet to unfold” [1-2] and that companies hide job-loss figures [5-8]). It then shifts to a more optimistic, technocratic tone as Sandhya describes reskilling initiatives and the re-definition of developer roles [84-87]. Julie introduces a balanced, evidence-driven tone, emphasizing institutional capacity and research tools [129-135][233-242]. As the dialogue progresses, the tone becomes urgent and prescriptive, with repeated calls for immediate regulatory reforms and social protection [320-326][355-363]. The discussion closes on a reflective yet resolute tone, acknowledging the seriousness of AI’s societal impact while affirming that human-centric policies can steer outcomes [281-286][398-406].
Speakers
– Sabina Dewan – Expertise: Labor market impacts of AI; Role/Title: Researcher on labor markets, associated with the Just Jobs Network (panelist) [S1]
– Julie Delahanty – Expertise: AI governance, development policy; Role/Title: President, IDRC Canada [S2]
– Sandhya Ramachandran Arun – Expertise: Technology and AI implementation; Role/Title: Chief Technology Officer, Wipro Limited [S5]
– Anurag Behar – Expertise: Education, labor market research; Role/Title: Chief Executive Officer, Azim Premji Foundation; Moderator of the discussion; Oversees three universities and works with >100,000 teachers [S7]
Additional speakers:
– None identified beyond the listed panelists.
1. Opening – labour-market uncertainty – Sabina Dewan opens by warning that the impact of artificial intelligence on jobs “is yet to unfold” and that firms publicly deny any threat while privately admitting 30 %-40 % time-saving that translates into workforce cuts [5-8]. She stresses that AI is not merely a set of algorithms but a socio-political system used for social, political and economic engineering [5-9]. Sabina cites AI-driven surveillance, hiring influence, and the concentration of capital in a few tech giants such as NVIDIA, whose market cap now exceeds $5 trillion [9-13]. She asks whether we can afford to wait for more evidence or must act now with regulation and new social institutions [15-21].
2. Framing the investment debate – Moderator Anurag Behar notes the massive flow of capital into AI and asks how that investment will be monetised – through productivity-driven labour reduction, new products and services, or a mix of both [31-36]. He then directs his first substantive question to Sandhya Ramachandran Arun about which jobs are likely to be displaced, which will be created, and the dynamics that drive those changes [40-44].
3. Technology-industry view – Sandhya Ramachandran Arun explains that AI is a “very huge impact…as a disruptor,” forcing a rethink of job creation, talent reskilling, and hiring criteria toward learnability, communication and adaptability [48-54]. Wipro has built role-personas, specific learning modules, and Centres of Excellence inside engineering colleges to upskill every employee-from the board to the newest hire-through calibrated AI-augmented learning [84-86][90-91]. She notes that AI can now generate 50 %-70 % of code, but success still depends on human oversight of design, architecture and security, turning junior developers into “AI-managers” rather than eliminating their positions [84-88].
Sector-specific impacts – In marketing, AI produces high-quality visual, audio and video content, while strategic planning and ROI assessment remain human responsibilities [93-99]. In finance, AI handles transaction processing, but humans provide the wisdom needed to interpret data and align outcomes with human values [104-106]. In healthcare, AI augments clinicians and improves fraud detection [111-115]. Because Wipro’s model is consultative, it has not yet seen large-scale displacement [60-63]. Sandhya likens the technological trajectory to the shift from horse carriages to automobiles and the carbon emissions that followed, arguing that just as societies introduced guardrails for carbon, AI will require human-centred guardrails akin to those for nuclear energy [200-202].
4. Governance perspective – Julie Delahanty stresses that effective AI governance cannot rely on technology alone; it needs strong labour-market institutions, regulatory bodies and a vibrant research ecosystem [129-131]. She highlights the AI4D programme’s human-centric co-creation with workers, employers and communities and its sub-Saharan Africa research programme that collects household, firm-level and worker-level data to understand real-world AI impacts [132-135]. Julie also points to the Global Index on Responsible AI (covering 138 countries with a dedicated labour-rights dimension) as a tool for evidence-based policy [233-242]. She warns against codifying regulations without sufficient evidence, urging a balance between innovation and safety [241-246].
5. Labour-market evidence and policy urgency – Sabina returns with concrete evidence: major tech firms have already laid off thousands of workers, publicly attributing cuts to macro-economic factors while AI-driven efficiency is a hidden driver [152-158]. She flags algorithmic management in the gig economy as a new labour-rights problem because workers can be removed from platforms with no avenue for redress [160-166]. Sabina argues that waiting for more data would be “way too late” and calls for urgent reforms of competition policy, antitrust law, and tax policy (including wealth and transaction taxes), alongside labour law, universal social protection and massive investment in skill-development systems [320-334][326-332][173-176][180-184].
India-specific context – Only about 10 % of Indian employment is formal, so loss of formal jobs would cascade into the informal economy [197-206]; 58 % of Indian workers are self-employed with no health insurance or safety net [305-311]. Emerging research shows cognitive decline, depression and anxiety among the current generation of young people, which could increase their replaceability by machines [313-317].
6. Conflict of interest disclosure – Anurag reveals that the Azim Premji Foundation, which he leads, owns roughly 70 % of Wipro, creating a personal conflict between tech-sector interests and his mandate to protect the most vulnerable [254-257].
7. Education-sector alarm – Anurag warns that AI is “outsourcing thinking” for teachers and students, leading to cognitive decline and forcing universities to revert to paper-and-pencil examinations because AI-generated work is hard to assess [378-387][393-398]. He likens AI’s societal risk to nuclear technology, emphasizing that unlike nuclear hazards, AI permeates every individual’s daily life, making governance far more complex [401-406].
8. Consensus and concluding remarks – All panelists agree that AI will reshape work rather than simply eliminate it, creating new “AI-manager” roles while preserving functions that require human creativity, empathy and wisdom [281-286][300-306][355-363]. Immediate, evidence-based regulation is essential; waiting risks deepening inequality and labour-market precarity [15-16][173-176][355-363]. Julie’s Future of Work project and the Future Works Collective (funded by IDRC) are presented as platforms for re-thinking ways of working [447-452]. Sandhya stresses that “watching and waiting is not an option,” calling for proactive leadership, policy embedded in platforms, and continuous re-imagining of work and training [355-363]. The panel closes with a shared commitment to a human-centred, proactive governance framework supported by strong institutions, robust data and coordinated global action.
say, you know, it’s yet to unfold. We don’t know what the impact is and it’s yet to unfold. I believe that that contention is actually largely untrue. And let me tell you why. When you talk to companies privately, publicly they will not own up to the potential job disruptions as a result of AI. And partly that is because many of the big companies actually are known to be formal job creators, right? And that is a very important part of their image and their contribution to economies and societies. But when you talk to them privately, in India especially, our research shows that they will own up to anywhere between 30 % to 40 % time saving, right, productivity gains, which then translates into significant workforce cuts.
We already have plenty of empirical evidence suggesting that AI systems are enabling surveillance, they’re influencing decisions about who gets work, when, and what entitlements people have access to. We also know that AI systems are grossly exacerbating inequality. If you just look at the market caps of some of the top technology companies, you know, NVIDIA’s $5 trillion market cap, right? So there’s a massive accumulation of capital: capital’s share is growing and labor’s share of income is getting smaller and smaller. So I guess, you know, in this discussion that talks about social empowerment, a key question is the question of the impact on jobs. And the question that I, you know, put out there is: even if you buy the idea that we don’t know what the impact is going to be.
Can we afford to just wait, right? Or do we need to take every action possible in terms of regulations, in terms of building social institutions, in terms of really working to build systems that can manage this inevitable evolution of AI, whether we like it or not? The last thing I’ll say is just, you know, yes, there have been technologies before. Yes, they’ve had their own forms of inclusion and exclusion. But at the end of the day, this is the first time where you have the very pioneers of that technology, Geoffrey Hinton, Stuart Russell, Dario Amodei, the very pioneers of the technology themselves, ringing alarm bells. And would we not be wise to heed them?
So with that, I hope, provocative context setting, I am really grateful, on behalf of the Just Jobs Network, again with support from IDRC and FCDO, to welcome our really esteemed panelists. Mr. Anurag Behar, who is the chief executive officer of the Azim Premji Foundation, has very graciously agreed to chair this conversation, moderate the discussion. We have Dr. Julie Delahanty, who is the president of IDRC Canada. Thank you, Julie. And Ms. Sandhya Ramachandran Arun, who is the chief technology officer of Wipro Limited. Thank you so much for being here, Sandhya. So, Anurag, over to you.
Thank you. Thank you, Sabina. Good evening, everybody. There’s so much investment going into AI. Why is so much investment going into AI? We are in the fifth day of the AI summit. So this is like the 42nd kilometer of a marathon. Right? At this stage, such investment has to be justified by some monetization. And where is that monetization going to come from? It’s either going to come from productivity, which comes from labor reduction, or it is going to come from new products and services, or a combination of both. That’s where it’s going to come from. Right? We will talk more about that. At this moment, my job is easy.
I’m going to just ask Sandhya, because she’s the representative of the technology world here really, that which way is this technology headed? And in very simple terms, what is she seeing its implications on jobs? I mean, what kind of jobs are going to get displaced, destroyed? And what kind of jobs are going to get created? and what’s the underlying dynamic because of which these jobs will be created and the jobs will be destroyed. So how does she see it in the world of technology? Let’s start with that.
Sure, thank you so much. Thanks, Anurag, for the question. So as far as the tech industry is concerned, we are really witnessing a very huge impact of the AI evolution as a disruptor. We’ve had to revisit how job roles are created. We’ve had to revisit how talent has to be reskilled. And we have also revisited the responsibility, not just in terms of security and safety, but also in terms of what it means to our colleagues and our hiring. I think initially there was a huge amount of fear that we would not hire from colleges, which has now been dispelled, because Wipro continues to hire from colleges, and so do our competitors. But the criteria for hiring have shifted to a more nuanced, a more calibrated way of looking at learnability, looking at whether a person communicates technical ideas well, looking at whether a person is adaptable.
Because AI is a technology that is changing as we speak. So no one can claim to be an expert in AI and remain that way for the next five days, possibly, because things are changing every day. With regard to our own talent, we have created role personas, and we have created very specific learning modules on how the role changes with AI. And everybody from the board to the CEO down to the youngest employee is going through a very calibrated learning process. And there is also a very calibrated way in which services and ways of working are changing. So to that extent, we see a change. We are not seeing a displacement, because most of the work that we do is consultative in nature, in spite of the market valuation erosion that we saw some time back because of news from Anthropic and Palantir.
The insiders in the technology world were already aware of the transformative nature of these solutions coming up. And we have already been using these solutions significantly for over a year. So from a market sentiment point of view, possibly there was an erosion, but from a technology impact perspective, we have been bracing ourselves for the change and our journey of transformation continues.
I just have a follow-up on that, and then I’ll move to Julie. I’ll put it very, I mean, let’s say, a very, very simple, commonsensical question. Which is that we are hearing about these tools where coding has become so much easier, right? And this is not just about Wipro, it’s about the IT industry in general. So if coding is becoming so much easier, and 50 % or 70 % of coding can be done by these AI tools, then isn’t it inevitable that IT sector jobs will be lost? Or, if there’s business or volume growth, much less hiring will happen. So that’s part one of my question. Part two is, if you move away from the IT world, and if you go to, let’s say, design and marketing, or, I mean, let’s say my world of the academy, the world of research: so much of the research assistant’s job, as those of you who have used or worked with research assistants know, is being done easily by AI.
So part one of my question: if coding is becoming so much more efficient, isn’t it inevitable that jobs will be lost, or that so much hiring will not happen, whichever way? And aside from that, in the outside world, in other industries, what is it that you’re seeing?
Sure. Let me just address the coding part of it. I think for over 15 years, the industry has been trying to explain to the outside world, as well as to the talent aspiring for careers with us, that we do not have coding roles primarily. Coding is a very small task in what a software engineer or a software developer does. There is the need to understand business outcomes. There’s a need to understand customer experience. There’s a need to understand architecture and what well-engineered code is, right? So this is not new today. This has been in existence. I mean, I’ve been doing digital transformation for the last 15 years, and we’ve been trying to change how the world thinks about these roles.
Yes, the day is here when coding can be completely handed off to an AI agent. And that is indeed a fact, right? But the fact that supports the success of this code in business is really the ability to have a human oversee the design, the engineering, the architecture, the security, as well as delegating the coding work to an agent. So the role of a junior developer really becomes that of a little manager of AI, as opposed to saying, you’re displacing my job. The person’s actually going up if the person really is aware and aligns to what the organization needs in terms of figuring out what is required. And those are the trainings that are happening.
That’s what’s happening in terms of selection. We now have COEs inside engineering colleges, where we are talking to universities about this as well. And what about other industries, whatever you’re seeing? So in the other industries we work with, there is variation. If you think about marketing, there’s a lot of work that gets offloaded. The strategy, the planning, the oversight on execution, the ROI on marketing still remain strategic-thinking jobs that stay with humans. But you can generate a lot of good-quality visual, audio, and video content using AI today, and it’s probably making marketing a whole lot more efficient. Now, if you take finance, for example, again, a lot of processing gets taken over by AI, but it still needs a human to bring in wisdom in terms of how the data gets interpreted, how decisions are being made, and also to make sure that the AI aligns to human values in some sense.
So those kinds of changes are happening in these functions. Industry-wise, there is a lot happening that is positive, I would say, in healthcare, for example, and even in banking, where we are able to fight financial crimes a whole lot better. In healthcare, we are augmenting technicians, clinicians, and doctors with more intelligent input for decision-making. And while AI can make the decision, you don’t allow it to make
So, Sandhya, just put a pin in something that you said, and I’ll come back in the second round. You used the words human and wisdom. So just put a pin in that, and I’m going to come back to it in my second round. Julie, if Sandhya were any less optimistic than she is, she wouldn’t be representing the tech world, you know. So one should expect that she’s as optimistic as she is. But what I wanted to ask you was, from your vantage point, you’re seeing how governments are dealing with this evolving situation, not just on AI safety and, you know, all the other things, but particularly on labor markets.
So how can governments and institutions govern AI responsibly, such that any disruption in labor markets is minimized or handled well, or the transition happens well? So let’s assume this picture that Sandhya has painted, that, of course, there is some disruption going on, like she talked about in marketing and advertising. So some people are going to lose jobs there. So what should government institutions do? How does one govern this situation, such that the benefits are maximized? And I’m talking particularly about labor markets, not the other stuff, while harms are minimized.
Yeah, thank you so much. I’m going to answer that question, but the last question just made me think about two things. One was, you know, I’m old enough to remember when computers first came around in the 70s and, you know, what we thought would happen with computers and the job losses that we anticipated. And, of course, we did lose jobs. There was a lot of labor disruption related to, you know, typing pools and different kinds of ways. But at the time, even home computers, nobody could even fathom what you would do with a home computer. The conversation then was that home computers would be used to develop recipes and that you’d have recipes because homes were only where homemakers were.
People couldn’t even, there’s such gendered ideas that people just could not understand what you would do with a home computer. So I think in the same way, some of what… is going to happen with AI in the labor market, we may not be able to anticipate just yet. So just as a reminder of where we came from with other important technologies. But when it comes to governance, I think the important issue is that it’s not really only about the technology, it’s really about institutions, it’s about workers, and it’s also about research. So when it comes to institutions, really without the kind of strong institutions in countries, regulatory institutions, labor institutions, strong research ecosystems that are able to really understand what’s happening in the labor market, I think it’s very difficult to end up having a strong regulation of what’s happening in the labor market.
So just those institutions are incredibly important to understanding where job losses might be, where biases might happen, and really investing in people and institutions is something that has to go hand in hand with our thinking around technologies. Another area is around making sure that when we’re thinking about new technologies, we’re making them very human-centric. And one of the things that the AI4D program does, when we think about what we mean by human-centric, is really about making sure that we’re co-creating new technologies with workers, communities, and employers, so that we can understand how to enhance job quality, how to enhance productivity, rather than increasing inequalities or changing who benefits.
So really understanding who benefits, who’s going to face the kinds of disruptions, is really important, so that we’re not thinking about that as an afterthought, and so that we’re really shaping AI systems using that knowledge. And similarly, I think about the importance of research. I’ll just give an example from our AI4D work: we’ve done a big research program with partners in sub-Saharan Africa that’s collecting household data, firm-level data, and worker data to understand what the real-world impacts of AI are on labor markets. And it’s that kind of tracking, who’s going to benefit, who’s going to be displaced, and how tasks and skills are really changing, that’s going to allow governments to better design and think about what kind of skills development they need, what kind of social protections they need, and how to support labor rights.
So really, I think growing AI responsibly doesn’t mean avoiding innovation or avoiding change, but it’s really about shaping AI so that it, it does strengthen labor markets and supports workers and creates more opportunities.
Thanks, Julie. Thank you so much. I’ll move to Sabina. Sabina, since you are the labor market expert here amongst us, and the researcher: what is it that you see? There’s so much news, and we have had these five days of this grand summit. What is really going on? What do we understand, and what don’t we understand, in the context of the impact of AI on jobs? How do you stack it up?
So, just a little tongue-in-cheek: if we went back to the 1600s and asked ChatGPT then whether Galileo was correct, it would have said no way, right? So this technology, all the possibilities that it brings notwithstanding, is not just a technology. We can’t just look at AI as machine learning, large language models. It is a system, it is an instrument that is being utilized for social, political, and economic engineering. And my job is to look at the impact of that in labor markets. So if we limit ourselves just to the question of how many jobs will be lost, how many jobs will be gained, that’s, A, not even an appropriate question.
Two, I agree with my fellow panelists that we don’t necessarily know what sort of new possibilities there might be. But what we do know, what we already see, is also something that Sundar talked about, which is the efficiency gains. And any time there are efficiency gains, there are layoffs. And please, you do the research, right? Like, I do my job. But look at the newspapers. Companies are laying off thousands of workers already. All the big tech companies have in recent years been laying off workers. Now, sure, they can say that this is a confluence of many factors. It’s not just AI, and most of them will not just ascribe it to AI. They might ascribe it to macroeconomic conditions, to the confluence of various other forces like the pandemic or trade shocks, all of which is true.
But AI is one really big disruption that comes on top of all the other disruptions, and there’s already plenty of evidence suggesting that these disruptions are not just changing the quantity of jobs in terms of how many companies are already laying off workers. Again, I mean, we’ve also heard projections from the tech companies themselves, right, of what the possible disruptions and layoffs are going to be. But we also already have evidence of people being laid off. But then on top of that, I would say let’s look beyond just how many jobs are lost and how many jobs are gained to actually look at, I mean, take the gig economy, for example, and algorithmic management of gig workers.
That is a labor market issue. If a gig worker is wronged, the platform just, you know, they just get kicked off the platform. There’s no mechanism for redressal because it’s an algorithm that’s managing the worker. So who do you talk to? I mean, I can go on and on and on. Now, we might be separating out platforms from AI, but actually the algorithms are AI, and it’s embedded in a platform economy that is increasingly becoming the architecture for transactions, and it’s deeply troubling. And then the last thing I’ll say is, so I’ve already said… like in terms of quantity of jobs, we are already seeing evidence of layoffs, right? We’re already seeing the evidence of layoffs.
It’s just that people aren’t necessarily able to pinpoint and ascribe it to AI. That’s point number one. Two, we need to go beyond the question of quantity of jobs and also look at the impact of this technology on quality of jobs. And third, we need to really deeply think about, again, to Julie’s point, the architectures that can help mitigate some of the potential adverse effects of this technology, both on the quantity and the quality of jobs. And we don’t have the luxury to sit and wait and say, hey, let’s get the empirical evidence and then we’ll figure out what to do. That will be way too late, right? So what do we need? We need countries to think about competition policy.
We need to look very closely at tax policy. We need to look very closely at how labor laws need to change. We need to look at social protection systems. We need to look at skill systems, everything that Julie just mentioned, right? But we have to start from an urgency: this is having a huge impact already. It is likely to be even bigger, and we don’t have the luxury of time to just sit back and say, hey, we need more empirical evidence before we figure out how to mitigate the negative or potentially negative consequences. So that is what I think is really, really urgent: that everyone get on that bandwagon and say we need to create these systems, and ask for them, and do it in our work and in our advocacy.
Yeah, thank you. I’ll just follow up on that. So, Julie, pardon me for saying this. I’m saying this tongue in cheek, and all my friends and colleagues here who are not from India, please pardon me for what I’m going to say. So, you know, we Indians, why should we care about all this? And the reason I’m saying that is because just about 9 or 10% of our employment is in the formal sector. So even if there is huge disruption in labour markets, maybe 2% of these people are going to lose their jobs, right? So why should we care about all this stuff? Do you have any comments?
I do. You can be sure I have a comment about that. So if you look at the numbers, more than 90% of employment in India is informal. So Anurag’s exactly right; he knows his numbers. Essentially what you’re saying is 1 out of every 10 people stands to be potentially affected, right? That’s one way of looking at it. The other way of looking at it is we have so few good jobs, right? We have so few jobs in the formal labor market. Only one in 10 people gets to have a formal sector job. And now you’re taking that away as well, right? That stands to be disrupted. So again, we’re moving to a world of work that is much more precarious, much more insecure, much more uncertain, where workers aren’t even called workers anymore.
We call them self-employed contractors. They have no health insurance. This is the precaritization of the labor market. So not only do you have pandemic, climate change, energy transition, trade shocks, and AI disruption hitting a world of work that is already much more precarious, but you are also now moving to a place where work is becoming more and more informal. Formal jobs are being, pardon the phrase, gotten rid of in the name of efficiency gains, right? And so that’s why in India we should be really scared, because we have so few formal jobs. And then imagine if you have these jobs in the IT sector in Bangalore disappearing: all the workers that used to go to bars and restaurants and get loans to buy houses and cars, that starts to disappear, and it has cascading effects across the economy.
So, you know, the impact of this in the global south definitely goes beyond the few formal sector jobs. And it’s deeply disturbing. We need to actually understand from technologists, very clearly, how these efficiency gains are going to happen, and what different governments, and public architecture more broadly, can do to manage some of these changes. So we do need to care. Definitely need to care. We need to care urgently.
All right. So I’m going to come to Julie on this and come back to you, Sandhya, because I put a pin on something that you said, right? So, Julie, let’s assume that the alarm that Sabina is raising is at least half true, right? It’s more than half. You know, I have a deep conflict of interest, and I’ll tell you once I’m sort of done with this. So, Julie, what are the lessons that you’re seeing across countries? You’re seeing the vast landscape, right, and IDRC has a view across the continents. So what lessons can be learned from across the continents, such that AI is able to create opportunities, part of what Sandhya talked about, and doesn’t deepen inequality, or at least minimizes it?
What are you seeing across the countries? Something, some good stuff.
What is that regulation? And I think one of the tools here is the Global Index on Responsible AI, which some of you may have heard about. It’s been talked about a lot during the conference, or at least in some of the sessions that I’ve been to. Really, it’s the largest global rights-based data set on responsible AI. What is distinctive about it is that it includes a dedicated focus on labor protection and the right to work. And by providing comparable, country-level data across 138 countries, it’s helping governments to understand what they might need to do better, what some of the issues are, and how they can improve.
So really, it’s about using that information to support governments in understanding what regulation, what solution they need. It has to be based on some evidence. And I think the third big thing, which won’t be a surprise to anybody here, is that we really need good evidence; evidence really matters when it comes to these issues. Tools like the Global Index on Responsible AI allow policymakers to move beyond abstract must-fix regulation to assess how governance of AI actually affects people’s rights, their jobs, their working conditions, and support more proactive policymaking on labor regulations, skills, social protections, et cetera.
And I think equally important is that we’re still learning. There is no standardized, codified “here is the regulation that you need.” Through the kind of work that we’re doing, we’re learning what the balance is between supporting innovation and still supporting regulation and safety. And I think working together across many countries to share that kind of information is what’s going to help us find the right tools.
Thanks, Julie. I’m going to come to you, Sandhya. But I just want to disclose something to all of you; that’s my conflict of interest. You know, Sabina is a labor market researcher, and naturally I would think she’s saying what she’s saying. Julie represents IDRC, and therefore she’s saying what she’s saying. Sandhya is the tech person here, so she’s saying what she’s saying. My problem is that I’m responsible for this organization, the Azim Premji Foundation. And my problem is the following: the foundation owns about 70% of Wipro. Okay. So whatever is good for a tech company is good for us, right? On the other hand, my job is not to take care of technology and that world.
My job is to take care of the most vulnerable people in the country, right? The very poorest, the most marginalized, those who have no recourse to social protection. That’s my job. So I am a deeply conflicted person, right? Very deeply conflicted person. And I wanted to disclose that because I’m going to come to that towards the end. And it has a specific bearing on the question that I’m going to ask Sandhya, which is, you said something fascinating. And I want to put a pin on that. And I’m pulling your leg, you know, which is that rarely do you hear such words from a tech person. She talked about human care and wisdom, right? Didn’t she?
Okay. So, really, my takeaway from what you were saying is that the tech stuff, you know, the coding and that kind of stuff, that can get automated. But the human stuff, understanding people, understanding desires, how you work with people, that’s what is hard to do, and that’s something that you’re already seeing, right? So would you want to comment on that?
Yeah, so the stereotype that techies aren’t human is a little unfair, I think, so don’t anchor it in your heads. But yeah, where do I start? At the end of the day, what do technology consulting and technology services try to do? They try to help our client businesses become more successful. And our client businesses in turn become more successful when they are innovative, when they are creative, when they are growing, and when they are doing their business profitably. Or, if they have already reached a state of maturity, they are trying to bring in a whole lot of efficiencies as well, right?
So it’s the S curve, where you have an idea, you nail it, then you scale it, and then you start sailing. And when you’re sailing, that’s when you become a big battleship, and you have to focus on discipline and efficiency and ensure that you’re making profits just the same, even while you’re running this big ship. But the cycle doesn’t end there. It keeps going: you keep coming up with new ideas, you keep scaling them, and you keep sailing them. And so profitability starts off with an investment, it grows, and then you have to become super efficient to remain profitable. And I’m saying this to my boss, because every dollar that we earn funds, to the tune of about 66 cents, whatever efforts the KMG Foundation uses for welfare, right?
And I think it’s a beautiful model, and I don’t think an AI could have thought of it. So therefore I do believe very strongly that creativity, wisdom, vision, foresight, human centricity is core to any technology disruption that comes about. Imagine the days of horse carriages: all the horses would have been crowding the roads, people would have been going from place to place, and at the end of the day you would have had a whole lot of methane, which would have ended things a long time back because of global warming. But vehicles did come, you did have carbon fuel, and the evolution continues.
So I don’t think technology is going to stop. So human ingenuity is going to keep bringing technology disruptors. These technology disruptors are going to be more and more exponential in terms of what they can do. And it is up to humans to figure out how to create policy, how to create a governance mechanism, and how to ensure that we derive benefits, mitigate the risks, and at the same time ensure that humanity is at the center of all of this. Right? Now, this is easier said than done, but we’ve done it with nuclear energy. Despite the disasters, the fact that you and I are still alive today and thriving and living a better life than we ever lived in the last 100 years is an example that, yes, you could have accidents that are preventable, but accidents are created by humans.
And it’s up to the leadership to ensure that they put in the required guardrails. It could be policy. It could be governance. It could be guidelines, whatever you call it.
Yeah, it’s good to hear that, you know. I’m just going to come to one round and then perhaps have the last word, if I may. Yeah, okay. So, Sabina, what’s your take? What should we do? What should we do, really?
So I’ve already kind of said what we should do. But first, Sandhya, everything you said really resonated with me, right? And I fully agree that humans have to take responsibility. I can think of a few very worrying scenarios where there are leaders in the world that have access to nuclear weapons who perhaps shouldn’t have access to nuclear weapons, right? So how much confidence do we have in people, particularly when you look at the overall trend of growing precarity? Again, take India alone. Fifty-eight percent of our employment is now self-employment. And these are workers that have no health insurance coverage or any kind of safety net.
Add to that the fact that there are all these different forces coming that we don’t know about: AI disrupts jobs, or pandemics happen. We all saw what happened with migrant workers walking back to their villages, hundreds of thousands of migrant workers, right? There is a lot more precarity in the labor market than there ever has been in modern history. And the problem is that labor market regulation across the globe is getting weaker and weaker in this respect. And then we don’t have precedent, as Julie said. We’re still trying to figure out exactly what we should do, right?
But I will say that, in the meantime, AI is different, because this is also the first time research is showing that the current generation of young people has shown cognitive decline, right? Rates of depression, rates of anxiety, cognitive decline. How does cognitive decline affect your ability to operate at work, and then to be replaced by machines that are more efficient because you’re getting stupider? Sorry, but this is a really worrying scenario. So what should we do? I think I’ve said this multiple times: regulation and the building of social institutions. But I’ll take Julie’s challenge and say, okay, let’s go a level deeper.
I think we need to look at competition policy very closely. We need to look at antitrust. We need to look at tax, and within tax, we need to look at the full gamut, from certain kinds of transaction taxes to a wealth tax to corporate tax rates, the whole gamut of tax tools that we have at our disposal. We certainly, in an area that I know well, need to look at labor regulations, right? There’s a lot of discussion now about what should happen in the gig economy. But if two people have lost their jobs, how do you distinguish between them?
You can’t say, okay, this person lost their job to AI, so we’re going to give them health care and other kinds of support, but that person, we’re not, right? You need to have universal systems of support for workers, of health care, of other forms of social security, that enable consumption smoothing as well, so the economies keep functioning. We need to invest heavily in our skill systems. I can talk about Indian numbers till I’m blue in the face: for all the investment and talk about skills training in India, only 4.1 percent of respondents in our labor force survey identify as having any kind of formal skills. Only 4.1 percent, despite us saying Skill India and talking about investments in skills for well over a decade and a half.
There’s also well-documented research about how poor the quality of education is. So how do you take a young person in a remote part of India who can barely read and write, who might say, I’ve graduated, I’ve done eighth class, tenth class, even twelfth class, but can barely do foundational reading or math, and say, I’m going to train you for AI? It doesn’t work. So we need to fundamentally rethink regulations, and we need to very urgently work on education and skill systems that meet people where they are.
We need to definitely think about universal social protection systems that enable workers to transition from one sector to another, from one occupation to another. And I can go into much more detail, because this is something that my organization has worked a great deal on: what kind of systems we need to enable workers to be better protected.
Thanks, Sabina. We’ve got, I think, five minutes or so, so I’m going to try and wrap up. Julie, would you want to comment?
Yeah, I just want to make a fairly random point, I think. In addition to the Artificial Intelligence for Development program that we have, we also have a Future of Work project. And one of the interesting things there that we don’t talk about as much: everybody is very worried about job loss; that’s kind of the big thing. But actually, one of the bigger issues is the rethinking of how to work, of ways of working, and the disruption that’s happening within jobs and within the workplace. And that disruption within institutions and organizations is not necessarily about job losses. It’s about a complete shift in the way that we do our work, and how workers are going to adapt to that fundamental shift in the way that they work.
So it was just a random thought.
I don’t think it’s a random thought at all. I think it’s a salient foundational thought, you know, for this discussion. You want to comment on that one line? Because that’s such an important point.
Yeah, no, I mean, just to say that the Future Works Collective is a global consortium of researchers that IDRC funds, which JustJobs is part of, and it focuses exactly on that. So I agree 100% that this is a foundational and very important issue.
Sandhya, what about you? How would you want to respond to everything Sabina has said?
Look, I think… Watching and waiting is certainly not an option. I mean, we don’t want to be in a Game of Thrones situation when you’re saying winter is coming for some 22 seasons and then it comes. Nobody’s going to wait for it. So we know what’s coming, and we know what’s coming is also capable of evolving and changing tremendously. So we need to learn to change. And yes, we do need to elect good leaders. We do need to have policy at all levels. We need to have policy embedded in platforms. And of course, we need to have a lot of reimagining work and training of workforce. So yes, I think to some extent, painting doom and gloom is good.
Then we start acting, right? But it also shouldn’t make you so paranoid that you become a deer in headlights. So yes, we should act, and we should move forward on all of that that all of us agree on.
It seems so, absolutely. No, but, you know, I think that’s in some sense a very good summary, what you just said. What I wanted to say is that there’s this phrase that’s used: boomer and doomer. In a sense, my head is the boomer and my heart is the doomer, given my role. I want to take you on a detour for just a minute: my job is more to do with education. We run three universities. At any point in time, we are working with more than 100,000 teachers, right? So I’m an education person; I’m not the labor market or the tech person here. And I am deeply concerned by the effect of AI on education. Deeply, deeply concerned.
In fact, I feel that AI is attacking the very foundation of education. What AI does, as the phrase artificial intelligence suggests, is let you outsource your thinking. So teachers are outsourcing their thinking and students are outsourcing their thinking. And that’s what Sabina was referring to, though in the context of social media: for the first time in this round of assessments, we are seeing cognitive declines, or on test measures we are seeing declines in student performance. I cannot tell you how serious the issue is. And it’s impossible to regulate this, because it’s everywhere.
So the only way we are able to deal with this, in the universities at least, is that all assessment, all examination, is now returning to the old-world, paper-and-pencil, in-class test. No home assignments, no project work, nothing. Just come here, sit, and write the examination. It is truly serious. We don’t know how to tackle this right now. And the reason I talk about that is that I want to go back to the analogy Sandhya used, and I’m so glad that she did: that this is as serious as nuclear technology. And in one very deep way, it is far more serious than nuclear technology, because nuclear technology did not reach out and affect every individual human being.
With nuclear technology, the possibility of policies and governance being able to circumscribe it, to put boundaries on it, to manage it, was far greater. Here, this is perhaps the most disruptive of technologies, and it comes in retail form, right? This is retail transformation of humanity. It is so hard to deal with. But I’m really glad that, with the three of you here, we have this reasonable conclusion, if I may say so: that we are really facing something as serious as nuclear technology. And you can’t run away from it. It’s happening. Job losses will happen. We’ve got to figure a way out of it. And I would want to close on this human note: that eventually, perhaps, those jobs that require wisdom, empathy, care, human understanding, they are going to be the hardest to replace, if at all.
And they will stay. And that’s what one can see in the tech world. So with that, I want to thank all three of you. Thank you so much. I want to thank all of you for coming here. Thank you very much. Thank you.
Resource“Effective AI governance requires input from diverse stakeholders including the scientific community, innovators, and civil society organizations.”
The knowledge base states that comprehensive AI governance must involve a broad range of actors beyond government, such as scientists, innovators and civil society [S74].
“AI4D programme’s human‑centric co‑creation with workers, employers and other stakeholders.”
While the report mentions the AI4D programme, the knowledge base discusses inclusive AI initiatives that emphasize co-creation with workers and other stakeholders, aligning with the programme’s approach [S76].
“Wipro has built role‑personas, specific learning modules, and Centres of Excellence inside engineering colleges to upskill every employee.”
The knowledge base describes similar skilling initiatives in engineering colleges involving national platforms and multiple tech firms, indicating that such models exist though not specifically attributed to Wipro [S86] and [S87].
“Labour-market uncertainty about AI’s impact on jobs; firms publicly deny the threat while privately admitting 30-40% time savings that translate into workforce cuts.”
Research shows that despite rapid AI adoption, the labour market has remained stable and fears of large-scale job loss have not materialised, providing nuance to the claim of imminent workforce cuts [S73].
“AI can now generate 50-70% of code, turning junior developers into ‘AI managers’ rather than eliminating their positions.”
Broader discussions in the knowledge base note that AI augments developer tasks and changes job roles, though they do not provide the specific 50-70 % figure, highlighting the shift toward AI-assisted development [S83].
The panel broadly concurs that AI will significantly reshape labour markets, creating new roles that require human oversight, creativity and empathy, while also posing risks of displacement and precarity. There is strong consensus on the necessity of immediate, evidence‑based policy action, robust institutions, and extensive skill development to manage these changes.
High consensus on the need for proactive governance, institutional strength and capacity building; moderate consensus on the extent of job displacement versus augmentation.
The panel shows clear divisions on how severe AI‑driven job losses are and how quickly policy should respond. Sabina stresses immediate, wide‑ranging reforms backed by observable layoffs, while Sandhya and Julie adopt a more nuanced view that AI primarily augments work and that robust institutions and evidence‑gathering should guide policy. Anurag’s questions highlight practical concerns about coding automation, further exposing the split between urgency and optimism.
High – The speakers diverge on both the magnitude of labor disruption and the pace and nature of policy response, which could hinder coordinated action on AI governance and labour protection. Consensus exists on the need for institutions and human‑centric safeguards, but disagreement on urgency and specific levers may delay effective interventions.
The discussion was driven by a series of pivotal remarks that moved the conversation from alarmist predictions of job loss to a layered analysis of job quality, systemic vulnerability, and the necessity of human‑centric governance. Sabina’s early disclosure of corporate admissions and the inequality angle forced the panel to confront the urgency of regulation. Sandhya’s nuanced view of AI‑augmented roles and Julie’s emphasis on participatory design and evidence‑based tools introduced constructive pathways forward. Subsequent interventions—especially Sabina’s focus on gig‑economy redress, the clarification about India’s formal sector, and the introduction of the Global Index—deepened the debate, linking macro‑policy, institutional capacity, and cross‑country learning. Anurag’s final remarks on education broadened the stakes, underscoring AI’s pervasive societal impact. Collectively, these comments reshaped the tone from speculative dread to a pragmatic call for coordinated policy, skill development, and safeguards that keep humanity at the centre of AI’s evolution.
Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.