How AI Is Transforming India’s Workforce for Global Competitiveness

20 Feb 2026 14:00h - 15:00h


Session at a glance: summary, key points, and speakers overview

Summary

The panel convened to examine how artificial intelligence is reshaping the workforce in India and the UK, focusing on both disruption and opportunity [2][8-9]. Organisers split the discussion into three parts: identifying the nature of disruption, exploring skill requirements, and considering policy and education responses [13-16].


Srikrishna argued that AI capability is rapidly expanding and already displacing large chunks of work, especially in software engineering, which he now sees as the most affected area compared with testing or infrastructure [23-28][34-37][39-42]. He warned that infrastructure-related roles are on a plateau, while coding is becoming increasingly abstracted and cost-free, turning code into a commodity [39-42][160-166].


For fresh graduates, Srikrishna highlighted enormous opportunities but stressed the need to acquire a new skill set that goes beyond traditional coding [45]. Ravi added that the critical competencies include system-level judgment of AI outputs, interdisciplinary fluency across engineering, risk and regulation, a continuous-learning mindset, and deep contextual awareness of India’s linguistic diversity [57-71].


The UK panelist Sue noted widespread anxiety about job loss but described a national AI Skills Partnership that aims to upskill over a million people and turn anxiety into agency through reskilling and conversion courses [96-104]. She also emphasized that effective upskilling must combine technical, governance and “human” skills, and that schools and curricula need to be aligned with AI adoption [90-95][112-115].


Both Indian and UK speakers agreed that there is no single silver-bullet solution; instead, an iterative, collaborative ecosystem of government, industry and academia is required [109-112][354-358]. Role redesign is already underway, with software squads shrinking from ten members to three and delivery cycles accelerating, but adoption remains slow because AI still lacks contextual understanding [238-242][246-252].


Ravi described Mastercard’s governance model (chief AI officer, privacy-by-design, and horizontal AI teams) as a template for embedding interdisciplinary expertise early in product design [191-201]. The panel warned of concentration risk if only elite institutions control data and compute, urging broader access, tier-2/3 university involvement, and inclusive AI education [324-328][332-339][365-373].


Participants called for a new, interoperable skills taxonomy and lifelong-learning infrastructure to keep pace with rapid AI change [354-359][321-323]. The discussion concluded that while AI will transform many jobs, the priority is to equip the workforce with adaptable, cross-disciplinary skills and inclusive policies to harness the technology responsibly [351].


Key points

Major discussion points


AI is reshaping the IT services landscape, with software engineering now the most disrupted function.


Srikrishna notes that the “direction of travel… there is disruption” and that the impact has shifted from testing to software engineering as the biggest area of change [23-30][31-37]. He also stresses that “opportunities for a young technically-savvy person is enormous” if they acquire the right new skills [45-46].


New AI-driven roles demand a blend of technical, judgmental and interdisciplinary capabilities.


Ravi outlines four core skill clusters: system-level judgment, interdisciplinary fluency, a continuous-learning mindset, and deep contextual awareness [57-71]. He later reinforces that governing AI at scale “requires interdisciplinary skill” and early integration of AI governance into product design [190-202].


Role redesign and robust AI governance are essential to realise value while maintaining oversight.


Srikrishna describes how a typical software squad is being reduced from 7-10 people to as few as three (product owner, developer, tester) and that “role redesign” is a prerequisite for AI value [238-246]. Ravi details Mastercard’s AI governance framework (including a chief AI officer, privacy-by-design, and cross-functional AI governance teams) to embed oversight from the start [191-202]. Sue adds that even with AI-generated code, “someone needs to check the code” and that governance and assurance roles will evolve [169-177].


National-level strategies (especially the UK’s) focus on coordinated upskilling, infrastructure and adoption pathways.


Sue explains the UK’s AI Skills Partnership, the goal to train over one million people, and the push to turn “anxiety into agency” through reskilling, conversion courses and industry-government collaboration [90-98][101-108][109-124][125-131]. She later highlights investments in data and compute infrastructure, AI growth zones, and a shift from building foundations to accelerating adoption [313-321].


Inclusion, equity and risk mitigation (e.g., concentration risk) are seen as critical safeguards.


Ravi warns of “concentration risk” if only a few institutions control data, compute and talent, urging broader access to tools, curricula for tier-2/3 institutions and safeguards against over-automation [326-340]. Sue calls for “interoperability of skills credentials” and a national taxonomy to ensure mobility and recognition of learning [354-359]. Srikrishna caps the discussion by stating that “inclusiveness has to be by design” and that academia should make AI resources freely available [365-373].


Overall purpose / goal of the discussion


The panel was convened to explore how AI is transforming the workforce, diagnose the resulting disruptions, identify the new skill sets and governance structures required, and share how governments, industry and academia can coordinate to up-skill workers, mitigate anxiety, and ensure inclusive, sustainable adoption of AI technologies.


Overall tone and its evolution


– The conversation began informative and exploratory, with panelists mapping the scope of AI disruption.


– As the dialogue progressed, the tone shifted to solution-oriented and collaborative, highlighting concrete skill frameworks, governance models, and policy initiatives.


– Throughout, there was a balanced mix of optimism (about opportunities for young talent and economic growth) and caution (about anxiety, job displacement, concentration risk), maintaining a professional and forward-looking atmosphere until the closing remarks.


Speakers

Sangeeta Gupta – Panel moderator (moderated the AI and workforce transformation discussion) [S1]


Srikrishna Ramakarthikeyan – Senior executive in the IT services sector (provides perspective on software engineering disruption) [S2]


Ravi Aurora – Mastercard executive focusing on AI governance and responsible AI (discusses Mastercard’s AI governance framework and chief AI & data governance roles) [S3]


Speaker – Generic placeholder; no specific individual details provided in the transcript.


Sue Daley OBE – Director, Tech and Innovation, Tech UK (recognised with an OBE) [S7]


Additional speakers:


President, Global Public Policy and Government Affairs, Mastercard – Leads Mastercard’s public policy and government affairs globally (title listed in the opening speaker line) [S4]


Vishnu R. Dusar – Co-Founder and Managing Director, Nucleus Software (title listed in the opening speaker line) [S4]




Full session report: comprehensive analysis and detailed insights

The panel opened with moderator Sangeeta Gupta welcoming the participants – Vishnu R. Dusar (Nucleus Software), Srikrishna Ramakarthikeyan (Indian IT services), Ravi Aurora (Mastercard) and Sue Daley OBE (Tech UK) – and explicitly laying out a three-segment structure for the discussion: (1) the nature of AI-driven disruption, (2) emerging skill requirements, and (3) policy and education responses [1-8][13-16].


Nature of disruption – Srikrishna argued that AI capability is expanding rapidly and is reshaping software engineering more than testing or infrastructure [23-30][31-37]. He noted that coding costs are approaching zero, turning code into a low-cost commodity that can address problems previously considered too complex or expensive [39-42][160-168]. Adoption, however, will be gradual, with an estimated 1-2% annual impact on employment, potentially rising to 2-3% as organisations catch up with the technology [240-244]. He emphasized that AI’s value lies in enabling solutions that were impossible before, creating huge opportunities for technically savvy graduates who acquire new problem-solving capabilities [45-46]. At the same time, he warned that a generation raised on AI tools may lack traditional coding fundamentals and will “think differently”, relying on “vibe-coding” approaches rather than deep algorithmic understanding [141-148].


Emerging skill taxonomy – Ravi outlined four key capability areas needed in regulated, high-stakes environments: (i) system-level judgement to detect model drift and assess outputs, (ii) interdisciplinary fluency across engineering, risk, regulation and user behaviour, (iii) a continuous-learning mindset to keep pace with evolving models, and (iv) deep contextual awareness of India’s multilingual and informal-sector realities [52-61]. He stressed that these capabilities must be embedded early in product design [84-90].


Role redesign and governance – Srikrishna described how a typical agile squad is shrinking from between seven and ten members to as few as three (product owner, developer, tester), accelerating delivery cycles from two weeks to two days, and argued that without such redesign the AI value proposition cannot be realised [238-246]. Ravi illustrated Mastercard’s governance model: a chief AI and data-governance officer, a privacy-by-design approach, and a horizontal AI governance team that spans data science, product, legal, compliance and engineering [191-201]. Sue added that even when AI generates code, human verification remains essential, shifting many roles toward assurance and governance [169-177].


Education, upskilling and infrastructure – Sue detailed the UK AI Skills Partnership, which aims to train more than one million people, offers one-year conversion courses for non-AI graduates, and seeks to turn worker anxiety into agency [96-104][105-108]. TechUK’s TechSkills programme provides a “Gold Accreditation” degree recognised by employers, signalling a trusted pathway for graduates [112-118]. The UK is also investing in a national data library and establishing AI growth zones to supply compute resources for innovators [313-321]. She called for a national taxonomy of skills and interoperable credentials so that learning is portable across sectors [354-359].


Policy coordination – Sangeeta contrasted India’s fragmented, state-wise AI initiatives with the UK’s whole-of-government, coordinated approach, asking whether the UK model could inform India’s strategy [108-109]. Sue confirmed that the UK adopts an iterative, flexible policy framework rather than a single “silver-bullet” solution [109-112].


Risks and opportunities – Ravi warned of a concentration risk if data, compute and talent remain confined to a few institutions, which could marginalise tier-2/-3 universities and smaller firms, urging deliberate inclusion of these players [120-128]. Both speakers highlighted widespread worker anxiety and argued that structured reskilling, lifelong-learning pathways and human-in-the-loop governance can convert anxiety into agency [98-104][345-347]. Srikrishna and Sue stressed that inclusiveness must be “by design”, calling for free AI resources and open curricula to democratise access [365-373].


Closing remarks – The panel reiterated that AI will transform many jobs, but the decisive factor will be how quickly education, industry and government co-create inclusive, interdisciplinary pathways for the emerging talent pool. Across the three-segment discussion, there was strong consensus that interdisciplinary upskilling, early governance integration and coordinated policy are essential to harness AI’s potential while mitigating concentration, over-automation and exclusion risks [354-359][365-373].


Session transcript: complete transcript of the session
Speaker

President, Global Public Policy and Government Affairs, Mastercard; Vishnu R. Dusar, Co-Founder and MD, Nucleus Software; Sue Daley, Director, Tech and Innovation, Tech UK.

Sangeeta Gupta

Thank you so much, Pragya, and a very good morning to my wonderful panelists. We have a few audience in the room, but we have a lot more online. So I’m looking forward to, you know, yeah, we can get out. You are here, Ravi, next to me. And Vishnu is just on his way. He should be here shortly. I think the theme of our panel is AI and workforce transformation. And clearly, from a, you know, India perspective, the AI is obviously creating a number of opportunities. It’s also creating a lot of anxiety amongst the youth. And I think it’s important. It’s important to decode what does AI really mean and how do we navigate these shifts that are ahead of us.

So in terms of structuring the panel, I thought we’ll try and break it into three different segments. The first segment is clearly about what is the disruption and how are we designing for it? So try and get perspectives from each of the panelists on how are you seeing this disruption? Are we shaping this disruption or is this disruption really shaping us? So Kish, if I can start with you maybe, right? From one of the sectors that’s most hotly debated is IT services and you’re a leading company in that space. How are you seeing this change for your employees? Do you see software coding now only being done through AI tools? So what is the job of the coder if you look at it?

But how real is this disruption and how are you staying ahead of the shifts that are there?

Srikrishna Ramakarthikeyan

So I think the direction of travel is indisputable: that there is disruption. There’s an issue of technology capability and there’s an issue of adoption. And there’s always that technology capability leads adoption. Adoption is going to impact, is going to determine workforce displacement or disruption. But the capability, there’s no doubt that this capability that exists today, actually this capability that existed three months ago, six months ago, where there’s quite a large chunk of work that is done by the industry that could potentially be displaced or improved or in some way impacted by AI. What is it that is getting impacted is changing very rapidly. So you would ask me at the beginning of ’24, right?

What services will get impacted? What services will get the most impact? Out of, say, testing? Actually, I’ll put BPO aside. I’m saying in tech I would have put testing first and I would have put software engineering last. Today I will flip that. I will say software engineering is the most impacted. So the direction of travel I think…

Sangeeta Gupta

So you really think software engineering is bigger disruption than testing and infra management or other stuff, right?

Srikrishna Ramakarthikeyan

That is true. So I think whatever disruption we saw I thought would be there in infra. I think it is there but it is a plateau. I am not seeing leaps and bounds of change. What we saw as a potential change like a year ago and now is not so different. I think the massive difference is in software engineering.

Sangeeta Gupta

So you know if you are a young software professional… How do you see… What does this mean for me as that young fresher out of college right now?

Srikrishna Ramakarthikeyan

I’ll say opportunities for a young, technically savvy person are enormous. Now, there are things they need to think of and do differently for that opportunity to become real for them, because the real value of AI is not in reducing headcount in functions, whatever it is, whether it’s in BPO or some functional work. That’s not the real value. The real value is in being able to solve problems that you could not solve before, and I think you need to arm yourself with a completely different set of skills to make that real. But if you do that, I think the opportunities are enormous for a young person.

Sangeeta Gupta

Thanks, Kish, I’ll come back to you. Ravi, if I can come to you: Mastercard is very strong, obviously, in financial services, but you have a very strong data and technology play. How are you seeing this workforce disruption, and for a company like yours, which has a very large GCC in India, what are the different kinds of skill sets that you’re thinking about today?

Ravi Aurora

Sure, thank you very much, and thanks to NASSCOM. Great to be here on the panel with Sue and Srikrishna. So I think, like, I mean, a lot of change, right, over the last two decades when I look at our industry. I guess if you look at it, like, all the professionals in privacy, cyber security, data protection, technology risk, they’ve all been enablers of digital transformation, right? They have, I mean, created what we enjoy today in terms of digital empowerment and the ability, let’s say, talking from a payments lens, you know, very seamless in terms of wherever in the world you are, right? All that is riding on trust, right? And there’s a lot that goes in, you know, to build that trust, right?

So now we are seeing, as artificial intelligence, AI, is being embedded into kind of decision-making, public infrastructure, service delivery, right, and governance. So it’s no longer kind of a downstream compliance function as such. So I think that’s why we need, you know, the shift is in kind of the fintech disruption that came about before. I think what we are… We are seeing a bigger shift that AI is bringing in terms of the kind of skill sets, you know, that are required. So, you know, to your question on what kind of skills are required, right? I think the skills I would say is that the, what do you call, the capability for system-level judgment is needed.

So what we mean by that is that are you able to, you know, take what outputs are coming, you know, from AI. And you need to have the capability to understand is the model drifting, you know, in a high-stakes and regulated industry like ours. It becomes essential because decisions scale very instantly, as do the systemic errors, right? And the impact of those errors if left unchecked. So I think that it’s important to have that system-level judgment. Then, interdisciplinary fluency is important because the AI challenges are not just technical, right? They are at the intersection of engineering, of regulation, of risk, you know, user behavior. So, if we have professionals who are across those domains, right, that’s important, and to have that interdisciplinary approach rather than working in silos as such.

Then, it comes to the need for a very continuous-learning mindset, because the AI systems are evolving with data, right? And the workforce needs to evolve with that too. And the ability to learn from live environments, right? What’s happening, to adapt models, kind of, to be able to refine the decision-making. So, that’s important. So: system-level judgment, interdisciplinary fluency, continuous-learning mindset, and I think, last but not least, a deep contextual awareness is needed. Now, in a country like ours, in India, you know, multiple languages, dialects, informal systems. So if an AI agent is interacting with the user, the question is: does it understand the context and the intent and the kind of real-life realities, or is it just a language, right? Because the context is shaped by how the models are being trained, which means that engineers have to consciously design for it. So that contextual ability and awareness is very important.

Sangeeta Gupta

So the typical engineer who was the coder as we knew it obviously has to build a very differentiated set of skills, is what you’re really talking about, right? So understanding interdisciplinary learning, understanding context, the ability to continuously learn: I think that in itself is becoming a skill. So clearly, I think there’s a lot of change that is needed at a college level and school level on how, you know, even how you’re learning, so that you are ready for this very, very changing world. So if I can come to you, right: how are you, you know, you represent Tech UK here, how are you seeing the AI disruption in the UK workforce? Is there anxiety?

Are there opportunities that you are seeing? And how are you as an organisation, and of course the UK government, supporting this transition that’s

Sue Daley OBE

Well, thank you. That’s a question in a panel all in itself. It’s a real pleasure to be here. Thank you so much for the invitation to be part of the summit. Just to say to everybody, you’ve done an amazing job. So thank you. But also to this really important panel discussion. And it is absolutely a discussion that we’re having in the UK. And what I found really useful this week, if I can be slightly selfish for a moment, is to listen to the conversations that you guys are having here and the other global people that are here at the summit and to kind of compare notes.

Are we having the same conversations? Are we facing the same kind of issues? I think what I’ve just heard from my fellow panellists are some of the conversations that are happening in the UK. Yes, there is change. Yes, there is disruption happening. And to your point, absolutely, what we’re seeing is a lot of roles, not just in our industry and sector, but across industries and sectors, moving from very much admin tasks, very much cognitive tasks. Those are being increasingly automated. But then that’s freeing people up to do more problem-solving and to look at more client advisory governance and using and being able to shift those skills to look at AI governance. But also I would say client-facing as well, which goes to your point around skills.

I’ll come back to your broader question, but yes, it’s technical, yes, it’s governance, looking at other skills, but it’s also those people skills, those human skills. If we are shifting people, if jobs are shifting towards more of, yes, this automation can do the job, but what’s the added value that I can provide? And it’s my human skills, which sounds very weird to say, human skills, you know what I mean: it’s that ability to interact, it’s that social, more social skills. Then are we teaching those, as well as the technical, as well as the legal, the governance, as well as the software, as well as the technical skills? Are we also teaching people, and the young people coming through, how to interact with people as well, if they’re more client-facing? So absolutely, the disruption, we’re feeling it in the UK, we’re having that discussion in the UK. Definitely the industry is questioning: what will my role be, where will I sit? Government in the UK is focusing very squarely on this. So as part of its AI Opportunities Action Plan, the UK government has created an AI Skills Partnership, bringing together the government bodies that are looking at how do we upskill, how do we retrain, how do we get society ready for this next wave of AI that’s coming, not just the one we have now, but the one that’s coming down the line, and bringing them together with companies and bodies such as TechUK and others to look at how do we do this in collaboration.

So how do we reach the wider population, and I’m not just thinking our industry here, but the wider society population, with what are the training courses, what are the upskilling courses, what are the opportunities to learn and gain skills to thrive in an AI world, but then also how do we train our industry and sector for the shift that is happening as well. I think generally that task force is looking to train over one million people in AI so that we can help the greater population be ready for working in this era. I think there is anxiety. I think there is concern. Some workers understandably worrying about displacement, worrying about if they’re at high exposure to automation, what does that mean?

How do they shift? How do they move? But I think what we are looking at is how do you turn, and this is a word I’ve heard a lot about this week, how do you turn anxiety into agency? How do we encourage people to take a lead, to take what they’ve learned, but, as you said, continuous learning, continuous upskilling, because that is what you will need to thrive in this world. But I think what we’re looking at in terms of helping people do that is through restructured training and reskilling programs. It’s pathways for mid-career into new careers. One of the very interesting initiatives that the UK government introduced was around how people coming out of university that might not have an AI degree can do a one-year conversion course to become then able to work in the AI industry.

so I think there are lots of, perhaps we’ll go into a little bit more, there are lots of different initiatives that the UK are doing which could be applicable here and vice versa, we want to learn from how you’re addressing this but I think there is anxiety but then how do you turn that into opportunity and agency

Sangeeta Gupta

And you know, one of the issues in India we keep talking about with the government is that we have a very disaggregated focus right now within India: there are multiple governments, multiple state governments, organisations like NASSCOM, and we’re all trying to do some part of the pie, but there is no, if I can use that word, whole-of-government or whole-of-country approach, right? I’m saying, if this is such a big disruption, this is how we will go about doing it. Do you see that in the UK, that there is an integrated approach, and then obviously every actor has their own role to play in that?

Sue Daley OBE

I think it’s coming. First of all, I don’t think there’s a silver bullet; I don’t think there’s one pure answer, because the moment, as you said, things are moving rapidly and quickly, the moment you put in a task force or initiative, it may very quickly need to shift and need to change. So I think in all of these, and AI generally, having an iterative, flexible approach that can adapt and shift as the technology evolves and new developments emerge is really, really key. So I think the AI Skills Partnership, which we’ve signed up to with the UK government, has really kind of become a bit of a cornerstone, a bit of a nucleus of how do we retrain, how do we upskill the general population.

But then I think there’s also the conversation about how are we ensuring our schools, our education curriculum, what young people are learning in schools, how is that joined up to the AI revolution? And I think while there’s some thinking there, I think that could be more joined up. And then, yes, of course, how are we training the industry? How are we getting people leaving, as you said, the freshers leaving universities with the skills that we need as industry? Part of TechUK is an organisation called TechSkills (go and check them out, not right now but maybe afterwards), and we at TechSkills work directly with employers, directly with technology companies and universities, so we can be that bridge between the two, to make sure that industry employers can provide input into the universities and the courses, what they’re teaching students, so that when they come out of university they have a degree, called a TechSkills Gold Accreditation Degree, which means employers will recognise that degree and kind of go, yes, you’ve got what I need, come and work for me. So there’s no one single answer to this. I think it’s a number of initiatives that need to work together, but at TechUK, as others, we’re trying really, really hard to join the dots. I think TechSkills addresses what employers need from universities: how do we get universities and employers working more closely together, what role can government play, what can government do that industry can’t, and vice versa, what can industry do that government can’t? It’s really got to be a partnership and a collaboration, but there’s no one single initiative, in my view, that will fix this or solve this or address this.

Sangeeta Gupta

I think that’s probably a great way to think about it: there are just so many changes that, like you said, there’s no single silver bullet. You really have to figure out a way to tie the different threads together, but let maybe a thousand flowers bloom, because that’s the nature of what we’re dealing with, right, if you can bring it together and say, here’s our coordinated approach. I definitely think in the UK we could join up more of these initiatives, and maybe India, with your scale, can do that, and you’ve definitely brought the world together in this summit, so I have no doubt that you can definitely do that. Wonderful. So Kish, if I can come back to you, again from an IT services perspective: we’ve always been one of the largest employers of engineering talent in this country. Now, with the new skills that Ravi talked about, do you see the focus being largely on the more elite, top-tier institutions? A large volume of students were probably studying in tier 2, tier 3 colleges across the country and had a phenomenal career in our industry.

Are we closing out opportunities for them?

Srikrishna Ramakarthikeyan

I want to make a point on a previous question and then I’ll address this. While, you know, and I agree there’s no silver bullet. However, I’ll say that, you know, I live in the US. The conversation I hear about policy around AI is: should we regulate, should we not regulate? Who should regulate? Should it be the state? Should it be the central government? I’m not hearing what I heard here, which is a big focus on inclusiveness. And I think, while it may not have all of the answers, it’s still a very material difference in the approach of how the government I see here is thinking about it.

And actually, I heard that from the UK minister here before. I heard it from President Macron yesterday in the plenary session. So I think there’s a big difference in some countries relative to at least what I’m hearing in the U.S.: much more focus on how to make it work for everyone, how to make it inclusive, which I think is a huge difference. I think it will lead to a very material difference in outcomes over a period of time. Now, coming back to your question. So first, do I know all the answers? No. But here are some things, some pieces that I think are true.

First, I’ve seen that young AI-native talent is much better at many things than somebody who’s even in their 30s that you’re trying to retrain. It’s much like, you know, do you use Instagram? I don’t, actually, but there are kids who have grown up with it, right? So I think it’s the same difference as the digital native: I think you’re going to see an AI-native generation. And we find, actually, like last year, there’s a set of people we hire from the absolute top engineering schools, like IIT. We had them train our management team on vibe coding in May last year, because vibe coding back then was brand new. And guess who were the best at it in the company? The kids who came out of college, they were the best. So we had them train us. So I think this part is going to be true: whenever we think of the pyramid, we have to bear in mind that sometimes the best talent is the youngest one that is coming in. The second thing, I think, is that ultimately the new opportunities created by AI are far greater than the number of jobs it could directly reduce. Now, there’s going to be a period, you know, a transition period, and I’m not sure exactly how that will play out, but I’m very confident that ultimately AI is going to enable so many more things that will need building applications, building tech for. And I think the third is also true: for kids, the problem to solve is not tech, is not coding.

It’s not creating data structures or whatever it is that kids are trying to solve today. I think that’s a solved problem — by the tech, by AI. So now you’ve got to think about what problems you want to solve, which is something else, and that’s where the big…

Sangeeta Gupta

So, Keech, I’m going to hold you to that — you said AI will create more jobs than it changes, so we’ll see how that plays out. But one of the conversations I was having with another IT services company went like this: AI-native talent is great, but that talent will never have learned to work without AI. Does that mean some of your foundational, core skills will not be as solid as they were in the past? Because this is the world you’ve grown up in, and your dependence on these tools will be so high — does it lead to a lack of some foundational skills as well?

Srikrishna Ramakarthikeyan

Listen, I was in the United States for a couple of years. There was a time when you had to code in C++, right? The whole evolution of coding has been about abstracting away what you need to code. You wouldn’t have had IDEs — I don’t know how many years ago — but who codes without an IDE now? Nobody, right?

And that’s been true for a decade. I think the same question will become: who codes now? And I don’t think anybody will code, okay? That’ll be a solved problem. So no — is it going to be a disadvantage? Far from it. I think it’s going to become a significant advantage. The cost of coding is going to become zero; the cost of code is going to become zero. What that means is you can solve any number of problems with code that you couldn’t solve before, because it was too complex or too expensive to do so. So, absolutely not — I think it’s going to be a big advantage.

Sue Daley OBE

Yeah, really fascinating. On the coding point, you’re absolutely right. And I’m thinking as a woman in tech as well: we had a big focus in the UK on getting girls into coding. Brilliant — but actually, now, why? There’s an opportunity there, but there’s also a risk. AI for coding, great — but we will need somebody to check the code. So again, it’s that shifting and moving of skills. And then my brain went to: okay, could we reskill the people who were doing the coding into checking the code, going more into governance? But then my brain goes: hang on, AI might be able to check the code quicker than a human can.

But then you get to the point where somebody needs to check that the AI has checked the code correctly. So you’re baking governance and assurance into AI — humans will need to be in the loop. So how can people in the coding world be shifted in their role, shifted to help more on the governance side? I did have another point; however, my jet-lag brain means I’ve forgotten it, so I’ll give way.

Sangeeta Gupta

But if you’ve never coded in your life how do you know what to check for?

Sue Daley OBE

Oh, I remembered my point — it’s related, in a way, to what the gentleman from Mastercard was saying about context. Context is really, really key. Something on my mind is that people who have worked in organisations over the years have done junior roles: they’ve learnt the company, the sector, the industry. They’ve done the grunt work to learn the context — what’s important and why. What concerns me slightly is that people coming in will be using AI, but when do we give them time to learn the company? When do we give them time to learn the context? What are they getting exposed to? When I first started in a company, I started in the basement and worked my way up, but I knew my sector, my industry, that background and that context — I knew what I was checking and why. So if automation takes those junior roles away, how are people getting exposed to the context of what, say, the fintech industry needs and what it looks like, if the opportunities that came through more junior roles are no longer there? I think there are huge opportunities here, but there’s also some rethinking we need to do as an industry and a sector: are we skilling people with the right things for what the industry needs going forward?

Sangeeta Gupta

Thank you. So, Ravi, two questions. First, we have a million-plus engineers graduating every year — what are the jobs for them? You’ve obviously talked about the skills they need, and today tech jobs are not just in the tech industry, they are in every sector — but what do you see as the opportunity for them? And secondly, this whole question of what humans — what engineers — will do if AI does all the coding.

Ravi Aurora

…flows, how operational controls shape risk over time, and when to intervene. Then I think we have to make governance interdisciplinary and influential, which requires fluency across law, technology, ethics and operations. Like I mentioned before, privacy and AI governance cannot operate in silos. Future readiness requires a big structural change in design — in procurement, design and deployment. And we also have to close the uneven digital capability across institutions. We talked about that: if central agencies and large enterprises can attract talent while smaller ones cannot, that will create governance gaps, especially where AI is expanding the most.

Those are risks we need the right solutions and the right thought process for, because it is about going beyond elite specialization towards broad-based AI and digital literacy. At Mastercard, we have spent the last several years operationalizing responsible AI — not just as a policy exercise but as a workforce and capability challenge. We have a formal, established AI governance framework, a chief AI and data governance officer, a chief privacy officer, and a privacy-by-design approach to everything.

ensure that AI risks are addressed before systems are built and deployed, not afterwards. We have an AI governance team that works horizontally across data science, product, legal, compliance and engineering, because we know how important that integration layer is. And the product and engineering leaders are the first-line stewards of AI risk — they are not just recipients of compliance decisions; they are stewards up front. But that only happens when you get the integration right up front. So what we at Mastercard have learned is that governing AI at scale is fundamentally a workforce challenge that requires interdisciplinary skill.

And early integration into product design is required, along with governance professionals who can manage risk and not just enforce rules. Privacy by design and security by design are core principles, but how you bring those things together as this evolves is what’s important.

Sangeeta Gupta

And I think that’s a fascinating part of this conversation — the whole focus on ethics, principles, trust, security, privacy by design. Ravi, going back to this large student workforce we are building for tomorrow: how do we get them to imbibe many of these principles? Obviously, when they come into your organizations, there are structured programs to drive this thinking. But if we take this back to the college and university ecosystem, what recommendations do you have on how to drive that?

Ravi Aurora

Absolutely. When I opened the news this morning, the very first thing on TV was headlines about AI skills and the skills gap — a lot of discussion, obviously, based on what’s happening this week as part of this summit. Clearly business, academia and government all have a role to play in navigating this workforce transition. And for corporations, it’s not enough just to say you’re offering internships to students.

I personally feel we should ask how our chief learning officers, or the engineering leaders who are at the front line, are working with people in academia to help think through and design courses based on real-world examples and situations. Internships certainly help students get that exposure and take it back into their learning environment. But the whole facet of curriculum design is changing: it should not be restricted to computer science majors. AI, in its different forms, is required across a broad set of disciplines.

So it’s not something we can leave only to computer science majors. In terms of priority, embedding AI governance and interdisciplinary collaboration into curricula is one of the very first layers we have to begin with, so that the people coming in — the engineers — are trained to think across the full life cycle of an AI system, and not in a siloed way. That’s what I meant about bringing engineers, product, risk and policy together. Another priority is that we have to focus on role redesign, not just reskilling.

Because AI is transforming tasks within jobs rather than eliminating roles entirely, we have to see how we redesign roles rather than focusing only on reskilling. And we have to build inclusive, distributed talent pipelines. Here I go back to CII and other organizations we have worked with, where you go into the field and work with, let’s say, MSMEs — working with the last mile, understanding their challenges and pain points, and bringing that into our product design and the output that’s required.

Because that provides the context. So the ability to take our talent pipeline, expose them to the real world and help them contextualize is very, very important.

Sangeeta Gupta

Thank you. Keech, if I can follow up with you: Ravi spoke about two themes. One is role redesign — how are you seeing role redesign happen in a technology-services context? And secondly, we hear so much about the changing role of the engineer, with “forward-deployed engineer” becoming the new buzzword in town. How are you seeing this happen in your organization?

Srikrishna Ramakarthikeyan

Thank you. Specifically on role redesign, that is absolutely true. Going back to software engineering: a typical squad that builds software may have had seven, eight, ten people — some developers, some testers, a scrum master, the typical roles. In the extreme case we are seeing that come down to three people: one product owner, one developer, one tester. That’s a substantial redesign of the role, and the time it takes is coming down from two weeks to two days. So yes — you won’t see real value from AI unless you are redesigning the role. Now, we have been speaking a lot about capabilities; I think we should spend enough time on adoption.

And I think there is a pretty big gap there. Actually, I think that gap is good for the workforce, because no matter what the capabilities are, by the time they become real — adopted at scale into the workforce, into our enterprise customers — it’s several years. In aggregate, I think the impact on work, and hence on the workforce, is maybe in low single-digit percentages per year at most: 1 to 2 percent right now, maybe expanding to 2 to 3 next year. This is because of the speed of adoption and the multiple constraints on adoption — because I don’t think AI knows context. “Mad” could mean what the word “mad” means in one enterprise.

It could mean the old word for Chennai — Madras — in another enterprise. So there are many reasons why adoption is going to be slow. And frankly, one of them is role redesign, because it is not as simple as getting a coding tool or a data tool — it is an organizational redesign to make that happen.

Sangeeta Gupta

And are you engaged in enabling all your employees to be able to use these tools, given, you know, some of the issues around governance risks that are being talked about?

Srikrishna Ramakarthikeyan

Yeah, 100%. I think it’s a little bit silly to tell employees that they cannot use it. We are already in the second generation of retraining our employees on AI. The first generation was on Gen AI, and I would say that as of January last year, when the whole concept of agentic AI came in, whatever you had learned up to that point became useless — so we are doing a second generation of training. Now, what we found is that earlier we used to mandate training: we wanted everybody to learn and we were pushing employees to learn. Then we stopped. We said, hey, it’s up to you. The truth is, if you don’t learn, you are going to be redundant.

It’s not for us that you learn; it’s for you. And suddenly we’re finding that the number of people actually getting trained is more, not less, once you stop mandating it. Are there privacy risks with Facebook? Do people use it? The answer to both is yes. So I think you’re just going to find a generation of people who think about these resources very differently.

Sangeeta Gupta

So, yesterday at the Impact Summit, the CEO of Anthropic spoke about — I think it was — the “100x geniuses in a data center.” That’s the kind of intelligence at scale that will exist as these technologies really mature to deployment and scale. How do you see the role of humans shifting, and what is this human-AI collaboration that we are all talking about?

Srikrishna Ramakarthikeyan

See, the thing is this — I tell my customers: stop chasing the shiniest object. There is always going to be an advancement in technology every month, every two months, every three months; something better will come. And in the quest to keep chasing that, what you’re actually doing is not realizing value from anything. Most enterprises can get significant value if they fully, systematically adopt capabilities that existed a year ago — certainly capabilities that existed six months ago. So what is the relevance of a data center full of geniuses for most enterprises? I think it’s zero. What problems can it solve that enterprises…? Enterprise problems are not about IQ; they are far more complex than a linear IQ issue.

So yes, it may be true that AI can do a thousand things that humans can’t, but that’s not what’s relevant. The real focus is not capability but how you help enterprises adopt — and that is the real answer to your earlier question: what do people do if machines do the coding? The problem you are trying to solve is never writing code; you are always solving for some other problem. That’s the reskilling engineers and young talent need to go through. For me, AI knowledge is now like English: it’s foundational, it’s fundamental. You need to be in the business of solving for something else.

And there, the point you have made several times about interdisciplinary engineering is crucial. How many times do you go to a doctor and get frustrated? Listen, I don’t want an eye doctor, I don’t want a nose doctor — I actually want a doctor. And that’s true in engineering. Think about robotics: you don’t want a mechanical engineer, a software engineer, an AI engineer or an electrical engineer — you want an engineer. That is where our talent needs to go. Now, frankly, academia has a big job to do to help them get there, because our courses are not designed like this right now.

They’re designed as electrical and so on. But for young talent who reorient themselves — recognizing that AI is not the skill, AI is foundational, and that they’re going to use it to solve for something more meaningful — I think we’ll be just fine on workforce.

Sangeeta Gupta

Yeah. So, if I can come to you: you’ve heard a lot about how learning has to change — whether it’s critical thinking, problem solving, experiential or use-case based. But at the same time you need access to data, to compute, to research. How is the UK thinking about this, and are there examples India can learn from?

Sue Daley OBE

Yeah, absolutely. When we think about realizing the economic and social opportunities of AI, it isn’t just about skills — skills is part of it — but it’s about, to use that word again, getting the foundations right. In the UK, particularly last year, a lot of initiatives and a lot of investment went into getting the infrastructure right. That includes our data infrastructure — announcing a national data library initiative. I was about to say we have huge data sets, but you have massive data sets; still, with the data sets we have, how are we using them and bringing them together, not just for public services and public-sector use but potentially for industry use as well? So data infrastructure, absolutely. A lot of investment has also gone into compute infrastructure: the creation of AI growth zones — dedicated areas in the UK where perhaps we don’t have the compute resource right now, and how we build that — and investment in an AI research resource, dedicated compute power and chips that allow AI researchers to do fundamental research.

Reflecting on 2025 in the UK, yes, the conversation was largely about getting the foundations and infrastructure right. Where I want the conversation to shift now is to adoption. There is already adoption happening in the UK — in financial services, our healthcare system, transport, logistics — but boy, there’s so much more potential. At Tech UK we’re looking at how we accelerate AI adoption at pace and speed, in a way that we don’t get it wrong from a governance, ethics, responsibility or regulation point of view — how we get it right for people, yet move quickly enough to realise the opportunity. That’s something we’ll be advocating for more this year: what can government do to help, and what can we as industry — particularly the tech industry — do to help other sectors understand how they can adopt as well? That’s really the core mission of my work at Tech UK. Skills comes into it, of course, but so does public trust and confidence: none of what we’re talking about here is going to fly if people don’t trust and have confidence in using AI, or in having AI used about them. So there are lots of initiatives — compute infrastructure, access to data, making sure researchers, industry and SMEs have what they need — and skills is an integral part of that; it’s all linked, it’s all connected. But I completely agree: adoption is the key.

I was at a reception last night at the High Commission, and Rishi Sunak, the previous Prime Minister, was talking about which country will win the AI race. We talk about sovereignty — the previous panel said it’s key for India, and it’s key for a lot of countries; we’re looking at what data, tech and AI sovereignty mean for the UK. But Rishi Sunak’s point was that the countries that will win the AI race are not the ones focused on sovereignty, the stack or infrastructure; they are the countries that can win the race in adoption — that can integrate AI across all their sectors, their industry and their economy. And in the UK we’re very much tying AI adoption, deployment and diffusion into society and our economy as a key driver of growth and productivity. So lots going on, but with that central theme of how we get this right as well.

Sangeeta Gupta

I fully agree. I think getting deployment right is really the opportunity — and the challenge — for economies that are not competing on the LLMs themselves, and that’s what India has to get right. Because, to Keech’s point, the enterprise doesn’t necessarily need the shiniest toy: AI is needed to solve India’s deep healthcare challenges and some of our agriculture-related issues. That’s what the whole inclusion focus — what AI can do for you — really means.

Sue Daley OBE

I think sometimes we have to take a step back and realize how transformational, how exciting this technology is. Many of us have been talking about it for a number of years, but with compute infrastructure and compute power we never had before, and digital data sets we’ve never really had before, this does feel like a step change — a different moment in time. It’s how we grasp that moment that I think is really important, and how we help young people and everybody working in the industry understand what grasping this opportunity means for them as well.

Sangeeta Gupta

No, I think we’re reaching the end of our session, but I just want to get to the last question and quick comments from all of you. Ravi, what would be your top three priorities for business, academia and government to successfully navigate this AI workforce transition? And what are some risks they should plan for?

Ravi Aurora

Great question. On priorities, I’ve mentioned the whole interdisciplinary collaboration and the aspect around role redesign. On risks: we’ve talked about how AI has democratized access, but there is also a concentration risk we have to be aware of. When a small set of institutions, companies or talent pools pulls ahead disproportionately because they have access to better data, compute and research ecosystems, we have to be very deliberate in how we design our systems.

This is where India has a position of strength, because — you talked about the million-plus engineers — India has contributed to the global technology revolution. Look at the growth of our global capability centers: they reflect the depth of the talent pool that exists. As we go forward, we have to get that design aspect right: foundational digital and AI literacy in school curricula, equitable access to tools and infrastructure, hands-on exposure across geographies.

We also have to go beyond top-tier institutions to tier-two and tier-three ones; otherwise we come back to a concentration risk. Because we don’t just need people who can build AI — we need professionals who can build with AI, who can govern AI, and who know when to override AI. And in terms of risk, we must not move towards over-automation without adequate human oversight. Biases need to be taken into account, because this should work well for formal workers as well as informal workers.

Women entrepreneurs, vernacular languages — we talked about context and the contextual aspect of it. Otherwise we risk exclusion at scale, and, to Sri’s point, we want the inclusion you talked about. And I already talked about the…

I’m sorry — I know we are ending the session, so whoever is ringing the bell, we’ll finish on time. So I’ll just conclude there: it is about this transformation that we need.

Sangeeta Gupta

You articulated that very well: the risk of concentration, the risk of exclusion, and the risk of not doing this thoughtfully. So if I can come to you: from a workforce-transition standpoint, what do you see as our big opportunities and risks?

Sue Daley OBE

Yeah, I’m glad you could hear that bell as well — I thought it was just in my head. So, on priorities: very quickly, for businesses — touching on some of the points you were making — embed lifelong learning. We all need to continuously learn, and so do our organisations. Businesses should think not just about jobs and roles but about tasks: what are organisations looking for people to do? They need to think about the opportunities but also the risks, and invest in human skills alongside technical and governance skills. For government, there’s something in the UK we think should be prioritised — and I don’t know if this will resonate here in India — which is interoperability of skills credentials. If we’re focusing on lifelong learning, then when I learn a skill, take a course or gain a credential, how is that transferable? Can it be recognised elsewhere? Because people will need to shift and move. And also a national taxonomy of skills — and perhaps of the fundamental, foundational skills we’re talking about.

Are we all talking the same language? Are we all talking about the same skills? Some priorities there, but I’ll leave it there.

Sangeeta Gupta

So a new skills taxonomy and interoperability of credentials — I think that’s going to be very important in this environment. But technology is changing so fast that what was applicable last year is not going to be applicable this year. Keech, back to you for closing comments. How are you seeing this?

Srikrishna Ramakarthikeyan

I’ll maybe just say one thing, okay? Sorry. I think inclusiveness has to be by design.

Sangeeta Gupta

Okay, we’re just ending — we said we’re ending. It’s just 24 seconds, right? Keech, why don’t you close with that?

Srikrishna Ramakarthikeyan

If you look at it, the internet is very inclusive. That’s because academia made something free. I think we need academia to do that for AI — that’s how it becomes more inclusive, and I think this has to be a huge priority.

Sangeeta Gupta

Thank you. Thank you, everyone. I welcome you all today to our session, Reimagining AI and STEM Education for India’s Next Generation. Celebrating the vision of Viksit Bharat and its grandeur, we are witnessing the AI revolution during the AI India Summit. With a young population, a vibrant digital ecosystem and strong policy momentum, we are uniquely positioned to harness AI not only for the economic future,

Related ResourcesKnowledge base sources related to the discussion topics (36)
Factual NotesClaims verified against the Diplo knowledge base (3)
Confirmedhigh

“Moderator Sangeeta Gupta welcomed the participants and laid out a three‑segment structure for the discussion: (1) nature of AI‑driven disruption, (2) emerging skill requirements, and (3) policy and education responses.”

The knowledge base identifies Sangeeta Gupta as the panel moderator and notes that the discussion was explicitly broken into three segments covering disruption, design, and related perspectives [S1] and [S11].

Additional Contextmedium

“Srikrishna argued that AI capability is expanding rapidly and is reshaping software engineering more than testing or infrastructure.”

A related source discusses how AI is rapidly reshaping software developer careers, indicating a broader impact on software engineering roles, which adds nuance to the claim [S8].

Additional Contextlow

“Adoption will be gradual, with an estimated 1‑2 % annual impact on employment, potentially rising to 2‑3 % as organisations catch up with the technology.”

Research on labour markets shows that despite rapid AI adoption, overall employment impacts have remained modest and anxiety about job loss has not translated into large-scale displacement, providing additional perspective on the size of the effect [S106].

External Sources (118)
S1
How AI Is Transforming Indias Workforce for Global Competitivene — -Sangeeta Gupta- Panel moderator (role/title not specified in transcript) -Srikrishna Ramakarthikeyan- (Role/title not …
S2
How AI Is Transforming Indias Workforce for Global Competitivene — -Srikrishna Ramakarthikeyan- (Role/title not clearly specified, but appears to be from IT services sector based on discu…
S3
How AI Is Transforming Indias Workforce for Global Competitivene — – Ravi Aurora- Srikrishna Ramakarthikeyan- Sue Daley OBE – Ravi Aurora- Sue Daley OBE – Ravi Aurora- Sue Daley OBE- Sr…
S4
Responsible AI for Children Safe Playful and Empowering Learning — -Speaker 1: Role/title not specified – appears to be a student or child participant in educational videos/demonstrations…
S5
AI Impact Summit 2026: Global Ministerial Discussions on Inclusive AI Development — -Speaker 1- Role/title not specified (appears to be a moderator/participant) -Speaker 2- Role/title not specified (appe…
S6
Building the Workforce_ AI for Viksit Bharat 2047 — -Speaker 1- Role/Title: Not specified, Area of expertise: Not specified -Speaker 3- Role/Title: Not specified, Area of …
S7
How AI Is Transforming Indias Workforce for Global Competitivene — – Srikrishna Ramakarthikeyan- Sue Daley OBE
S8
What happens to software careers in the AI era — AI is rapidly reshaping what it means to work as a software developer, and the shift is already visible inside organisat…
S9
The mismatch between public fear of AI and its measured impact — In software development, “AI-assisted coding” usually means autocomplete, boilerplate generation, or debugging assistance…
S10
Fireside Conversation: 02 — Economic Impact and Gradual Transformation When addressing AI’s economic impact, LeCun cites economists including “Phil…
S11
https://dig.watch/event/india-ai-impact-summit-2026/how-ai-is-transforming-indias-workforce-for-global-competitivene — And I think there is a pretty big gap. Actually, I think that gap is good for workforce. Because no matter what the capa…
S12
State of Play: AI Governance / DAVOS 2025 — Krishna emphasizes the need to drive down the cost of AI technology to make it more inclusive and accessible globally. H…
S13
Singapore takes global lead in AI skills adoption — Workers in Singapore have emerged as the world leaders in adopting AI skills, according to LinkedIn’s recent Future of Wo…
S14
Bridging the AI innovation gap — LJ Rich: to invite our opening keynote. It’s a pleasure to invite to the stage the director of the Telecommunications St…
S15
AI agents offer major value but trust and data gaps remain — AI agents could drive up to $450 billion in economic value by 2028, according to new research by Capgemini. The gains wou…
S16
How can Artificial Intelligence (AI) improve digital accessibility for persons with disabilities? — The omnipresence of Artificial Intelligence (AI) and its applications across different sectors necessitates considering …
S17
How Multilingual AI Bridges the Gap to Inclusive Access — “AI can only serve the public good if it serves all languages and all cultures.”[1]. “Today, linguistic exclusion remain…
S18
Generative AI: Steam Engine of the Fourth Industrial Revolution? — To ensure widespread innovation and access to AI, it is imperative to keep AI platforms open and avoid closed ecosystems…
S19
Digital Democracy Leveraging the Bhashini Stack in the Parliamen — This comment fundamentally reframes how we think about AI system design – moving from standardization that excludes outl…
S20
Day 0 Event #248 No One Left Behind Digital Inclusion As a Human Right in the Global Digital Age — Rather than retrofitting accessibility, inclusive design must be built into systems from the beginning. This requires re…
S21
Artificial intelligence (AI) – UN Security Council — Furthermore, there was a consensus on the necessity for enhanced data literacy and data management skills. As AI systems…
S22
AI (and) education: Convergences between Chinese and European pedagogical practices — Norman Sze: Thank you for introduction. It’s my honor to join this forum and share insight from perspective of professio…
S23
Inclusive AI Starts with People Not Just Algorithms — Future careers will combine multiple disciplines, and the power lies in asking the right questions rather than just know…
S24
High Level Session 3: AI & the Future of Work — Nthati Moorosi: Thank you, Programme Director, the moderator, and thank you for affording me this opportunity to talk a …
S25
How will AI transform the UK’s job landscape? — The Institute for Public Policy Research (IPPR) released its report, ‘Transformed by AI’, signalling a potential structu…
S26
Interdisciplinary approaches — AI-related issues are being discussed in various international spaces. In addition to the EU, OECD, and UNESCO, organisa…
S27
Why science metters in global AI governance — These key comments collectively transformed what could have been a technical discussion about AI governance into a profo…
S28
AI-Driven Enforcement_ Better Governance through Effective Compliance & Services — These key comments fundamentally shaped the symposium by establishing a framework for responsible, human-centric AI adop…
S29
Building the Next Wave of AI_ Responsible Frameworks & Standards — The panel demonstrated a maturing field where practitioners are converging on core principles while offering complementa…
S30
How to make AI governance fit for purpose? — These key comments fundamentally elevated the discussion from a typical regulatory debate to a sophisticated exploration…
S31
Secure Finance Risk-Based AI Policy for the Banking Sector — The panel explored how AI governance frameworks must account for India’s linguistic diversity, demographic heterogeneity…
S32
The UK government unveils a new Wireless Infrastructure Strategy — The UK government has announced a new Wireless Infrastructure Strategy to boost digital connectivity, with an ambition for…
S33
Contents — JD is a Chinese retailer with significant e-commerce logistics. The operation of infrastructure networks, logistics, sou…
S34
Part 3: ‘Readiness across the spectrum: Countries’ — ESCWA’s19 public policy recommendationsprovide guidance for governments navigating the metaverse landscape. These recomm…
S35
Ministerial Roundtable — There’s a stark contrast between countries that have achieved near-universal connectivity (like Azerbaijan) and those st…
S36
Table of Contents — 1. (I) The process of identifying, measuring, and controlling (i.e., mitigating) risks in information systems so as to r…
S37
Summary — The Principality of Liechtenstein is supporting, developing and shaping digitalisation for the benefit of the population…
S38
e-Commerce Policy 2.0 (2025-30) Expanding Markets. Empowering People. Enabling Trust. — Participation will be open to startups, EMIs, banks, and other innovators, including nonlicensed entities, sup…
S39
Diplomatic policy analysis — Overreliance on technology:While machine learning and analytics are powerful tools, they are not infallible. Overdepende…
S40
Driving Social Good with AI_ Evaluation and Open Source at Scale — Moderate disagreement with significant implications. The disagreements reflect deeper tensions between technical efficie…
S41
Open Forum #79 Regulation of Autonomous Weapon Systems: Navigating the Legal and Ethical Imperative — **Human Control and Oversight**: Despite different approaches, speakers across perspectives emphasized the importance of…
S42
How AI Is Transforming Indias Workforce for Global Competitivene — And I think there is a pretty big gap. Actually, I think that gap is good for workforce. Because no matter what the capa…
S43
Building Inclusive Societies with AI — So I have a last question to each of you for what I request is maybe just a minute or two, a quick one. So Arutati, as p…
S44
AI adoption reshapes UK scale-up hiring policy framework — AI adoption is prompting UK scale-ups to recalibrate workforce policies. Survey data indicates that 33% of founders antici…
S45
Australia weighs risks and rewards of rapid AI adoption — AI is reshaping Australia’s labour market at a pace that has reignited anxiety about job security and skills. Experts say…
S46
Generative AI: Steam Engine of the Fourth Industrial Revolution? — Additionally, the analysis notes that the need for skill development aligns with the Sustainable Development Goals (SDGs…
S47
Living with the genie: Responsible use of genAI in content creation — In conclusion, the summary reiterates that AI algorithms are significantly shaped by their input data, with predominantl…
S48
AI Impact Summit 2026: Global Ministerial Discussions on Inclusive AI Development — And this requires proactive and coherent policy responses. First, people must be at the center of AI strategy, as we hea…
S49
Projecting Digital economy rules on Global South’s AI regulations: what is needed to safeguard human rights? ( Data Privacy Brasil Research Association) — Additionally, the analysis notes a neutral argument that there is a regulatory “race to the bottom.” This perspective hi…
S50
Interdisciplinary approaches — AI-related issues are being discussed in various international spaces. In addition to the EU, OECD, and UNESCO, organisa…
S51
AI for Social Empowerment_ Driving Change and Inclusion — The required policy responses span multiple domains:
S52
Hard power of AI — In conclusion, the analysis provides insights into the dynamic relationship between technology, politics, and AI. It hig…
S53
How to make AI governance fit for purpose? — This philosophical insight resonated throughout the discussion, providing a framework for understanding why AI governanc…
S54
Sovereign AI for India – Building Indigenous Capabilities for National and Global Impact — The discussion maintained an optimistic and collaborative tone throughout, characterized by constructive problem-solving…
S55
Artificial intelligence (AI) – UN Security Council — Furthermore, there was a consensus on the necessity for enhanced data literacy and data management skills. As AI systems…
S56
AI (and) education: Convergences between Chinese and European pedagogical practices — Norman Sze: Thank you for introduction. It’s my honor to join this forum and share insight from perspective of professio…
S57
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — AI is not just a technology but a social technical system, a system of systems, and one discipline alone is not sufficie…
S58
Responsible AI for Children Safe Playful and Empowering Learning — Wonderful. So there’s two things that we need for empowerment. One is foundational skills. The child needs to have a bas…
S59
AI-Powered Chips and Skills Shaping Indias Next-Gen Workforce — The discussion reveals strong consensus on key strategic directions: comprehensive ecosystem development beyond chip man…
S60
Discussion Report: AI as Foundational Infrastructure – A Conversation Between Laurence Fink and Satya Nadella — So the mindset we as leaders should have is we need to think about changing the workflow with the technology. Then that …
S61
Can (generative) AI be compatible with Data Protection? | IGF 2023 #24 — Armando José Manzueta-Peña: Well, thank you, Luca, for the presentation. I’m more than thrilled to be present here and to…
S62
NextGen AI Skills Safety and Social Value – technical mastery aligned with ethical standards — Dr. Sarabjot emphasizes critical thinking, questioning AI outputs, and understanding AI limitations as the primary requi…
S63
How AI Is Transforming Indias Workforce for Global Competitivene — Are we having the same conversations? Are we facing the same kind of issues? I think what I’ve just heard from my fellow…
S64
What happens to software careers in the AI era — AI is rapidly reshaping what it means to work as a software developer, and the shift is already visible inside organisat…
S65
TCS boosts development with AI-driven engineering — Tata Consultancy Services (TCS) is harnessing generative AI to accelerate development in the rapidly growing field of engi…
S66
https://dig.watch/event/india-ai-impact-summit-2026/how-ai-is-transforming-indias-workforce-for-global-competitivene — That is true. So I think whatever disruption we saw I thought would be there in infra. I think it is there but it is a p…
S67
Lightning Talk #246 AI for Sustainable Development Public Private Sector Roles — Guo argues that universities, as engines of knowledge and innovation, have a responsibility to lead AI development in wa…
S68
Workshop 6: Perception of AI Tools in Business Operations: Building Trustworthy and Rights-Respecting Technologies — The discussion identified key AI governance challenges including bias, transparency, privacy, and oversight. Addressing …
S69
High-level AI Standards panel — Need to move from purely technical approach to multidisciplinary, socio-technical paradigm
S70
Can (generative) AI be compatible with Data Protection? | IGF 2023 #24 — However, one lingering challenge in AI regulation is finding the right balance between adaptability and regulatory predi…
S71
AI-Driven Enforcement_ Better Governance through Effective Compliance & Services — These key comments fundamentally shaped the symposium by establishing a framework for responsible, human-centric AI adop…
S72
Pre 12: Resilience of IoT Ecosystems: Preparing for the Future — As AI becomes integrated into IoT systems, proper governance frameworks are essential to ensure ethical and trustworthy …
S73
Balancing innovation and oversight: AI’s future requires shared governance — At IGF 2024, day two in Riyadh, policymakers, tech experts, and corporate leaders discussed one of the most pressing dil…
S74
AI governance struggles to match rapid adoption — Accelerating AI adoption is exposing clear weaknesses in corporate AI governance. Research shows that while most organisat…
S75
The UK government unveils a new Wireless Infrastructure Strategy — The UK government has announced a new Wireless Infrastructure Strategy to boost digital connectivity, with an ambition for…
S76
Bridging the Digital Skills Gap: Strategies for Reskilling and Upskilling in a Changing World — Adapting global best practices to local contexts while maintaining international cooperation and knowledge sharing Addr…
S77
Contents — JD is a Chinese retailer with significant e-commerce logistics. The operation of infrastructure networks, logistics, sou…
S78
UK government invests £1.1 billion to upskill workforce in future technologies — The UK government has unveiled a £1.1 billion package to upskill thousands of individuals in future technologies such as A…
S79
United Kingdom — The UK Digital Strategy, published in 2022, outlines a comprehensive approach to strengthening digital foundations, prom…
S80
Adoption of the agenda and organization of work — Inclusion of safeguards such as human rights provisions is necessary for international cooperation and law enforcement …
S81
What is the nature of the internet? Different Approaches | IGF 2023 WS #445 — Bruna Martin-Santos: Thanks, Paula. Yeah, just to add some more thoughts to this, I think I agree with some of both our c…
S82
AI That Empowers Safety Growth and Social Inclusion in Action — Second, they want to close capacity gaps. Many developing countries need infrastructure, skills, and compute to particip…
S83
Ministry of Communications & Information Technology — The following key principles guide our approach to information security and further maintain the confidentiality, integr…
S84
Digital Economy Policy Legal Instruments — 18. See for example Dark Reading, 2012, which provides examples of long-term attacks at the US Chamber of Commerce, Norte…
S85
Laying the foundations for AI governance — The tone was collaborative and constructive throughout, with panelists building on each other’s points rather than disag…
S86
The Dawn of Artificial General Intelligence? / DAVOS 2025 — The purpose of this panel discussion was to explore different perspectives on the development of artificial general inte…
S87
Comprehensive Report: AI’s Impact on the Future of Work – Davos 2026 Panel Discussion — The tone was notably optimistic and solution-oriented rather than alarmist. While acknowledging legitimate concerns abou…
S88
The Foundation of AI Democratizing Compute Data Infrastructure — The tone was collaborative and solution-oriented throughout, with speakers building on each other’s ideas rather than de…
S89
How AI Is Transforming Diplomacy and Conflict Management — The discussion maintained a consistently thoughtful and cautiously optimistic tone throughout. Participants demonstrated…
S90
Networking Session #132 Cyberpolicy Dialogues:Connecting research/policy communities — The tone of the discussion was collaborative and solution-oriented. It began in a more formal, presentation-style format…
S91
High-Level Dialogue: The role of parliaments in shaping our digital future — The discussion maintained a tone of cautious optimism throughout. Speakers acknowledged significant challenges and risks…
S92
Transforming Agriculture_ AI for Resilient and Inclusive Food Systems — The tone was consistently optimistic yet pragmatic throughout the conversation. Speakers maintained an encouraging outlo…
S93
Comprehensive Discussion Report: Governance Frameworks for Reducing Digital Divides in African and Francophone Contexts — The tone was pragmatic and solution-oriented, with speakers expressing both frustration with past failures and cautious …
S94
WS #103 Aligning strategies, protecting critical infrastructure — The tone was largely collaborative and solution-oriented. Speakers built on each other’s points and emphasized the need …
S95
AI-Powered Chips and Skills Shaping Indias Next-Gen Workforce — The discussion maintained a consistently optimistic and collaborative tone throughout. Speakers expressed enthusiasm abo…
S96
Host Country Open Stage — The tone was consistently professional, optimistic, and forward-looking throughout. Speakers maintained an informative, …
S97
Multigenerational Collaboration: Rethinking Work, Learning and Inclusion in the Digital Age — The discussion maintained a professional yet urgent tone throughout, with speakers expressing both optimism about collab…
S98
HETEROGENEOUS COMPUTE FOR DEMOCRATIZING ACCESS TO AI — The discussion maintained a professional, collaborative, and optimistic tone throughout. Panelists demonstrated mutual r…
S99
AI for Good Technology That Empowers People — -Vishnu Ram OV- Session moderator/host
S100
Internet standards and human rights | IGF 2023 WS #460 — Challenges faced at standard forums were discussed, and there was an emphasis on finding ways to overcome these challeng…
S101
The Global Power Shift India’s Rise in AI & Semiconductors — High level of consensus with complementary perspectives rather than conflicting views. The speakers come from different …
S102
https://dig.watch/event/india-ai-impact-summit-2026/fireside-conversation-01 — Dario, you were in India in October, and you’re back again now. You spend a lot of time, actually, with the developer co…
S103
What Proliferation of Artificial Intelligence Means for Information Integrity? — Hicks argues that the information environment is undergoing rapid transformation that we haven’t fully grasped, with soc…
S104
Reinventing Digital Inclusion / DAVOS 2025 — Robert argues that many AI applications, especially for government and local use cases, don’t require the most advanced …
S105
Emerging Markets: Resilience, Innovation, and the Future of Global Development — Because it’s more open source. The costs are lower. It allows you to build on top of that.
S106
Labour market remains stable despite rapid AI adoption — Surveys show persistent anxiety about AI-driven job losses. Nearly three years after ChatGPT’s launch, labour data indica…
S107
© 2019, United Nations — From the perspective of ‘creative destruction’ (Schumpeter, 1942), the introduction of new technologies leads to some …
S108
The Impact of Digitalisation and AI on Employment Quality – Challenges and Opportunities — During a session focused on the impact of digitalisation on employment, experts from the International Labour Organisati…
S109
IGF Leadership Panel Event — However, Vint Cerf provided a counterbalancing perspective, arguing that “despite challenges, we must maintain enthusias…
S110
AI for Safer Workplaces & Smarter Industries Transforming Risk into Real-Time Intelligence — Piyush Nangru articulated the transformation in educational terms, stating that “coding is no longer a skill. It’s table…
S111
Open Forum: A Primer on AI — In conclusion, AI holds great promise in reshaping industries and driving innovation. It has the potential to create new…
S112
AI to disrupt jobs, warns DeepMind CEO, as Gen Alpha faces new realities — AI will likely cause significant job disruption in the next five years, according to Demis Hassabis, CEO of Google DeepMin…
S113
Building Trustworthy AI Foundations and Practical Pathways — However, Thakkar warns that current AI systems suffer from underlying problems that companies are addressing with superf…
S114
Writing as thinking in the age of AI — In his article, Richard Gunderman argues that writing is not merely a way to present ideas but a core human activity thro…
S115
Experts urge broader values in AI development — Since the launch of ChatGPT in late 2023, the private sector has led AI innovation. Major players like Microsoft, Google,…
S116
Acknowledgements — The report identifies four key areas where support is needed:
S117
Rights and Permissions — Creating a skilled workforce for the future of work rests on the growing demand for advanced cognitive skills, sociobeha…
S118
Towards a Reskilling Revolution — In addition to more generalized strategic workforce planning, companies can upgrade their future workforce preparedness …
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Srikrishna Ramakarthikeyan
12 arguments · 139 words per minute · 2207 words · 950 seconds
Argument 1
AI reshapes software engineering more than testing or infra (Srikrishna Ramakarthikeyan)
EXPLANATION
He argues that AI’s disruptive impact is greatest on software engineering, surpassing its effects on testing and infrastructure management. The shift reflects how AI tools are increasingly automating coding tasks.
EVIDENCE
He first noted that earlier he would have placed testing before software engineering in terms of AI impact, but now flips that view, stating software engineering is the most affected area [34-37]. He also observes that disruption in infrastructure is plateauing and not showing leaps and bounds of change, unlike software engineering [39-42].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
External analyses highlight that AI is fundamentally changing software developer roles and that AI-assisted coding mainly affects engineering tasks rather than testing or infrastructure, supporting the claim that software engineering is most impacted [S8], [S9].
MAJOR DISCUSSION POINT
Scope of AI disruption in IT services
Argument 2
Adoption will be gradual, yielding low single‑digit workforce impact per year (Srikrishna Ramakarthikeyan)
EXPLANATION
He predicts that AI adoption will proceed slowly, resulting in only modest workforce displacement of a few percent annually. The limited impact is due to the time needed for organizations to integrate AI capabilities.
EVIDENCE
He estimates the impact on workforces to be low single-digit percentages per year, perhaps 1-2% now and 2-3% next year, citing the slow pace of adoption and multiple constraints [238-245].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Commentary notes that AI-driven productivity gains are modest (around 0.6% annually) and that a gap exists before large-scale adoption, indicating a low single-digit impact each year [S10], [S1].
MAJOR DISCUSSION POINT
Pace of AI adoption and workforce impact
DISAGREED WITH
Ravi Aurora
Argument 3
AI drives coding cost toward zero, enabling solutions previously infeasible (Srikrishna Ramakarthikeyan)
EXPLANATION
He claims that AI will reduce the cost of writing code to near zero, allowing organizations to solve problems that were previously too complex or expensive. This democratizes software creation and expands the range of possible applications.
EVIDENCE
He states that the cost of coding will become zero, which will let anyone solve many problems that were previously unaffordable or too complex, effectively turning code into a free resource [164-168].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The speaker himself is quoted saying “the cost of coding is going to become zero,” directly corroborating the argument [S1].
MAJOR DISCUSSION POINT
Economic impact of AI on software development
DISAGREED WITH
Sue Daley OBE
Argument 4
Digital‑native talent holds a natural advantage in AI adoption (Srikrishna Ramakarthikeyan)
EXPLANATION
He observes that younger, digitally native workers adapt more quickly to AI tools than older employees, giving them a competitive edge. Their familiarity with digital platforms translates into faster AI uptake.
EVIDENCE
He notes that young AI-native talent outperforms older workers, comparing it to the difference between Instagram users and non-users, and cites hiring top engineering school graduates who excelled at new AI techniques [141-142].
MAJOR DISCUSSION POINT
Talent advantage of digital natives
DISAGREED WITH
Sangeeta Gupta
Argument 5
Squad sizes shrinking (e.g., from 7‑10 to 3) and delivery cycles accelerating (Srikrishna Ramakarthikeyan)
EXPLANATION
He explains that AI enables smaller, more efficient development teams, reducing typical squad sizes and cutting delivery timelines dramatically. This reflects a re‑engineering of agile processes.
EVIDENCE
He describes a typical software squad shrinking from 7-10 members to just three (product owner, developer, tester) and the delivery cycle dropping from two weeks to two days [238-239].
MAJOR DISCUSSION POINT
Organizational restructuring due to AI
Argument 6
Role redesign is critical to realize AI value; adoption gaps hinder impact (Srikrishna Ramakarthikeyan)
EXPLANATION
He stresses that without redesigning roles to incorporate AI, organizations will not capture its benefits. Adoption gaps further slow the realization of AI’s potential.
EVIDENCE
He emphasizes that AI value will not be seen unless roles are redesigned, noting that adoption gaps are a major barrier and that impact may stay in low single-digit percentages [238-247]. He also points out that slow adoption is partly due to the need for organizational redesign rather than just new tools [252-253].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Discussion stresses that redesigning roles, not merely reskilling, is essential for capturing AI benefits, and that adoption gaps keep impact low [S1].
MAJOR DISCUSSION POINT
Importance of role redesign
Argument 7
Inclusion by design; free academic AI resources to democratize access (Srikrishna Ramakarthikeyan)
EXPLANATION
He argues that AI should be made inclusive from the outset, leveraging free academic resources to broaden participation. This approach mirrors how the internet became inclusive through open academic contributions.
EVIDENCE
He states that inclusiveness must be built by design, citing the internet’s inclusivity due to free academic resources and calling for similar openness for AI [365-373].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Calls for open AI ecosystems and inclusive design echo the need for free academic resources to broaden participation [S18], [S20].
MAJOR DISCUSSION POINT
Inclusive AI development
Argument 8
Inclusiveness must be built into AI systems by design (Srikrishna Ramakarthikeyan)
EXPLANATION
He reiterates that AI systems need to be designed with inclusivity at their core, ensuring that diverse users benefit equally. This principle should guide policy and technical development.
EVIDENCE
He succinctly says, “inclusiveness has to be by design,” and expands by noting the internet’s inclusive nature stemming from free academic contributions, urging the same for AI [365][371-373].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The same inclusive-by-design principle is advocated in literature on open AI and universal design [S18], [S20].
MAJOR DISCUSSION POINT
Designing inclusive AI
Argument 9
AI literacy should be treated as a foundational skill comparable to language proficiency, essential for all professionals
EXPLANATION
He likens AI knowledge to English, suggesting that understanding AI will become a basic requirement for effective participation in the modern economy.
EVIDENCE
He states “AI knowledge is like English, it’s foundational, it’s fundamental” [291-292].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Data literacy is highlighted as essential for AI work, suggesting a broader foundational literacy analogous to AI literacy [S21].
MAJOR DISCUSSION POINT
AI as a basic literacy
AGREED WITH
Ravi Aurora
Argument 10
Future engineers need to be interdisciplinary generalists rather than siloed specialists, because AI problems span multiple domains
EXPLANATION
He argues that the traditional separation of engineering disciplines is no longer useful; instead, a unified engineering approach is required to address AI challenges.
EVIDENCE
He says “You don’t want a mechanical engineer, you don’t want a software engineer, you don’t want an AI engineer… you want an engineer” [300-304].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Future AI careers are described as requiring multiple disciplinary fluency, reinforcing the need for interdisciplinary engineers [S23].
MAJOR DISCUSSION POINT
Interdisciplinary engineering for AI
Argument 11
AI will generate more new job opportunities than it eliminates, because it enables solving problems previously infeasible, expanding the scope of work.
EXPLANATION
He argues that AI expands the range of problems that can be tackled, creating entirely new domains of activity and therefore more employment than the jobs it displaces. This net‑positive effect stems from AI lowering the cost of coding to near zero and unlocking solutions that were previously unaffordable.
EVIDENCE
He notes that opportunities for a young, technically savvy person are enormous and that AI’s real value lies in solving problems that could not be solved before, while also stating that the cost of coding will become zero, allowing anyone to address previously impossible challenges [45][166-168].
MAJOR DISCUSSION POINT
Net job creation from AI
Argument 12
AI policy should prioritize inclusiveness and broad participation rather than focusing solely on regulation, contrasting US approach with more inclusive models.
EXPLANATION
He argues that discussions in the United States centre on whether and how to regulate AI, while the United Kingdom and other regions emphasize making AI inclusive for all, suggesting that policy should be designed to ensure broad access and participation.
EVIDENCE
He notes that in the US the conversation is about regulation (who should regulate, and whether it should be the state or central government), whereas he observes a focus on inclusiveness in the UK and elsewhere, highlighting a material difference in approaches and emphasizing that inclusiveness will lead to better outcomes [120-134].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Policy discussions emphasize inclusive AI development over pure regulatory focus, aligning with the argument [S20], [S23].
MAJOR DISCUSSION POINT
Inclusive AI policy design
Ravi Aurora
14 arguments · 142 words per minute · 2127 words · 896 seconds
Argument 1
AI is embedded in decision‑making, shifting skill needs beyond downstream compliance (Ravi Aurora)
EXPLANATION
He points out that AI is now part of core decision‑making processes rather than a back‑office compliance function, requiring new skill sets across the organization.
EVIDENCE
He explains that AI is being embedded into decision-making, public infrastructure, service delivery and governance, moving it beyond a downstream compliance role [51-53].
MAJOR DISCUSSION POINT
AI’s expanded role in organizations
Argument 2
Need for system‑level judgment to monitor model drift and high‑stakes decisions (Ravi Aurora)
EXPLANATION
He emphasizes that professionals must be able to assess AI outputs, detect model drift, and intervene when decisions have high stakes, especially in regulated industries.
EVIDENCE
He describes the need for system-level judgment to understand AI outputs, detect model drift, and act appropriately in high-stakes, regulated contexts [57-64].
MAJOR DISCUSSION POINT
Critical skill: system‑level judgment
Argument 3
Interdisciplinary fluency across engineering, regulation, risk and user behavior (Ravi Aurora)
EXPLANATION
He argues that AI challenges sit at the intersection of multiple domains, so professionals need fluency across engineering, regulation, risk management, and user behavior.
EVIDENCE
He notes that AI challenges are not purely technical but span engineering, regulation, risk, and user behavior, requiring interdisciplinary fluency [64-66].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need for interdisciplinary skill sets in AI careers is underscored in discussions about future AI workforces [S23].
MAJOR DISCUSSION POINT
Interdisciplinary skill requirement
Argument 4
Continuous learning mindset to keep pace with evolving AI models (Ravi Aurora)
EXPLANATION
He stresses that because AI systems continuously evolve with data, workers must adopt a lifelong learning attitude to stay relevant.
EVIDENCE
He highlights the necessity of a continuous learning mindset as AI models evolve with data and the workforce must evolve alongside them [67-70].
MAJOR DISCUSSION POINT
Lifelong learning for AI
Argument 5
Deep contextual awareness for multilingual and diverse user contexts (Ravi Aurora)
EXPLANATION
He points out that AI must understand varied linguistic and cultural contexts in India, requiring engineers to embed contextual awareness into models.
EVIDENCE
He discusses the need for AI agents to grasp multiple languages, dialects, informal systems, and real-life contexts to avoid misinterpretation [71-74].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Research on multilingual AI stresses that serving all languages and cultures is essential for inclusive AI outcomes [S17].
MAJOR DISCUSSION POINT
Contextual awareness in AI
Argument 6
AI literacy must be foundational across all disciplines, not just computer science (Ravi Aurora)
EXPLANATION
He argues that AI education should extend beyond CS majors to all fields, ensuring a broad base of AI‑literate professionals.
EVIDENCE
He states that AI should not be limited to computer-science majors and must be incorporated across a broad set of disciplines, emphasizing curriculum redesign [217-219] and embedding AI governance early [220-222].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Calls for data literacy across the workforce imply that AI literacy should also be cross-disciplinary [S21].
MAJOR DISCUSSION POINT
Broad AI education
Argument 7
Early integration of governance with interdisciplinary teams is required (Ravi Aurora)
EXPLANATION
He describes Mastercard’s AI governance model where governance functions work horizontally across data, science, product, legal, compliance, and engineering, highlighting the need for early integration.
EVIDENCE
He outlines a formal AI governance framework with a chief AI & data governance officer, privacy-by-design, and a cross-functional AI governance team spanning data, science, product, legal, compliance, and engineering [191-200].
MAJOR DISCUSSION POINT
Governance integration
Argument 8
Shift from pure coding to AI governance, code verification and oversight (Sue Daley OBE)
Argument 9
Emphasis on redesigning roles rather than only reskilling existing staff (Ravi Aurora)
EXPLANATION
He argues that AI transforms tasks within jobs, so organizations should focus on redesigning roles to incorporate AI rather than merely reskilling employees.
EVIDENCE
He stresses the need to redesign roles, not just reskill, because AI changes tasks rather than eliminates whole roles, and highlights role redesign as a priority [222-226].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Analyses point out that role redesign, not just reskilling, is the primary lever for AI adoption impact [S1].
MAJOR DISCUSSION POINT
Role redesign vs reskilling
Argument 10
Corporations should co‑design curricula with academia and embed AI governance principles (Ravi Aurora)
EXPLANATION
He suggests that companies collaborate with universities to create courses that reflect real‑world AI challenges and embed governance principles from the start.
EVIDENCE
He describes how corporations can work with chief learning officers and academia to design courses based on real-world examples, ensuring AI governance is embedded early in curricula [215-222].
MAJOR DISCUSSION POINT
Industry‑academia curriculum co‑design
Argument 11
Concentration risk – need equitable access to data, compute and training across geographies (Ravi Aurora)
EXPLANATION
He warns that a few institutions may dominate AI development due to superior data and compute resources, creating concentration risk; equitable access is essential.
EVIDENCE
He highlights concentration risk where a small set of companies or talent pools pull ahead because of better data, compute, and research ecosystems, and calls for equitable access across geographies, including tier-2 and tier-3 institutions [326-339].
MAJOR DISCUSSION POINT
Concentration risk in AI ecosystem
Argument 12
Mastercard’s formal AI governance framework with chief AI & data governance officer, privacy‑by‑design (Ravi Aurora)
EXPLANATION
He outlines Mastercard’s comprehensive AI governance structure, featuring dedicated leadership and a privacy‑by‑design approach to manage AI risks.
EVIDENCE
He details a formal AI governance framework with a chief AI and data governance officer, a chief privacy officer, and a privacy-by-design approach integrated across product, engineering, legal, compliance, and data science teams [191-200][194-197].
MAJOR DISCUSSION POINT
Corporate AI governance model
Argument 13
Contextual awareness is vital to avoid bias and exclusion in AI outcomes (Ravi Aurora)
EXPLANATION
He reiterates that AI must understand local contexts, languages, and cultural nuances to prevent biased or exclusionary results.
EVIDENCE
He stresses the importance of contextual awareness for multilingual and diverse user contexts, noting that without it AI can produce biased outcomes [71-74].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Inclusive design literature and studies on bias highlight that contextual awareness is key to preventing exclusionary AI results [S16], [S17], [S20].
MAJOR DISCUSSION POINT
Avoiding bias through context
Argument 14
Effective AI governance requires dedicated risk‑management professionals who proactively oversee AI systems rather than merely enforce compliance rules.
EXPLANATION
He stresses that governing AI at scale is fundamentally a workforce challenge that needs people who can manage AI‑related risk across functions, not just auditors checking boxes. Proactive risk‑management roles are essential for responsible AI deployment.
EVIDENCE
He states that governing AI at scale is a workforce challenge requiring interdisciplinary skill and that governance professionals must manage risk and not just enforce rules, highlighting the need for a cross-functional AI governance team [201-202].
MAJOR DISCUSSION POINT
Proactive risk management in AI governance
Sue Daley OBE
9 arguments · 184 words per minute · 2940 words · 957 seconds
Argument 1
Youth anxiety can be turned into agency through upskilling and reskilling (Sue Daley OBE)
EXPLANATION
She suggests that the anxiety young workers feel about AI can be transformed into proactive agency by providing upskilling and reskilling opportunities.
EVIDENCE
She acknowledges existing anxiety and concern among workers about displacement, then describes turning that anxiety into agency through continuous learning, upskilling programs, and reskilling pathways, including a one-year conversion course for non-AI graduates [98-104][105-107].
MAJOR DISCUSSION POINT
Managing youth anxiety
Argument 2
Lifelong learning plus human/social (“soft”) skills are essential (Sue Daley OBE)
EXPLANATION
She emphasizes that alongside technical abilities, workers need human‑centric soft skills such as communication and client‑facing abilities to add value in an AI‑augmented workplace.
EVIDENCE
She notes that automation frees people for problem-solving and client advisory roles, stressing the need to teach human/social skills in addition to technical and governance competencies [94-95][95-96].
MAJOR DISCUSSION POINT
Importance of soft skills
Argument 3
UK AI Skills Partnership aims to train >1 million people, includes one‑year conversion courses (Sue Daley OBE)
EXPLANATION
She describes the UK government’s AI Skills Partnership, which targets training over a million individuals and offers a one‑year conversion programme for those without an AI degree.
EVIDENCE
She cites the AI Skills Partnership’s goal to train over one million people and mentions a one-year conversion course for university graduates lacking an AI background [95-107].
MAJOR DISCUSSION POINT
Large‑scale upskilling initiative
Argument 4
Whole‑of‑government, industry‑academia collaboration is needed for coordinated upskilling (Sue Daley OBE)
EXPLANATION
She argues that coordinated effort across government bodies, industry, and academia is essential to deliver effective AI upskilling and reskilling at scale.
EVIDENCE
She references the UK’s AI Opportunities Action Plan, the AI Skills Partnership bringing together government, TechUK, and other bodies to design upskilling programmes, and stresses the need for joint collaboration across sectors [89-96].
MAJOR DISCUSSION POINT
Coordinated upskilling effort
Argument 5
Interoperability of skill credentials and a national taxonomy are priorities (Sue Daley OBE)
EXPLANATION
She highlights the need for a common framework that makes skill credentials portable and recognizable across employers and borders, supported by a national taxonomy of skills.
EVIDENCE
She discusses the importance of interoperable skill credentials, a national taxonomy, and a common language for skills, noting initiatives like TechSkills Gold Accreditation that align university curricula with employer needs [110-118].
MAJOR DISCUSSION POINT
Standardising skill credentials
Argument 6
Effective AI governance requires interdisciplinary skills and human oversight (Sue Daley OBE)
EXPLANATION
She asserts that AI governance cannot rely solely on technical controls; it needs interdisciplinary expertise and continuous human oversight to ensure responsible AI use.
EVIDENCE
She emphasizes that AI challenges are interdisciplinary and that human skills are needed to interact with clients and oversee AI outcomes, linking governance with both technical and human competencies [92-95].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Future AI roles are described as interdisciplinary, requiring human oversight for responsible governance [S23].
MAJOR DISCUSSION POINT
Interdisciplinary governance
Argument 7
Human‑in‑the‑loop needed for code checking and decision validation to prevent over‑automation (Sue Daley OBE)
EXPLANATION
She warns that even with AI‑assisted coding, humans must verify code and AI decisions to avoid errors and over‑automation.
EVIDENCE
She points out that while AI can generate code, there remains a need for humans to check that code, and later to verify the AI’s own checks, highlighting the necessity of human oversight in governance [175-178].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Analyses of AI-assisted coding note that tools are not fully autonomous and still need human verification, supporting the human-in-the-loop requirement [S9].
MAJOR DISCUSSION POINT
Human oversight in AI coding
Argument 8
AI initiatives need an iterative, flexible approach because static policies quickly become obsolete as technology evolves
EXPLANATION
She argues that there is no single, permanent solution for AI governance; programmes must be designed to adapt rapidly to new developments and shifting contexts.
EVIDENCE
She says “I don’t think there’s a silver bullet… the moment you put in a task force or initiative, it may very quickly need to shift and need to change” [109-110].
MAJOR DISCUSSION POINT
Need for adaptable AI policy frameworks
Argument 9
AI upskilling programmes should include clear pathways for mid‑career professionals to transition into new AI‑related roles, not only entry‑level training.
EXPLANATION
She points out that beyond training fresh graduates, there is a need for structured reskilling routes that enable existing workers to move into AI‑focused positions, ensuring the whole workforce can adapt to AI‑driven change.
EVIDENCE
She mentions “pathways for mid-career into new careers” as part of the AI Skills Partnership initiatives, indicating a focus on reskilling workers already in the labour market [105-107].
MAJOR DISCUSSION POINT
Mid‑career reskilling pathways
Sangeeta Gupta
7 arguments · 137 words per minute · 1828 words · 796 seconds
Argument 1
India’s fragmented approach versus the UK’s integrated model (Sangeeta Gupta)
EXPLANATION
She contrasts India’s disaggregated, multi‑government efforts on AI workforce transition with the UK’s more coordinated whole‑of‑government strategy.
EVIDENCE
She notes that in India multiple governments and organisations like NASSCOM are working on parts of the AI upskilling pie without a whole-of-government approach, and asks whether the UK has an integrated model [108-109].
MAJOR DISCUSSION POINT
Comparative policy coordination
Argument 2
AI creates both opportunities and anxiety among Indian youth, requiring clear understanding and navigation of AI-driven workforce shifts
EXPLANATION
She points out that while AI opens new possibilities for the Indian workforce, it also generates significant concern among young people, making it essential to demystify AI and guide the transition.
EVIDENCE
She notes that AI is “obviously creating a number of opportunities” but also “creating a lot of anxiety amongst the youth” and stresses the need to “decode what does AI really mean and how do we navigate these shifts” [9-12].
MAJOR DISCUSSION POINT
Balancing AI opportunities with youth anxiety
Argument 3
Higher education and school curricula must be restructured to embed AI principles, interdisciplinary skills, and continuous learning
EXPLANATION
She argues that current college and school programmes are insufficient for the rapidly changing AI landscape and must be updated to equip graduates with the necessary technical and soft skills.
EVIDENCE
She states that “there’s a lot of change that is needed at a college level and school level on how, you know, even how you’re learning so that you are ready for this very, very changing world” [73-74].
MAJOR DISCUSSION POINT
Curriculum redesign for AI readiness
Argument 4
Upskilling efforts must go beyond elite institutions and include tier‑2 and tier‑3 colleges to avoid marginalising a large talent pool
EXPLANATION
She warns that focusing only on top‑tier universities would exclude many capable graduates from smaller colleges, and calls for inclusive training programmes that reach the broader student base.
EVIDENCE
She asks whether the new AI skill focus will be “largely on more elite top tier institutions and a large volume of students that were probably studying in tier 2, tier 3 colleges” and notes that “We are closing out opportunities for them” [117-118].
MAJOR DISCUSSION POINT
Inclusive upskilling across educational tiers
Argument 5
AI‑native talent may lack foundational coding and problem‑solving skills due to over‑reliance on AI tools
EXPLANATION
She raises the concern that a generation raised with AI assistance might never develop core technical competencies, potentially weakening the overall skill base.
EVIDENCE
She observes that “AI native talent is great, but that talent will have never learned to, you know, work without AI” and asks whether this leads to a “lack of some foundational skills” [145-148].
MAJOR DISCUSSION POINT
Risk of skill erosion in AI‑dependent workforce
Argument 6
Role redesign and the emergence of forward‑deployed engineers are key trends that organisations must adapt to
EXPLANATION
She highlights that AI is reshaping job structures, prompting a shift from traditional roles to new configurations such as forward‑deployed engineers, and asks how companies are responding.
EVIDENCE
She asks Kish about “role redesign” and the concept of “forward deployed engineers becoming like the new buzzword in town” [231-236].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Discussion of AI-driven transformation stresses role redesign as a central organizational response [S1].
MAJOR DISCUSSION POINT
Organisational role redesign in the AI era
Argument 7
AI education curricula must embed ethics, trust, security and privacy principles to ensure future professionals develop responsible AI practices.
EXPLANATION
She asks for recommendations on how universities can incorporate ethical foundations, trust, security and privacy into AI programmes, highlighting the necessity of responsible AI education for upcoming talent.
EVIDENCE
She explicitly requests guidance on embedding ethics, principles, trust, security and privacy into curricula when speaking with Ravi Aurora about education recommendations [204-209].
MAJOR DISCUSSION POINT
Embedding AI ethics and trust in education
Speaker
1 argument · 95 words per minute · 25 words · 15 seconds
Argument 1
Multi‑stakeholder collaboration is essential for AI workforce transformation
EXPLANATION
The opening remark brings together senior representatives from government policy, industry, and technology innovation, signalling that tackling AI’s impact on jobs and skills requires coordinated action across these sectors.
EVIDENCE
The speaker lists the President of Global Public Policy and Government Affairs at Mastercard, the Co-Founder and Managing Director of Nucleus Software, and the Director of Tech and Innovation at TechUK, demonstrating a deliberately cross-sectoral panel assembled to discuss AI and workforce issues [1].
MAJOR DISCUSSION POINT
Need for coordinated multi‑sector effort on AI and workforce change
AGREED WITH
Sangeeta Gupta, Sue Daley OBE
Agreements
Agreement Points
Role redesign and interdisciplinary skill requirement are essential to realize AI value
Speakers: Srikrishna Ramakarthikeyan, Ravi Aurora, Sue Daley OBE
Role redesign is critical to realize AI value; adoption gaps hinder impact
Emphasis on redesigning roles rather than only reskilling existing staff
Effective AI governance requires interdisciplinary skills and human oversight
All three speakers stress that without redesigning roles and embedding interdisciplinary expertise, organisations will not capture AI benefits; adoption gaps and the need for human oversight are highlighted as barriers to value creation [238-247][222-226][92-95].
POLICY CONTEXT (KNOWLEDGE BASE)
International policy discussions emphasize interdisciplinary approaches to AI governance, such as OECD and UNESCO initiatives [S50] and calls for a new interdisciplinary field to address AI complexity [S57]; India’s AI workforce strategy also highlights role redesign and cross-skill development [S59].
A continuous/lifelong learning mindset is required to keep pace with AI evolution
Speakers: Ravi Aurora, Sue Daley OBE
Continuous learning mindset to keep pace with evolving AI models
Lifelong learning plus human/social (‘soft’) skills are essential
Both speakers emphasise that workers must adopt a lifelong learning approach to stay relevant as AI systems evolve rapidly, with continuous upskilling being crucial for future readiness [67-70][98-104].
POLICY CONTEXT (KNOWLEDGE BASE)
This aligns with SDG 4 on quality education and skill development for the digital economy [S46]; the AI Impact Summit 2026 stresses lifelong learning and social-protection policies for workers adapting to AI [S48]; and expert panels call for continuous upskilling and critical thinking in AI curricula [S62].
Youth anxiety about AI should be transformed into agency through upskilling
Speakers: Sangeeta Gupta, Sue Daley OBE
AI creates both opportunities and anxiety among Indian youth, requiring clear understanding and navigation of AI‑driven workforce shifts
Youth anxiety can be turned into agency through upskilling and reskilling
Both acknowledge that AI generates anxiety among young workers and propose targeted upskilling programmes to convert that anxiety into proactive agency and career opportunities [10-12][98-104].
POLICY CONTEXT (KNOWLEDGE BASE)
Australian experts note heightened job-security anxiety due to rapid AI adoption and stress empowerment through skills development [S45]; child-focused AI literacy frameworks also advocate building agency via foundational AI skills [S58]; inclusive AI policy discussions similarly call for turning anxiety into participation [S43].
Coordinated whole‑of‑government and multi‑stakeholder collaboration is needed for AI workforce transformation
Speakers: Sangeeta Gupta, Sue Daley OBE, Speaker
India’s fragmented approach versus the UK’s integrated model
Whole‑of‑government, industry‑academia collaboration is needed for coordinated upskilling
Multi‑stakeholder collaboration is essential for AI workforce transformation
All three highlight that effective AI workforce transition requires an integrated approach linking government, industry and academia, moving beyond fragmented efforts to a unified strategy [108-109][89-96][1].
POLICY CONTEXT (KNOWLEDGE BASE)
The AI Impact Summit calls for coherent whole-of-government strategies and multi-stakeholder investment in skills [S48]; interdisciplinary policy work is being pursued across the EU, OECD, UNESCO and other bodies [S50]; and policy roadmaps emphasize cross-sector collaboration for AI empowerment [S51].
Interdisciplinary fluency across technical, regulatory and user‑behavior domains is vital for AI implementation
Speakers: Ravi Aurora, Sue Daley OBE
Interdisciplinary fluency across engineering, regulation, risk and user behavior
Effective AI governance requires interdisciplinary skills and human oversight
Both speakers argue that AI challenges span multiple domains and therefore require professionals who can operate across engineering, risk, regulation and user-experience, coupled with human oversight to ensure responsible deployment [64-66][92-95].
POLICY CONTEXT (KNOWLEDGE BASE)
International AI governance discussions highlight the need for interdisciplinary fluency to address technical, legal and societal dimensions [S57]; policy research roadmaps call for a new interdisciplinary field to manage AI’s system-of-systems nature [S57]; skill frameworks stress combining technical, regulatory and user-experience knowledge [S62].
AI literacy is a foundational skill comparable to language proficiency and must be embedded across all disciplines
Speakers: Srikrishna Ramakarthikeyan, Ravi Aurora
AI literacy should be treated as a foundational skill comparable to language proficiency, essential for all professionals
AI literacy must be foundational across all disciplines, not just computer science
Both assert that AI knowledge will become as essential as basic language skills, requiring its integration into curricula and professional development across all fields, not limited to computer-science majors [291-292][217-219].
POLICY CONTEXT (KNOWLEDGE BASE)
UN Security Council consensus underscores data and AI literacy as essential competencies for all professionals [S55]; educational initiatives for children stress AI literacy as a basic right and skill [S58]; and SDG 4 emphasizes AI-related literacy as part of quality education [S46].
Similar Viewpoints
Both stress that AI systems must retain human oversight, with professionals required to verify AI‑generated code and decisions to avoid over‑automation risks [191-200][175-178].
Speakers: Ravi Aurora, Sue Daley OBE
Effective AI governance requires interdisciplinary skills and human oversight
Human‑in‑the‑loop needed for code checking and decision validation to prevent over‑automation
Both highlight that AI policy and programme design must be adaptable and iterative, as static solutions quickly become outdated in a fast‑moving technological landscape [109-110][238-247].
Speakers: Ravi Aurora, Sue Daley OBE
AI initiatives need an iterative, flexible approach because static policies quickly become obsolete as technology evolves
“I don’t think there’s a silver bullet… the moment you put in a task force or initiative, it may very quickly need to shift and need to change”
Unexpected Consensus
Both UK and US‑influenced speakers stress the need for flexible, iterative policy approaches despite differing regulatory philosophies
Speakers: Ravi Aurora, Sue Daley OBE
AI initiatives need an iterative, flexible approach because static policies quickly become obsolete as technology evolves
“I don’t think there’s a silver bullet… the moment you put in a task force or initiative, it may very quickly need to shift and need to change”
While Ravi discusses AI governance from a corporate perspective and Sue from a policy standpoint, both converge on the unexpected consensus that AI initiatives must be designed with flexibility and continuous revision, a point not explicitly raised by other participants [109-110][238-247].
POLICY CONTEXT (KNOWLEDGE BASE)
Analyses note the gap between rapid technological change and slower policy cycles, urging flexible, iterative frameworks [S52][S53]; concerns about a regulatory “race to the bottom” further motivate adaptable policy design [S49]; UK scale-up hiring surveys illustrate cautious, adaptive workforce policies in response to AI [S44].
Overall Assessment

The panel shows strong convergence on several key themes: the necessity of role redesign and interdisciplinary skill sets; the centrality of lifelong learning; the importance of coordinated multi‑stakeholder action; the view of AI literacy as a basic skill; and the need to address youth anxiety through upskilling. These shared positions indicate a high level of consensus on how to manage AI‑driven workforce transformation.

Consensus across speakers is high, suggesting that policy makers, industry leaders and educators are aligned on the strategic priorities for AI workforce transition, which should facilitate coordinated action and accelerate effective implementation.

Differences
Different Viewpoints
AI‑native talent is an advantage versus a risk of eroding foundational skills
Speakers: Srikrishna Ramakarthikeyan, Sangeeta Gupta
Digital‑native talent holds a natural advantage in AI adoption (Srikrishna Ramakarthikeyan)
AI‑native talent may lack foundational coding and problem‑solving skills due to over‑reliance on AI tools (Sangeeta Gupta)
Srikrishna argues that younger, digitally native workers adapt faster to AI and give organisations a competitive edge, citing examples of Instagram-savvy hires and top-engineering-school graduates [141-142]. Sangeeta counters that a generation raised with AI tools may never learn to work without them, potentially weakening core coding and problem-solving abilities [145-148].
POLICY CONTEXT (KNOWLEDGE BASE)
Debates on AI education balance the benefits of AI-native talent with preserving core foundational skills, as highlighted in child-focused AI literacy and SDG-aligned skill development frameworks [S58][S46].
AI will reduce coding costs to near zero versus the continued need for human code verification and governance
Speakers: Srikrishna Ramakarthikeyan, Sue Daley OBE
AI drives coding cost toward zero, enabling solutions previously infeasible (Srikrishna Ramakarthikeyan)
Shift from pure coding to AI governance, code verification and oversight; humans must still check AI‑generated code (Sue Daley OBE)
Srikrishna predicts that the cost of coding will become zero, allowing anyone to solve complex problems and making coding a solved problem [164-168]. Sue argues that even with AI-generated code, humans are required to review and validate the output to avoid errors, emphasizing a human-in-the-loop governance model [175-178].
POLICY CONTEXT (KNOWLEDGE BASE)
Policy analyses warn against overreliance on AI without human oversight, emphasizing risks of bias and the necessity of verification [S39]; the tension between technical efficiency and human governance is discussed in efficiency-oversight debates [S40]; autonomous-systems discussions also stress maintaining human control in critical decisions [S41].
Speed of AI adoption and its workforce impact
Speakers: Srikrishna Ramakarthikeyan, Ravi Aurora
Adoption will be gradual, yielding low single‑digit workforce impact per year (Srikrishna Ramakarthikeyan)
AI is already embedded in core decision‑making and requires rapid role redesign and interdisciplinary integration (Ravi Aurora)
Srikrishna estimates AI will affect only 1-2% of the workforce this year and 2-3% next year, citing slow adoption and organizational redesign constraints [238-245]. Ravi stresses that AI is now part of decision-making across finance, risk and governance, urging swift integration, role redesign and continuous learning to keep pace [51-53][222-226].
POLICY CONTEXT (KNOWLEDGE BASE)
Reports document rapid AI diffusion and its labour-market effects, from India’s projected workforce shifts [S42] to UK founders anticipating job cuts [S44] and Australian anxiety over job security [S45]; these illustrate the accelerating impact on workers.
Unexpected Differences
Optimism about zero‑cost coding versus necessity of human oversight
Speakers: Srikrishna Ramakarthikeyan, Sue Daley OBE
AI will make coding cost zero, turning code into a free resource (Srikrishna Ramakarthikeyan)
Even with AI‑generated code, humans must verify and govern it to prevent errors (Sue Daley OBE)
Srikrishna’s vision of coding becoming a cost-free, democratized activity [164-168] clashes with Sue’s insistence that a human-in-the-loop remains essential for code validation and governance [175-178], revealing an unexpected tension between the promise of full automation and practical governance needs.
POLICY CONTEXT (KNOWLEDGE BASE)
Reflects the same policy concerns about overdependence on generative AI and the need for human governance noted in overreliance studies [S39] and the efficiency-oversight debate [S40].
Speed of AI impact versus perceived immediate strategic urgency
Speakers: Srikrishna Ramakarthikeyan, Ravi Aurora
AI adoption will be slow, yielding modest workforce impact (Srikrishna Ramakarthikeyan)
AI is already embedded in core decision‑making and requires rapid organizational change (Ravi Aurora)
While Srikrishna forecasts a gradual rollout with low-single-digit impact [238-245], Ravi portrays AI as already central to decision-making and urges swift role redesign and continuous learning [51-53][222-226]. The contrast between a measured rollout and an urgent transformation agenda was not anticipated given the shared focus on AI.
POLICY CONTEXT (KNOWLEDGE BASE)
Highlights the documented mismatch between fast-moving AI technologies and slower policy responses, a theme in AI governance literature calling for urgent yet thoughtful action [S52][S53].
Overall Assessment

The panel largely agrees on the need for upskilling, interdisciplinary skills, and inclusive AI policies. The main points of contention revolve around the implications of AI‑driven automation: whether AI will render coding essentially free and eliminate the need for human coders, versus the necessity of human oversight; and whether AI‑native talent represents a strategic advantage or a risk of eroding core technical foundations. A secondary tension exists over the expected speed of AI adoption, with some participants forecasting a gradual impact and others urging rapid transformation.

Disagreement is moderate: the divergences focus on future expectations and implementation details rather than fundamental goals, suggesting that consensus on overarching objectives (upskilling, inclusion, interdisciplinary collaboration) remains strong, but policy and practice pathways will require careful negotiation to balance optimism about automation with safeguards for skill integrity and governance.

Partial Agreements
All three agree that upskilling is essential, but differ on the primary mechanism: Ravi pushes for direct industry‑academia collaboration and role redesign [215-222]; Sue highlights a government‑led AI Skills Partnership with conversion courses and a national taxonomy for portable credentials [95-107][110-118]; Sangeeta calls for broader curriculum reform at school and college levels [73-74].
Speakers: Ravi Aurora, Sue Daley OBE, Sangeeta Gupta
Need for upskilling and reskilling the workforce for AI (Ravi Aurora, Sue Daley OBE, Sangeeta Gupta)
Industry‑academia co‑design of curricula and embedding AI governance early (Ravi Aurora)
National AI Skills Partnership targeting >1 million learners and credential interoperability (Sue Daley OBE)
Revamping school and college curricula to embed AI principles (Sangeeta Gupta)
All concur on the goal of inclusive AI, yet propose different pathways: Ravi warns of concentration risk and calls for equitable infrastructure across geographies and tier‑2/3 institutions [326-339]; Srikrishna stresses designing inclusiveness from the start, citing the internet’s open academic origins [365-373]; Sue focuses on a coordinated whole‑of‑government partnership and skill‑credential interoperability [89-96][109-110].
Speakers: Ravi Aurora, Srikrishna Ramakarthikeyan, Sue Daley OBE
Inclusive AI ecosystem is required (Ravi Aurora, Srikrishna Ramakarthikeyan, Sue Daley OBE)
Concentration risk must be mitigated through equitable access to data, compute and training (Ravi Aurora)
Inclusiveness should be built by design, leveraging free academic resources (Srikrishna Ramakarthikeyan)
Whole‑of‑government, industry‑academia partnership needed for coordinated upskilling (Sue Daley OBE)
Takeaways
Key takeaways
AI is reshaping software engineering more than testing or infrastructure, driving coding costs toward zero and enabling previously infeasible solutions.
Adoption of AI in the workforce will be gradual, likely resulting in low single‑digit percentage impacts on employment each year.
New skill requirements include system‑level judgment, interdisciplinary fluency (engineering, regulation, risk, user behavior), a continuous‑learning mindset, deep contextual awareness, and strong human/social (soft) skills.
AI governance must be embedded early in product design, with interdisciplinary teams and human‑in‑the‑loop oversight to manage model drift, bias, and high‑stakes decisions.
Role redesign (e.g., smaller agile squads, faster delivery cycles) is essential; merely reskilling staff is insufficient.
Education systems need to integrate AI literacy across all disciplines, co‑design curricula with industry, and provide pathways for conversion courses and lifelong learning.
A coordinated, whole‑of‑government and industry‑academia approach (as exemplified by the UK AI Skills Partnership) is critical for scaling upskilling and ensuring inclusivity.
Risks include concentration of talent and resources, over‑automation without human oversight, exclusion of non‑English or informal‑sector workers, and fragmented policy approaches (especially in India).
Inclusiveness must be built into AI systems by design, leveraging free academic resources and equitable access to data and compute.
Resolutions and action items
Commitment from panelists to promote interdisciplinary collaboration between industry, academia, and government for AI curriculum design (Ravi Aurora).
Recommendation to develop a national taxonomy of AI‑related skills and ensure interoperability of skill credentials (Sue Daley OBE).
Suggestion for corporations to involve frontline engineers in co‑creating training programs and real‑world use‑case curricula with universities (Ravi Aurora).
Call for governments to invest in data and compute infrastructure (AI growth zones, national data library) to support widespread AI adoption (Sue Daley OBE).
Encouragement for organizations to shift from mandatory training mandates to self‑directed learning, leveraging the natural motivation of digital‑native employees (Srikrishna Ramakarthikeyan).
Proposal to focus on role redesign rather than only reskilling, reducing squad sizes and accelerating delivery cycles (Srikrishna Ramakarthikeyan).
Unresolved issues
Exact magnitude and timeline of job displacement versus job creation across different sectors remain uncertain.
How to ensure that young professionals acquire deep foundational coding and system knowledge while relying heavily on AI tools.
Specific mechanisms for coordinating fragmented Indian state and central initiatives into a unified AI workforce strategy.
Details on how to provide contextual AI training for multilingual and informal‑sector workers at scale.
Methods to monitor and mitigate concentration risk where a few institutions dominate access to data, compute, and talent.
Concrete metrics or benchmarks for measuring the effectiveness of upskilling programs and AI governance frameworks.
Suggested compromises
Balancing automation of routine tasks with investment in human‑centric skills (soft skills, governance) to retain meaningful work (Sue Daley OBE).
Adopting an iterative, flexible policy approach rather than a single, fixed solution, allowing rapid adaptation as technology evolves (Sue Daley OBE).
Combining top‑tier talent pipelines with broader inclusion of tier‑2/3 institutions to avoid concentration while still leveraging elite expertise (Ravi Aurora, Srikrishna Ramakarthikeyan).
Encouraging voluntary upskilling rather than mandatory mandates, trusting employee motivation while still providing resources (Srikrishna Ramakarthikeyan).
Thought Provoking Comments
Software engineering is now the most disrupted area, even more than testing or infrastructure management.
He challenged the common assumption that testing would be the first casualty of AI, highlighting a rapid shift in where AI impact is felt.
Prompted the moderator to ask about implications for fresh graduates and led the panel to focus on upskilling software engineers rather than just QA staff.
Speaker: Srikrishna Ramakarthikeyan
The real value of AI is not in reducing headcount but in solving problems we couldn’t solve before.
This reframed AI from a threat of job loss to an opportunity for new problem‑solving capabilities.
Shifted the tone from anxiety to opportunity, encouraging other speakers (Ravi and Sue) to discuss skill sets needed to leverage AI rather than merely defending jobs.
Speaker: Srikrishna Ramakarthikeyan
We need system‑level judgment, interdisciplinary fluency, a continuous learning mindset, and deep contextual awareness to work with AI in high‑stakes, regulated environments.
He introduced a nuanced skill taxonomy that goes beyond technical coding ability, emphasizing judgment and context.
Guided the discussion toward the importance of governance, risk, and domain knowledge, influencing Sue’s remarks on AI governance and the need for interdisciplinary training.
Speaker: Ravi Aurora
Turn anxiety into agency – empower people to take the lead in upskilling, with initiatives like the UK AI Skills Partnership aiming to train over one million people.
She offered a concrete, large‑scale response to workforce anxiety, linking policy to personal empowerment.
Created a turning point where the conversation moved from describing problems to presenting actionable government‑backed solutions, prompting Sangeeta to compare India’s fragmented approach with the UK’s coordinated effort.
Speaker: Sue Daley OBE
In the US the debate is about regulation; in the UK (and elsewhere) the focus is on inclusiveness – making AI work for everyone.
He highlighted a strategic difference in national AI policy approaches, introducing the theme of inclusivity versus regulation.
Sparked a deeper dialogue on policy design, leading Sue to stress the need for iterative, flexible frameworks and Sangeeta to question India’s disaggregated governance.
Speaker: Srikrishna Ramakarthikeyan
The cost of coding will become zero; AI will make code cheap, allowing us to solve problems that were previously too complex or expensive.
He projected a radical shift in the economics of software development, challenging the notion that coding skills will remain a premium commodity.
Prompted Sue to discuss the future role of humans in checking AI‑generated code and raised concerns about losing foundational coding knowledge, deepening the debate on future job functions.
Speaker: Srikrishna Ramakarthikeyan
Automation of junior roles removes the pathway through which people learn context; we must consider how to teach context if those roles disappear.
She raised a subtle but critical point about the hidden value of entry‑level positions for building domain knowledge.
Shifted the conversation toward the importance of preserving experiential learning, influencing Ravi’s later emphasis on role redesign rather than pure reskilling.
Speaker: Sue Daley OBE
AI transforms tasks within jobs rather than eliminating entire roles; we should focus on role redesign and building inclusive, distributed talent pipelines.
He reframed the narrative from job loss to task evolution, emphasizing redesign over reskilling.
Steered the panel toward concrete strategies for organizations and governments, leading Srikrishna to discuss adoption speed and the need for systematic rollout.
Speaker: Ravi Aurora
There is a concentration risk: a few institutions or companies could pull ahead due to better data, compute, and talent, leaving others behind.
He introduced a macro‑level risk that goes beyond individual skill gaps, highlighting systemic inequality in AI development.
Prompted Sangeeta to ask about inclusive policies and Sue to mention the need for interoperable skill credentials and national taxonomy, expanding the discussion to ecosystem‑wide solutions.
Speaker: Ravi Aurora
Inclusiveness has to be by design; academia should make AI education free and open, just as the internet became inclusive.
He offered a clear, actionable principle for making AI benefits broadly accessible, tying back to earlier points on policy and education.
Served as a closing rallying call, reinforcing the earlier themes of inclusive policy and education, and aligning with Sue’s emphasis on universal access to AI infrastructure.
Speaker: Srikrishna Ramakarthikeyan
Overall Assessment

The discussion was shaped by a series of pivot points where speakers moved the conversation from fear of displacement to concrete opportunities and systemic solutions. Early insights about which job families are most affected (software engineering) and the reframing of AI's value set the stage for deeper analysis of required skill sets. Ravi's articulation of system‑level judgment and interdisciplinary fluency, followed by Sue's policy‑level response (turning anxiety into agency), introduced a practical roadmap that shifted the tone from speculative to actionable. Srikrishna's contrast between regulatory focus and inclusivity, plus his bold claim that coding will become free, injected strategic and economic perspectives that broadened the debate. Concerns about loss of contextual learning and concentration risk added nuance, prompting calls for role redesign, inclusive education, and interoperable credentials. Collectively, these comments redirected the panel from describing disruption to proposing coordinated, inclusive, and interdisciplinary responses, highlighting the need for policy, industry, and academia to work together.

Follow-up Questions
Which specific services or functions within IT services are most likely to be impacted by AI, and how will that change over time?
Identifying the most vulnerable services helps firms prioritize reskilling and investment decisions.
Speaker: Srikrishna Ramakarthikeyan
How can we ensure that AI‑native talent retains foundational coding and problem‑solving skills despite heavy reliance on AI tools?
There is a risk that over‑dependence on AI erodes core technical competencies needed for future adaptability.
Speaker: Sangeeta Gupta
What concrete measures can turn AI‑related anxiety among workers into agency and proactive upskilling?
Converting fear into action is essential for a smooth workforce transition and for maintaining productivity.
Speaker: Sue Daley OBE
How can school and university curricula be better aligned with the AI revolution to provide relevant, interdisciplinary skills?
Curriculum alignment ensures graduates possess the skills demanded by AI‑driven industries.
Speaker: Sue Daley OBE
What effective models exist for role redesign in IT services, especially regarding the emergence of ‘forward‑deployed engineers’?
Understanding new role structures is critical for organizations to leverage AI while preserving employee value.
Speaker: Sangeeta Gupta
How can AI governance and interdisciplinary collaboration be embedded into higher‑education curricula across non‑technical disciplines?
Broad AI literacy beyond computer science is needed to create a workforce capable of responsible AI deployment.
Speaker: Ravi Aurora
What strategies can mitigate concentration risk where a few institutions or firms dominate AI talent, data, and compute resources?
Preventing concentration ensures equitable access to AI opportunities and avoids widening socioeconomic gaps.
Speaker: Ravi Aurora
How can a national skills‑credential interoperability framework be created to allow lifelong learning and mobility across sectors?
Interoperable credentials enable workers to upskill continuously and transition between roles and industries.
Speaker: Sue Daley OBE
What policies and investments are needed to provide equitable access to data, compute, and research infrastructure for SMEs and under‑served regions?
Access to foundational AI infrastructure is a prerequisite for widespread adoption and inclusive growth.
Speaker: Sue Daley OBE
How can AI systems be designed to avoid exclusion of informal workers, women entrepreneurs, and vernacular language users?
Ensuring AI works for diverse user groups prevents large‑scale exclusion and supports inclusive economic development.
Speaker: Ravi Aurora
What pathways exist for developers whose coding tasks are automated to transition into code‑review or governance roles, and how can organizations support this shift?
Providing clear career transition routes helps retain talent and maintains oversight of AI‑generated code.
Speaker: Sue Daley OBE
How can organizations preserve the contextual learning traditionally gained in junior roles when those roles are automated?
Contextual knowledge is vital for effective AI oversight and for making informed business decisions.
Speaker: Sue Daley OBE
What evidence‑based upskilling programs are most effective for mid‑career professionals transitioning to AI‑augmented roles?
Mid‑career reskilling is crucial to avoid large‑scale displacement and to retain experienced talent.
Speaker: Sue Daley OBE
How can AI skills partnerships be structured to remain iterative and flexible as technology evolves?
Flexibility ensures that initiatives stay relevant and can adapt to rapid AI advancements.
Speaker: Sue Daley OBE
What data is needed to accurately quantify AI‑driven workforce impact (e.g., displacement percentages) across industries?
Reliable metrics are essential for policymakers and businesses to design effective transition strategies.
Speaker: Srikrishna Ramakarthikeyan
How can AI be taught as a foundational literacy akin to English across all educational levels?
Embedding AI literacy early prepares the next generation for an AI‑centric economy.
Speaker: Srikrishna Ramakarthikeyan
What specific examples from the UK’s AI infrastructure and adoption initiatives could be adapted for India?
Learning from UK successes can accelerate India’s AI deployment while avoiding known pitfalls.
Speaker: Sangeeta Gupta
How can inclusiveness be built into AI development and education by design?
Design‑level inclusivity ensures that AI benefits are broadly shared and that barriers to entry are minimized.
Speaker: Srikrishna Ramakarthikeyan

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.