NextGen AI Skills Safety and Social Value – technical mastery aligned with ethical standards
20 Feb 2026 15:00h - 16:00h
Summary
The panel, chaired by Sh. Subodh Sachan, examined the widening AI talent gap in India and how the nation can build a next-generation AI ecosystem to sustain rapid industry transformation [3-5][8][9]. Participants agreed that AI is reshaping business and the workforce, requiring people not only to use AI tools but to coexist with an evolving AI ecosystem [5][6-7]. Dr. Sarabjot emphasized that future AI practitioners must be critical thinkers who question AI outputs, recognize its deficiencies, and be willing to take risks [39-46]. Dr. Devinder Singh added that next-gen AI talent should possess strong AI expertise, research ability, cross-sector experience, and awareness of regulatory frameworks [49-54]. Professor Jawar Singh highlighted the necessity of solid grounding in hardware and computer-science fundamentals to translate algorithms into efficient, secure implementations [57-60]. Professor Alok Pandey described the ideal AI professional as “T-shaped,” combining deep domain knowledge with fluency in AI techniques and skills in red-teaming and containment [63-66]. Kunal Gupta and Vikas Srivastava stressed that beyond technical mastery, ethical judgment and real-world problem-solving are essential components of next-gen AI talent [84-87][71-78]. Dr. Sarabjot further noted that assessing talent should focus on problem-solving ability, self-directed learning, and creativity rather than mere familiarity with libraries [103-110]. Both Kunal and Vikas pointed out that many candidates struggle to define problems correctly, a skill they consider half of any solution [205-208]. The panel identified a systemic lag in academic curricula, calling for de-bureaucratised, fast-moving curricula and greater autonomy for institutions, especially state technical colleges, to keep pace with AI advances [239-244][283-291]. In the telecom sector, Dr. Devinder explained that 6G will embed AI in every component, requiring engineers to master machine learning and adhere to emerging AI standards, with the government already publishing relevant guidelines [124-138][140-147]. Addressing algorithmic bias and robustness, Dr. Devinder outlined quantitative fairness and bias indices that can be used by developers, regulators, and deployers to ensure trustworthy AI systems [320-327]. The discussion concluded that closing the AI talent gap demands coordinated action among academia, industry, and policymakers to foster critical thinking, interdisciplinary expertise, ethical awareness, and agile education reforms, thereby enabling India to leverage AI as an infrastructure of intelligence [71-78][239-244][315].
Keypoints
Major discussion points
– The AI talent gap and the competencies needed for “next-gen” AI professionals – Panelists repeatedly stressed that future AI talent must go beyond tool-level knowledge. Critical thinking, the ability to question AI outputs, risk-taking, and a solid grounding in both algorithms and hardware are essential [39-46][57-60][63-66][83-86]. A “T-shaped” profile (deep domain expertise plus fluency in AI and red-team/containment skills) was highlighted as the ideal model [63-66]. Vikas added that technical mastery, ethical judgment and real-world problem-solving are the three pillars of next-gen talent [83-86].
– Curriculum reform and the need for faster, more flexible education pathways – Several speakers pointed out that current curricula are too slow and bureaucratic, especially in state technical institutions, and that universities must gain autonomy to create rapid, industry-aligned programs [252-256][279-282][283-293]. Alok called for “de-bureaucratised” curricula, more faculty training, and stronger industry-university MOUs to keep pace with AI’s velocity [237-250]. Sarabjot described a “passion-project” model that pairs students with industry mentors to fill gaps that formal curricula cannot [259-276].
– AI as foundational infrastructure that will reshape sectors, especially telecom and vernacular services – Kunal described next-gen AI as an “infrastructure of intelligence” that multiplies human reasoning and enables vernacular-language interfaces [71-78]. Devinder explained how 6G will embed AI in every network component, shifting from static planning to self-learning, edge-distributed decision-making [124-136]. Subodh linked this to the need for sector-specific skill sets and standards for AI-enabled operations [148-152].
– Skilling initiatives and ecosystem building – Subodh outlined the STPI “Skill-Up” programme, the creation of regional training hubs, and a partner network of 18 training organisations to deliver AI up-skilling at scale [9-12]. Vikas noted the use of AI-driven adaptive learning tools that assess individual skill gaps and recommend personalised pathways [315-316]. Kunal emphasized a data-driven “employability intelligence layer” that matches market-demanded jobs with candidate capabilities [212-219].
– Standards, bias, fairness and evaluation of AI systems and talent – The discussion moved to the importance of regulatory standards, bias indices, and fairness metrics for trustworthy AI deployment [320-327]. Devinder highlighted existing telecom AI standards and the need for engineers to follow them [142-147]. The panel agreed that evaluating talent must include problem-definition ability, ethical awareness, and the capacity to work within these standards [204-209][320-327].
Overall purpose / goal of the discussion
The session was convened to diagnose the current AI talent gap in India, explore what “next-gen” AI expertise should look like, and chart concrete actions (through curriculum reform, industry-academia partnerships, national skilling programmes, and standards development) to build a robust AI ecosystem that can drive economic transformation and inclusive societal impact.
Overall tone and its evolution
The conversation began with a forward-looking, optimistic tone, celebrating AI’s transformative potential and the launch of new skilling initiatives [3-5][9-12]. As the panel delved into specific challenges, the tone shifted to urgent and problem-focused, highlighting gaps in education, industry readiness, and regulatory frameworks [204-209][252-256][320-327]. Throughout, the discourse remained collaborative and respectful, with speakers building on each other’s points and repeatedly calling for joint action across government, academia, and industry.
Speakers
– Professor Dr. Jawar Singh – Role/Title: Professor, Indian Institute of Technology Patna; Founder, Kuturna Labs.
Areas of Expertise: AI algorithms, hardware implementation, neuromorphic/brain-inspired computing, AI product development, hardware security. [S1]
– Dr. Sarabjot Singh Anand – Role/Title: Co-founder & Chief Data Scientist, TATRAS; Co-founder, Sabath Foundation.
Areas of Expertise: Artificial intelligence, data science, talent development, social-impact AI solutions, AI education and mentorship. [S2]
– Vikas Srivastava – Role/Title: Chief Growth Strategist, Vincis IT Services Private Limited.
Areas of Expertise: Enterprise consulting, cloud workforce upskilling, AI talent reskilling, industry-focused AI training. [S3]
– Kunal Gupta – Role/Title: Managing Director, Mount Talent Consulting.
Areas of Expertise: Talent advisory, recruitment, AI-driven skill-gap analysis, job-search portal operations, industry-academia talent alignment. [S5]
– Professor Dr. Alok Pandey – Role/Title: Professor and Dean, UP Jindal University.
Areas of Expertise: Finance, governance, higher education, fintech, AI applications, curriculum development, academic-industry collaboration. [S7]
– Dr. Devinder Singh – Role/Title: Deputy Director General, Telecommunication Engineering Centre (TEC), Department of Telecommunications.
Areas of Expertise: Telecom standards formalisation, AI integration in telecommunications, 6G technology, AI governance and regulatory frameworks. [S9]
– Audience – Role/Title: General participants (e.g., Vikram Tripathi, village resident and aspiring panchayat candidate).
Areas of Expertise: Not specified.
– Sh. Subodh Sachan – Role/Title: Director, STPI Headquarters; Moderator of the session.
Areas of Expertise: AI ecosystem development, skilling initiatives, industry-government liaison, national AI policy implementation. [S14]
Additional speakers:
None identified beyond the listed speakers.
The session opened with Sh Subodh Sachan framing the discussion around a widening talent gap in India’s AI ecosystem. He argued that the present era is “the most exciting time in the industry because AI is transforming everything” – from business models to the workforce – and that success now depends on the ability to “co-exist with the whole AI ecosystem together” (see [3-5]). He highlighted his 27-year experience across industry and government and noted that “there is always a gap in opportunity” that must be addressed to sustain the transformative potential of AI (see [8]). To that end, he announced the STPI “Skill-Up” programme, which will soon launch multiple regional training hubs and currently partners with 18 training organisations across India, with plans to expand the network further (see [9-12]).
After a brief introduction of the panel, Sachan introduced the speakers: Professor Dr Alok Pandey (Dean, UP Jindal University) with three decades of experience in finance, governance and fintech; Professor Dr Jawar Singh (IIT Patna, founder of Kuturna Labs); Dr Devinder Singh (Deputy Director General, Department of Telecommunications) with expertise in standards; Dr Sarabjot Singh Anand (co-founder of TATRAS and Sabath Foundation); Vikas Srivastava (Chief Growth Strategist, Vincis IT Services) and Kunal Gupta (MD, Mount Talent Consulting) (see [13-31]).
The first substantive contribution came from Dr Sarabjot, who distinguished two camps in the AI workforce: those who generate the next wave of AI and those who use AI to become more efficient. He stressed that “critical thinkers … are more important than any technology as such” because “there is a great move towards outsourcing your thinking to AI” and warned against treating AI as an oracle, urging practitioners to recognise its deficiencies, question outputs and be willing to take risks (see [39-46]).
Dr Devinder Singh added a complementary perspective, asserting that next-gen AI talent must possess “strong expertise in AI” together with the ability to solve real-world problems, adapt to new technologies, conduct research across sectors and remain aware of regulatory frameworks governing AI (see [49-54]).
Professor Dr Jawar Singh broadened the technical scope by insisting that future AI professionals need a “solid grounding of hardware, solid grounding of computer science, or even the engineering domain” to map algorithms efficiently onto hardware and to ensure security, noting the stark energy gap between a typical NVIDIA processor (500-700 W) and the human brain (≈20 W) and calling for neuromorphic, brain-inspired computing as a research priority (see [57-60][153-164]).
Professor Dr Alok Pandey then presented a concise talent model: a “T-shaped” profile that combines deep domain specialisation, fluency in AI software and hardware, and the capability to perform red-team testing and containment of AI systems (see [63-67]). This model was echoed by Vikas Srivastava, who identified three pillars for next-gen talent – technical mastery, ethical judgement, and real-world problem-solving – and argued that professionals must know where AI fits and where it does not (see [83-87]).
Kunal Gupta described AI as “infrastructure of intelligence” that multiplies human reasoning, creativity and values, highlighted its potential to democratise access through vernacular-language interfaces, and drew a parallel with how TikTok expanded content creation beyond English-speaking users (see [71-78]). He also cited AI-enabled hydroponics as an example of how AI can create high-yield, pesticide-free agriculture without dependence on weather (see [220-227]).
When asked how fresh AI talent should be evaluated, Dr Sarabjot outlined a practical rubric centred on problem-solving ability, self-directed learning, curiosity and creativity, rather than mere familiarity with libraries. He illustrated this with TATRAS’s “passion-project” approach, where students work on real customer problems under mentorship from industry experts, thereby gaining domain insight that “doesn’t matter what technology you use” as long as the solution solves the problem (see [103-110][259-276]).
Vikas Srivastava argued that conventional classroom training, which focuses heavily on theory, must be supplemented with (i) applied problem-solving on real data sets and (ii) production-level exposure that moves models from notebooks to secure, scalable systems; he indicated that a third, as-yet-unspecified layer would also be needed (see [303-311]).
Both Kunal and Vikas highlighted the role of AI-driven tools in scaling upskilling. Kunal described an “employability intelligence layer” that uses AI to perform scientific gap analysis, match market-demanded jobs with candidate profiles and recommend personalised learning pathways, while Vikas noted that adaptive learning platforms now assess individual skill gaps and suggest targeted upskilling, thereby improving employability outcomes (see [212-224][315-316]).
Curriculum reform emerged as a recurrent theme. Sachan linked the discussion to the National Education Policy, noting that it already grants greater autonomy for faster curriculum evolution (see [252-256]). Alok called for “de-bureaucratised” curricula, more autonomy for Institutions of Eminence, and stronger industry-university MOUs to increase faculty capacity and keep pace with AI’s velocity (see [237-250]). Jawar added that centrally funded technical institutes (CFTIs) already enjoy the freedom to launch new courses without delay, suggesting that the bottleneck lies primarily in state-run institutions, which suffer from lengthy syllabus-revision cycles and limited multilingual support (see [279-293]).
The panel also examined sector-specific AI integration. Dr Devinder Singh explained that 6G will embed AI in every network component, shifting from static planning to self-learning, edge-distributed decision-making, and that engineers will need to master machine-learning and adhere to emerging AI standards, many of which have already been drafted by the telecom standards body (see [124-138][140-147]). This vision aligns with the broader view that AI is becoming foundational infrastructure, requiring efficient hardware, standards and a mindset of curiosity and creativity to harness its potential (see [90-94][170-176][153-164]).
Ethical considerations were foregrounded throughout. Alok warned that every AI product must undergo red-team testing and containment, even suggesting that a technology should be “killed” if it behaves undesirably, and he referred to Mustafa Suleyman’s book The Coming Wave, cautioning that without robust safety and security mechanisms AI could become a “wave that drowns us” (see [186-188]). Devinder introduced quantitative bias and fairness indices (0-1 scale) and robustness metrics that can be used by developers, regulators and deployers to ensure trustworthy AI, emphasising that different applications tolerate different levels of bias (see [320-328]). Subodh reinforced the need for standards on fairness and robustness, noting that such guidelines are already publicly available (see [166-168][332]).
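The panel did not define the 0-1 indices in detail, so as an illustrative sketch only: one widely used fairness index of exactly this shape is the demographic parity ratio, which compares positive-prediction rates across demographic groups. The function name and toy data below are hypothetical, not drawn from any standard named in the session.

```python
from collections import defaultdict

def demographic_parity_ratio(predictions, groups):
    """A common 0-1 fairness index: the ratio of the lowest to the
    highest positive-prediction rate across demographic groups.
    A value near 1 indicates similar treatment; near 0 indicates bias."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred)
    rates = [positives[g] / totals[g] for g in totals]
    return min(rates) / max(rates)

# A model that approves 80% of group A but only 40% of group B
preds  = [1, 1, 1, 1, 0] + [1, 1, 0, 0, 0]
groups = ["A"] * 5 + ["B"] * 5
print(demographic_parity_ratio(preds, groups))  # → 0.5
```

As Devinder noted, the acceptable value of such an index depends on the application: a regulator might demand a ratio close to 1 for credit scoring but tolerate lower values elsewhere.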
The discussion concluded with a set of agreed-upon actions. The STPI “Skill-Up” programme will roll out regional hubs and expand its partner ecosystem beyond the current 18 trainers (see [9-12]). Academia is urged to de-bureaucratise curricula, grant greater autonomy to institutions of eminence, and develop large-scale faculty development programmes and industry-university MOUs (see [237-250][283-292]). The panel advocated broader adoption of AI-driven assessment tools to personalise learning pathways and improve employability (see [215-224][315-316]). Finally, industry mentors will be paired with students on passion projects that address social impact, thereby bridging the gap between theoretical knowledge and practical problem-solving (see [259-276]).
Several issues remained unresolved. The audience asked for concrete AI tools suitable for district-panchayat governance and about the role of CSR funding, to which the panel did not provide a concrete answer (see [319]). Detailed road-maps for implementing AI standards in 6G, mechanisms for uniformly upgrading faculty across thousands of state institutions, and operational guidelines for continuous monitoring of bias, fairness and robustness indices were also left open (see [124-138][283-293][320-328]).
The discussion repeatedly emphasized four themes for closing India’s AI talent gap: (1) a T-shaped, interdisciplinary skill set that blends deep domain expertise, AI fluency, critical thinking, risk-taking and ethical judgement; (2) agile, autonomous curriculum reform supported by the NEP and de-bureaucratised processes; (3) sector-specific standards and hardware-aware training, especially for emerging 6G telecom networks; and (4) the deployment of AI-driven assessment and adaptive learning platforms at scale. These converging views underscore the need for coordinated action among government, academia and industry to transform AI from a set of tools into a national infrastructure of intelligence that can drive inclusive economic growth and social impact.
Where do we see a talent gap? What is the requirement in terms of growing this whole ecosystem? Because when we talk about today, this is the most exciting time in the industry because AI is transforming everything. AI is transforming the way businesses are being conducted. AI is transforming the whole workforce also, because it’s not about what you are able to do, but about how you co-exist with the whole AI ecosystem together. So my name is Subodh and I’m director of STPI headquarters. I’ve been part of the industry, I’ve been part of the government for almost 27 years. And being in the space of technology, working closely within the startup ecosystem and within academia, there’s always a gap in opportunity which we have witnessed.
And that’s why this particular topic today is very, very close to my heart in terms of how we ensure the industry moves forward, and how we ensure that AI as a technology can bring transformative changes overall. So I am happy that today’s discussion will align very closely with the national efforts. I am sure, when you talk about the overall IndiaAI theme, some of you have witnessed already that there is a lot of activity around skilling: there is already a 10 lakh AI skilling drive which has been initiated, and there is already a Skill India Digital program happening, a new version of Skill India altogether. Within STPI we have focused on a program called STPI Skill-Up, and I am happy to announce here that we are going to soon start multiple regional hubs for training, ensuring that training across technologies can happen. And we have been joined by a lot of our training partners; the current training partner ecosystem is around 18 training partners across India, and three of them are here with us today. As we move forward, we’ll add more such training partners and collaborators.
We are calling them partners and collaborators because the aim and the objective are all aligned within the ecosystem of skilling up, right? STPI Skill-Up becomes that particular program. Let me introduce our speakers; I’m not taking much time. So it’s my privilege to introduce my first speaker, Professor Dr. Alok Pandey, a professor and dean of UP Jindal University, a very senior academic leader with almost three decades of experience focused across finance, governance, higher education, and multiple implementations within the financial technology space. He also comes with a great perspective on AI. So let me request Professor Dr. Alok Pandey to come on stage and take the space. Please welcome Professor Pandey with a big round of applause.
A limited audience, but ensure that your applause covers the whole hall also. I’d also like to introduce and welcome Professor Dr. Jawar Singh. Professor Dr. Jawar Singh is a professor at the Indian Institute of Technology Patna, and he is also the founder of Kuturna Labs. As we were just chatting, he briefly told me about his successful exit, so he is not just a professor who teaches but one who also practices the same in the form of implementing his own ideas. We are literally, and I am sure, proud to have you, Dr. Jawar Singh; please welcome him on the dais. Let me also introduce Dr. Devinder Singh, Deputy Director General of TEC, under the Department of Telecommunications in India. Dr. Devinder Singh has spent multiple years in the standards formalization ecosystem. You understand that the telecom space especially is governed by standards, and these standards are very critical, because an interoperable ecosystem can only work if each and every device, each and every node, can be standardized and has to be standardized, right? So Dr. Devinder Singh represents the government from the Department of Telecommunications. So let me welcome, with a warm applause from the audience, Dr. Devinder Singh on the dais, please. I’m also honored to be joined by Dr. Sarabjot. Dr. Sarabjot Singh Anand is a co-founder and chief data scientist of TATRAS, and also the co-founder of the Sabath Foundation.
I have known, you know, Sarabjot Singh for almost, if I’m not wrong, seven or eight years now. And I’ve seen his passion in the space of AI. It’s not just about what he wants to achieve through TATRAS, but also his work in the space of growing AI talent, which is well recognized in some regions, especially the region of Punjab, right? So, Dr. Sarabjot, thank you for being here. I request and welcome you on the dais as a pioneer of data science. A big round of applause for him. He also has roots in academia at Warwick and Ulster, so he has a very global perspective in this particular space altogether.
Let me introduce our next two panelists on this agenda today. Vikas Srivastava is the Chief Growth Strategist of Vincis IT Services Private Limited. Vincis is one of our technology training collaborators and a partner of the STPI Skill-Up program. Vikas has almost 16-plus years in enterprise consulting and cloud workforce upskilling, and I think Vikas has a great perspective to share in terms of what the real reskilling requirement is today within the whole ecosystem of the AI workforce. So, with a big round of applause, please welcome Vikas Srivastava. Last but not the least, let us also give a warm welcome to Kunal Gupta, Managing Director of Mount Talent Consulting. You know, he has been doing talent advisory, he runs his own job-search portal, he works very closely with industry, and he has a clear perspective on what the industry requirement is and where the gap is. So, with a round of applause, Kunal, welcome on the dais as well. Thank you, everyone, and let me switch my place as well so it will be easier for us to start the whole discussion. So I think let me quickly start, and I will probably start from my immediate left, Dr. Sarabjot. You know, when we talk about next-gen AI as a space, next-gen AI as a whole, from the talent perspective, from the opportunity perspective, what is your perspective? Briefly, we’ll touch upon each one of you on defining next-gen AI so that the audience understands very clearly what next-gen AI really means. So over to you.
So to me, there are two camps here, right? One is the people who want to generate the next wave of AI, and then, of course, there are the ones that have to use AI to be more efficient in their jobs. Now, for both of them, I think what is very, very important is that they have to be critical thinkers more than any technology as such, because there is, you know, a great move towards outsourcing your thinking to AI, and that’s a problem. We need to recognize that AI is not perfect. We need to recognize that there are certain deficiencies in it, and therefore we have to question what we get from that AI. And if we can get people who can critically think about the problem they are trying to solve and then take risks, I think risk-taking is going to be another very, very important aspect, along with having a foundational understanding of what is possible today with AI and what is not possible today with AI.
Because if we don’t recognize the deficiencies and start to regard AI as an oracle that always tells us the truth, we are going to get into trouble. So these are very, very important aspects apart from of course technology. Thank you.
Dr. Devinder Singh, your perspective on next-gen AI technology, in very brief.
Hello. For next-gen AI, I feel he should have strong expertise in AI, and he should have the skills to solve real-world problems also. And he should adapt to new technologies also. He should be able to work in research. He should be able to work in different sectors also. And above all, I feel he should be aware of the regulations, in the sector and in AI also. Thank you.
Thank you. Yes.
Yeah, hello. So to me, actually, the next-gen AI talent should not only be aware of the AI algorithms, but should basically be able to make customer-facing products or solutions. And they should understand not only the algorithms, but the way those algorithms are mapped onto the hardware. To me, a solid grounding of hardware, a solid grounding of computer science, or even the engineering domain, is a must, actually. Thank you.
Yes. Professor Alok, sorry for my mistake in pronouncing your name wrong. Yes.
Thank you. I think the next-gen AI is largely a T-shaped thing. You need to be domain specialists, deep domain specialists. You need to be fluent in AI skills, whatever software, hardware, etc. you are looking at. And then you should be able to understand red teaming and containment. So, if you have these three, then probably we will be able to solve most of the problems we face in India today.
Please, Kunal.
I think your question is very important. What do I understand, or what do we understand, by next-gen AI? You know, next-gen AI is infrastructure. It’s infrastructure for intelligence, like the infrastructure we currently have wherein we are able to express our views and they go out to the world. The next generation of AI is like this infrastructure, meant to multiply our intelligence, our reasoning, our research, our values, our creativity, our judgments, and what the future holds for us. You know, we are going to see a new wave of new materials; for a very long time we haven’t seen any major materials coming, apart from the basic alloys that we have been using. There are process changes which are going to come about in the next generation with the use of next-generation AI and the generation of models. You know, we talk about many kinds of differentiation in society, from the digital divide to this new AI divide, but it could at the same time help us reach an inclusive society in general with vernacular languages, multiplying and extrapolating the reach of what a normal common man can do. Earlier they were dependent on languages like English, but with the expansion of next-gen AI platforms and tools come local vernacular languages, wherein you can speak and give instructions to the computer in Hindi, in your local languages, and get access to data and knowledge.
Like I said, you know, you could just build anything. We have seen this with a tool called TikTok, you know, a tool which started about 10-12 years back. And it created a wave of influencers; a platform otherwise meant only for the English-speaking and the literate went on to the masses. So I think next-gen AI, like I said, is in one word an infrastructure of intelligence, multiplying our ability to think and, you know, make judgments in the future as well. Thank you.
Very well said. It is the infrastructure-level intelligence which can be, which has to be, created and which defines the next-gen AI. And carrying forward the same thought, I’ll ask Vikas to share his opening remarks on the next-gen AI.
Thank you. So I think most of the important aspects have been covered by the panel. What I wanted to add is that, for me, the next-gen talent combines three important things. First is technical mastery. Second is ethical judgment. And third is real-world problem-solving capabilities. So we need people who understand, as I said, where AI fits and where AI doesn’t, right? So I think this is the most important thing which I wanted to add. Thank you.
I think for the audience, it is important to understand that when we talk about next-gen AI, next-gen AI talent, and the next-gen AI talent gap, we got a clear perspective: right from critical thinking, going to the level of not just opening up the layers of the AI, but thinking about the new ways and new layers in which the AI technology is having an impact. Whether it is the infrastructure intelligence which we talked about, or the foundational knowledge and foundational algorithms which we talked about.
The next-gen AI talent gaps exist everywhere. And accordingly, you know, I think I will ask Sarabjot ji now to talk about something specific. You know, from your perspective, both at TATRAS and the Sabath Foundation, you have seen the whole AI evolution, you have seen the gaps which have been there, and you have tried to fill those gaps already. So my question to you is about the evaluation of fresh AI talent. What is your approach? Because that approach will lead us in terms of ensuring how this whole space will grow, right? So, your opening remarks on that here.
Sure, thank you. So, you know, when we look at talent today, what we assess is their problem-solving skills. We look at how keen they are on learning themselves: have they taken control of their own agency in learning for the future? Because what’s happening today, and I’ve seen this over the years, right, is that a lot of students, because they want to get a job, are focused on learning libraries. You know, even in 2018, when we started the Sabath Foundation because we found there was a huge gap in AI skilling here in India, we found that until we got them to program a neural network, they felt they weren’t doing anything, right? And now, of course, it’s LLMs; everybody wants to learn LangChain, and that’s about it. But they have to understand the foundations. If the foundations are weak, we are going to do interesting things but not amazing things. And so the focus has to be on building a strong foundation, increasing their curiosity in terms of what they are doing, and getting them to think about how they can be creative in the solutions that they are engineering for their customers.
Now, in TATRAS, we work with startups in the US and develop their AI for them. Now, to do that, somebody mentioned domain being very important, and what we are constantly training our folks to do is understand the problem from the customer’s perspective. Right? It’s not just about algorithms. When you create a solution, a successful solution is going to be one that solves the problem; it doesn’t matter what technology you use. And that is a key differentiation between the training that we provide and what is available otherwise in terms of just skilling on libraries. Right.
Thank you. And I think, for all the people sitting out here, the most important part, as Sarabjotji said, and any one of you can add whenever you feel like, is that curiosity is one part. Right? Because curiosity adds that element of learning to our human mind, and where there is curiosity, there comes creativity. Once you have this curiosity combined with creativity, only then can you understand the customer's problem and the customer's ecosystem. And it's not just about the customer ecosystem from the perspective where you make money. When you talk about social impact, even the people who are benefiting from the technology might not be directly paying you, but you are creating a great amount of social value out there, so it becomes important from their perspective. And when we combine these three and map them onto AI, which is such a powerful technology right now, I think the solutions you see outside are just a few examples of what wonders can really be created when you bring these three elements together, right? So, in similar lines, I will ask Dr.
Devinder Singhji, because he comes from the whole telecom space, right? Today we talk about AI-native telecom infrastructure. When you talk about AI-native, networks need to be not just AI-ready; they also need to bring AI into their own operations. When I say AI readiness, it's all about the scale, the kind of compute, the kind of technology, the kind of infrastructure they need to create. But how do they approach it from a standards perspective, when you see it? Because you are looking into the future; you are looking at 6G as a standard. What is the role of AI in terms of standard creation, and what is its role in terms of technology when standards are getting defined?
So from that angle, your thoughts on the same.
The present telecom engineers are very strong in networking. But the future network, the 6G that would be coming, will be more dependent on AI. In the present technology, 5G, AI is an add-on. But in 6G, each and every component has AI built into it. At present, planning is done in a static way: components are selected and then the effect is seen. But 6G will be a self-learning type of thing, so the engineers will be required to know machine learning. And in present cases, whenever there is some fault, an alarm is generated and an engineer is supposed to take corrective action. But in 6G, AI will predict what kind of fault can come and will take corrective action on its own.
And at present, most of the decisions are taken at the central level only. But in 6G, the intelligence will be distributed at the edge also, so decisions will be taken at a distributed level, and the engineer must be able to plan everything considering that distributed decisions will be taken. As far as the standards are concerned, standards for 6G are being finalized. They are not final, but it is already decided that each and every component will have AI. In addition to that, you were talking about the standards: in TEC, the Telecom Engineering Centre, we have already published some standards on AI. So the telecom engineers should also be aware of the standards they are supposed to follow for implementing AI.
At present, the telecom engineers are using AI, but in future, telecom engineers will design, operate and use it. Most of the decisions will be taken by AI; the human will only supervise. Thank you.
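The shift Dr. Devinder describes, from an engineer reacting to an alarm to a network element that predicts a likely fault and corrects it on its own, can be sketched as a toy loop. Everything here is hypothetical for illustration: the telemetry signal, the threshold, and the corrective action are not from any 6G standard.

```python
# Toy sketch of predictive, self-healing fault handling: the element
# predicts a fault from recent telemetry and acts; the engineer only
# supervises. Signal names and thresholds are hypothetical.
from statistics import mean

def predict_fault(link_error_rates, threshold=0.05):
    """Flag a likely fault when the recent average error rate trends high."""
    return mean(link_error_rates) > threshold

def corrective_action(cell_id):
    """Placeholder for a self-healing step, e.g. rerouting traffic."""
    return f"rerouted traffic away from cell {cell_id}"

# Recent error-rate samples from one cell (hypothetical telemetry).
samples = [0.01, 0.03, 0.06, 0.09, 0.12]

if predict_fault(samples):
    action = corrective_action(cell_id=42)
    print(action)  # the human supervisor sees the outcome, not the alarm
```

In a real deployment the prediction would come from a trained model over rich telemetry, and the action from a policy engine, but the supervisory pattern is the same.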
Thank you. I think when we talk about the telecom space, there are two or three critical things which are probably of interest to the audience. As Devinderji talked about, we talk about agentic AI, and agentic AI taking care of the operations part. From a skill perspective, it is important that when you look into the agentic AI ecosystem, you need to go deeper into the particular technology or particular sector, because each industry brings its own new challenges and problems, even for the agentic AI ecosystem, right? And when you talk about infrastructure readiness, the telecom sector is one such sector, and I think I'll come back to you on the perspective of how the telecom sector is creating robustness.
Right. Next, I think I'll touch upon, again, Professor Jawar: when you look into the layer of the hardware and below, right?
Because when we talk about AI, and we talk about the six layers as has been spoken about across the spectrum, the most promising and most important layer is not just applications, but also the hardware, which is powering up the whole AI in terms of the need and speed.
All right. So, I mean, this is quite interesting, because very rarely do people talk about how those algorithms run, how those AI models actually run. So honestly, these models are very expensive, expensive not in terms of cost but in terms of power, I will say. And cost is obviously associated with it. If I take a simple example: the power requirement of a very basic NVIDIA processor is around 500 to 700 watts. But our human brain also has a very beautiful processor, I can say; it can compute a lot, and it consumes just 20 watts of power.
So you can see there is a huge gap between the processing capabilities of the processors that we have and the cognitive processing that we all have; there is a lot of gap, actually. So the gap needs to be bridged, and there is a lot of research going on in this domain, what we usually call neuromorphic computing or brain-inspired computing, where these algorithms can be mapped to the hardware in an efficient manner. Another example I can give you is DeepSeek. When it first popped up, or surfaced in the market, NVIDIA's stock slumped quite severely, and the reason was that their model was quite efficient; that was the only thing. So people thought, okay, we may do the same thing in a more efficient way. So we need people who not only think from the algorithm perspective but also think from the hardware perspective. Hardware security also; I will add one more term here, hardware security, because AI can be weaponized and can also be used for neutralization purposes.
So the hardware plays a very crucial role. Algorithms are okay, but we need people who understand not just the algorithm but all the way down to the hardware implementation: how your implementation is secure, trusted and reliable. So I hope... Thank you. Thank you.
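The power figures the professor quotes, 500 to 700 watts for a basic NVIDIA processor against roughly 20 watts for the human brain, make the efficiency gap concrete with simple arithmetic. The figures are the speaker's; the ratio below is just a back-of-the-envelope check.

```python
# Back-of-the-envelope ratio of GPU power draw to the brain's power
# budget, using the figures quoted in the discussion above.
BRAIN_WATTS = 20
GPU_WATTS_LOW, GPU_WATTS_HIGH = 500, 700

ratio_low = GPU_WATTS_LOW / BRAIN_WATTS    # 25x
ratio_high = GPU_WATTS_HIGH / BRAIN_WATTS  # 35x

print(f"GPU draws {ratio_low:.0f}-{ratio_high:.0f}x the brain's power budget")
```

A 25x to 35x gap per device, before multiplying by the thousands of GPUs in a training cluster, is the motivation the speaker gives for neuromorphic and brain-inspired computing research.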
Thank you. You touched upon the element of security, and I think when AI comes into play, it is not just the security elements of the algorithms; the other important things that have popped up are the bias, robustness and fairness of the algorithms. So I am sure some of you will talk about that as a gap from a talent perspective: how do we skill and reskill people who can fill these gaps across the layers, so that more of them come into the ecosystem? With that, I will ask Dr. Alok. When you look from a university and academia perspective, today we talk about population-scale AI implementation, and for that, the whole critical thinking, the thinking part, also needs to change. Hence academia needs to be geared up to create that kind of curiosity and learning from the perspective of students. So what is your take on the gap you see between industry and academia?
How are you gearing up your students from the perspective of AI at scale, from that particular element?
We have large contracts. Say I have to do an M&A valuation, an M&A due diligence, and the Competition Commission has asked me whether I should go for this merger or acquisition or not. I am a lawyer; how do I do it? I can use generative AI software for that. I can do money-laundering prevention, not just spam prevention (like the very effective spam prevention by Airtel and others); I can do money-laundering calculations and identify which transactions work in which manner through generative AI. So we need to develop products along these lines. The second thing I would say is the safety and security of these products: how are we going to look at safe usage?
Now, there's a term which has come up, called the coming wave. Mustafa Suleyman has written a book, The Coming Wave, and everybody uses the phrase: this is the coming wave, and the wave is going to drown all of us if safety and security are not there. Every young person who uses AI needs to understand what red teaming is and what containment is. I should kill my technology if it doesn't work in my favor. Right. And finally, domain integration: AI healthcare, AI law, AI education, AI finance. All these levels basically need to be understood by educational institutions. If you ask me another question, how do we scale it up, then I'll of course speak on that later.
But I'll tell you that we need to really work out an infrastructure. We need to work on academic strength. We need to have a large number of trained faculty members. We need to have MOUs with Western countries; the major companies are based in either China or Europe or America, and the universities there are generating a lot of trained resources. Indian universities need to move forward in that direction. So I basically feel that yes, there is a huge gap today, and we need to really answer these gaps through viable funding not just from government, but also from industry.
I tend to partly agree. The length and the breadth of the AI ecosystem have changed dramatically everywhere. But when we look into the Indian talent, and I strongly believe this because I have been in this industry for very long now, with the kind of energy we are seeing in this particular arena hall, and probably the conferences happening on the other side, there is a huge talent pool which has popped up now, and they are generating very good solutions. Today, from a solution-producer perspective, India is not just doing something at the application layer or the agentic layer; it is also looking into the foundational layer, and that's why we saw the launch of the recent LLMs as well. When we look at, say, the launch of the Sarvam LLM or other LLMs, it's very clear people are seeing that there is a lot of data available in our country, and this data needs to be understood. As you talked about, if you take just one particular sector, law and justice, there is one company here, Lex Leges, and I was interacting with them yesterday. They have understood this problem and created their model for this particular domain, not a large language model as such, but they have approached the problem with the same LLM mentality. Hence they have been able to address exactly what you were describing as a problem: how do you create solutions for that? It works on Indian data, on Indian contract law, on Indian past judgments. So that is the need of the hour. Whether you are an entrepreneur with an entrepreneurial mindset, from the perspective of the audience sitting here, or somebody who wants to get into this as part of the workforce, you need to clearly have an idea about each and every domain. Whether it is health, as you talked about, or law and justice, each has its own set of challenges and problems, right? And where there are challenges and problems,
with the right skill and right talent, you can actually approach them and be very successful. We see this as a leapfrog moment for each one of you, from the industry perspective also. So, taking that thought forward, I'll ask Kunal. Kunal, you have been talking about skill gaps, especially working with students and working professionals. From your platform's perspective, from whatever you are seeing on your own job portal and in job placements, what is your take on the most commonly required abilities, which should probably be seen in each one of them? In the short term, what are the skills they need to fill in? Whether it is learning to coexist with the LLMs out there, learning to do the coding, or, as Professor Alok talked about with AGI, creating new machine learning algorithms, what do you see as the typical problem in the short term that talent has to be ready for?
I see the problem as threefold when it comes to the skill gap, specifically in a dynamic country like India, wherein we are living across many generations: a generation which is far ahead into the future, and a generation which is far behind in terms of development, capability and education as well. The biggest skill gap that I see right now is application, and more importantly, how we define a problem. Out of whatever ecosystem we have built, we have this mentality of just copying others: this is the trend, so we need to go for this trend, without really understanding how to define the problem first.
Defining the problem is about 50% of the solution achieved in itself, in any sphere the person is in. As Dr. Saab said about different usage in different fields, whether it is healthcare, whether it is law, whether it is agriculture for that matter, which caters to such a huge population in our country. Who would have thought of hydroponics producing such huge results without soil, with no dependency on weather, where you can create your own environment for absolutely green vegetation in the best of atmospheres, without germs and without the application of pesticides? So, coming back to your question, the skill gap, again, is going to be defined sector-specific.
Different sectors are going to have different specific gaps at different application levels. When it comes to industry, again, what is the solution that we are providing as a company? Our aim is to develop an employability intelligence layer. How do we define the skill gap? On the basis of what kind of jobs are coming from the market and, given those jobs, what the current skill set of the candidates is, we have a scientific gap analysis of what is missing. It's not just that we have a very nice applicant tracking system; we run a recommendation algorithm using a lot of AI. The aim, in my view, is not to exclude or reject people using AI when it comes to skill-gap analysis. The aim is to show them: this is what is missing, this is what has to be developed. It is not rocket science that can't be developed; you take a course of one month, three months, six months, or you do it while working in another job role on the way to your ambitious job role. It takes time; nothing is built in a day. But more importantly, the bigger gap I see right now, which is going to come as a huge pressure on educational setups, whether at a university level or at a school level, is this: any which way, we keep talking about the fact that India's syllabus is not aligned to the industry. It is about 20 years old; we don't update our syllabuses; it takes six committees five years, seven years, to come up with new curriculums.
By the time the new curriculum is implemented, it has already gone obsolete from an industry perspective. I think the speed of growth of AI that we have seen in the last six months is going to put maximum pressure on the policymakers of the country, specifically those catering to core foundational education, higher skilling education, and, more importantly, the industry skilling that is needed to ensure that people understand why productivity is needed and how productivity is achieved. Students need to understand that industry needs output. We need production; we need results. Industry cannot always bridge the gap. And in India, I'll have to say it, whether it is an MSME or a large industry, everybody has done their bit in terms of scaling up those whom they select.
And, you know, a lot of the success that we see in conferences like these is people who have grown through the industry, and how industry has scaled people up. Correct. Colleges will need to ensure that AI education is for all, application of AI for all. Output will increase. Output will lead to more analysis of how to improve production. Production leads to more research; research is going to lead to more efficiency in production. And it's a loop. Currently, application is going to lead to higher output, in my view: higher output in terms of what an engineer can do in eight hours of work.
What a company can do in terms of per-year revenues; what models can do, what processes can do. And based on that, it's a regular running cycle. We can't sit in a relaxed manner right now, specifically in this changing world.
I'll just add to what Kunal has said. We need to de-bureaucratize education today to a great extent. In fact, we brought in this concept of the institution of eminence, and I'm happy that I'm part of an institution of eminence, where we can create our own curriculum. You know, curriculum velocity is so high that you can't give a command to faculty members to teach a particular course, especially in technology, and especially when you're talking about integrating with a particular domain, where the faculty has to work with other specialists, identify something, and the needs change frequently. And you know what has happened? It's not just that AI technologies have changed; the consumer and the user have started demanding change.
For example, if you look at crop insurance, the crop insurance idea basically means that I should have satellite pictures and an understanding as to whether a crop failed or not, and this is done best using AI. And if I need to train my agriculture college students, who study in large agronomy institutions, I need quick delivery of the curriculum. Sadly, we don't have that; we don't have expertise in those areas. So if you de-bureaucratize the curriculum and allow more autonomy to institutions that are into technology, at least, or technology applications, we'll have a much bigger national good at hand.
And I think the start has already happened with the NEP, if I'm not wrong, right? The whole focus of the National Education Policy and the initiatives around it has been on giving more autonomy and speed towards defining the curriculum. So I tend to see this as a problem which was there, and a lot of work has already happened on it now. I think, when you were at a global level, from your Warwick experience, you would have seen these changes there. Do you see this coming to India at a similar speed?
I don't, unfortunately, right? So at Warwick, we actually have the Jaguar Land Rover research labs on campus, and we were interacting with them. Even 14 years ago, we were looking at tracking the cognitive load on a driver as they drove a vehicle, to understand whether we needed to take some preventative action before they caused an accident, right? Now, of course, we are saying we don't need a driver, so times are changing very quickly. The one thing, when we started Sabud, was that we realized the curriculum is falling short. Academics are not equipped to deal with the change that's happening. Even HR folks, when we look at it from an industry perspective, are not evolving quickly enough to evaluate candidates the right way, right?
So what we did in Sabud was, we said the centerpiece of our training is what we call a passion project, where we get students, we train them in AI, machine learning and technology, but we get them to think about how to solve a problem of social impact. Right? And then we give them mentors, from Tatras and from other organizations that are actually creating AI solutions for the global north, as they say. So now the students are getting mentorship. And the key thing we are missing today, which is shocking for a country our size: we have companies with lots of technologists that have no choice but to keep up with technology innovation.
At the same time, these people have to be trained to give back more. If we can get every person to be evaluated or valued based on how much they give back to others, then we can pair students with mentors in industry and get them the skills that no curriculum can give, right? Because you really need problem-solving skills that exist outside of academia. Of course, academia has great depth, and therefore it has to be part of this. And so, as Subodhji was saying, we've got to bring academia and industry closer together and solve this problem. It's not going to happen from one side alone.
No, thank you for sharing your thoughts on this. I'll just take, you know, Professor... sorry, Vikash first, and then you. Please go ahead.
Specifically on this, from the curriculum point of view at least, I just want to add a small caveat related to these curriculum updates: at least the centrally funded technical institutions are not a problem at all. They are free of all those restrictions, actually. If I have to start a new course from the next semester, I'm free to run it. So such restrictions on curriculum updates, as far as I know, do not bind the CFTIs, and they are quite okay.
It was not only about CFTIs, because India is 1.4 billion people, right, and the majority of them are in tier 2. My basic problem is the state technical institutions. The talent which comes from state technological universities is the best talent, and these people need scholarships, these people need multilingual support, and these teachers also need training. There's a very large layer of state institutions, because education is both a centre- as well as state-funded subject, and we are in a quagmire where new regulators are coming in, old regulators are falling away, and we need to identify how to do it. But my basic point was not about CFTIs; the centrally funded institutions are much better off.
But still, you know, there is the amount of manpower you need for developing AGI-kind systems, and it is yet to be seen, in just a matter of five years, how this hypothesis works out, whether we are able to generate something in artificial general intelligence. I think all of us will have to contribute towards this transformational change, right from academia to the industry to the policymakers like us. It becomes important. We understand that speed alone is not what is required; to develop the solution, speed is required in terms of how the solutions get developed by virtue of doing the right things, right?
And Vikash, I think you have seen this, because you come from the AI learning space, right? So my only question to you is: you would have seen the conventional way of doing AI education in the past, and how that has changed today, right? Are we still looking at the conventional classroom mechanism for AI learning, or, as Sarab said, is it not about learning but practicing while learning, right? So what's your input around the same?
So, in my view, conventional or traditional trainings focus heavily on theory, mathematics and model architecture. Those foundations are important, but for industry readiness we require three additional layers. First is applied problem solving: learners must work on real data sets, focus on domain-specific knowledge, and work with deployment scenarios. Second is production exposure: knowing how your model moves from your notebook environment to real, scalable and secure systems, and how production happens. And the last is...
So, when we talk about classroom learning, and learning the mathematics, how do you see the new tools and technologies being used for training? For example, are there any examples you can quote? We have seen that students are now able to go beyond the typical classroom learning; what other tools and technologies are they being exposed to, so that learning increases and the speed of learning becomes faster?
So basically, in our sector, we are utilizing AI to assess skill gaps. There are now tools which, based on the participant's profile, are able to assess the learning gaps and recommend adaptive learning, which eventually helps increase the employability outcome. So this is how AI is helping today.
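The gap analysis described here, comparing a candidate's current skills against a target job's requirements and recommending learning for what is missing, can be sketched minimally. All skill names and the course catalogue below are hypothetical; real platforms use far richer profiles and models than a set difference.

```python
# Minimal sketch of AI-assisted skill-gap analysis: find the missing
# skills, then recommend a course for each. Names are hypothetical.
def skill_gap(candidate_skills, job_requirements):
    """Return the required skills the candidate does not yet have."""
    return sorted(set(job_requirements) - set(candidate_skills))

def recommend_courses(gaps, catalogue):
    """Map each missing skill to a course, skipping skills with no match."""
    return [catalogue[g] for g in gaps if g in catalogue]

catalogue = {
    "prompt engineering": "LLM Fundamentals (4 weeks)",
    "model deployment": "MLOps Basics (6 weeks)",
}

gaps = skill_gap(
    candidate_skills=["python", "statistics"],
    job_requirements=["python", "prompt engineering", "model deployment"],
)
print(gaps)                             # ['model deployment', 'prompt engineering']
print(recommend_courses(gaps, catalogue))
```

The point Vikas makes is the framing: the output is a learning recommendation ("this is what is missing"), not a rejection.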
Great. I think we are almost towards the end, but we have one more set of questions. Just to keep things interactive, does anybody in the audience want to ask a quick question? Can somebody please bring the mic to them? Right, so I wanted to go one more round of questions, but because the audience is also limited and I don't want you to get bored with what we are speaking, can anybody ask one or two questions?
Thank you. Hello everyone, namaskar. (Just quickly, you can speak in Hindi; please tell us your name.) My name is Vikram Tripathi. I am from a village in Prayagraj, and the upcoming elections are the panchayat elections, in which I am going to participate. In the district panchayat election, one district panchayat member covers 25 villages. So if I win the election, then in the first year, which are the three sectors where I should use the AI tools or software that are available? And secondly, is it possible that private companies, CSR funds... Thank you.
One bias index is produced depending upon metrics. For one bias, I can use a number of metrics, and the results of all the metrics are clubbed to find the bias index for one particular parameter. Then, a system can have bias due to many things, and the different bias indexes are clubbed to find one fairness index. The fairness index ranges from 0 to 1: if it is 1, the system is considered fair; if it is 0, it cannot be used. But in practice, the fairness index will be somewhere between 0 and 1. Then it will also depend upon the user, how much fairness he wants in the system. If the system is used to suggest what song you would like to hear, then some bias may be accepted.
If the system is supposed to identify whether a soldier is an enemy or our own, then no bias can be accepted. So that can be used by the deployer. Those metrics, or the framework we have suggested, can be used by the developers also: the engineers involved in developing those systems can test whether their models are fair or not. And it can be used by the regulators also: the regulator, the government, may say that for such a sector the system should be tested and should have at least this much fairness level. Similar to fairness, we have got one standard for robustness also, which can be used to check if the system gives consistent results in different situations.
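The aggregation Dr. Devinder outlines, several metrics clubbed into a bias index per parameter, the bias indexes clubbed into a single fairness index in [0, 1], judged against a use-case-dependent threshold, can be sketched as follows. Plain averaging is assumed purely for illustration; the actual TEC standard may prescribe different metrics and combination formulas.

```python
# Illustrative sketch of the fairness-index aggregation described above.
# Averaging is an assumption; the TEC standard's formulas may differ.
def bias_index(metric_scores):
    """Club several metric results into one bias index for a parameter."""
    return sum(metric_scores) / len(metric_scores)

def fairness_index(bias_indexes):
    """Club per-parameter bias indexes into one fairness index in [0, 1].

    Here 1.0 means fully fair and 0.0 means unusable, as described above.
    """
    return 1.0 - sum(bias_indexes) / len(bias_indexes)

def acceptable(fairness, threshold):
    """The deployer sets the threshold: low for a song recommender,
    effectively 1.0 for a friend-or-foe system."""
    return fairness >= threshold

gender_bias = bias_index([0.10, 0.20])   # two hypothetical metric results
region_bias = bias_index([0.05, 0.05])
f = fairness_index([gender_bias, region_bias])
print(round(f, 2))                        # 0.9
print(acceptable(f, threshold=0.8))       # acceptable for a music recommender
print(acceptable(f, threshold=1.0))       # not acceptable for a targeting system
```

The same three roles the speaker names map directly onto the sketch: developers test their models with `fairness_index`, deployers choose `threshold`, and regulators can mandate a minimum threshold per sector.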
Great. And I am sure these standards are available in the public domain; they are not at draft stages.
Because if we don’t recognize the deficiencies and start to regard AI as an oracle that always tells us the truth, we are going to get into trouble. So these are very, very important aspects apart fro…
EventOnline education | Capacity development | Future of work Will.i.am believes these three qualities are essential for success in any field in the AI era. He emphasizes that audaciousness is needed to c…
EventA critical million-person talent gap exists across the semiconductor ecosystem, spanning from field service engineers to process developers and device engineers The focus should be on developing broa…
Event: Future workforce needs different skills, including critical thinking, judgment capabilities, and empathy when working with machines
Event: Critical thinking as essential human skill
Event: This comment was insightful because it challenged one of the most fundamental structural assumptions of higher education – the time-based degree system. It suggested that AI acceleration might require…
Event: Network operators increasingly rely on AI for a wide range of tasks, from network planning (e.g. using algorithms to identify the best placement for base stations, taking into account issues such as use…
Topic: Artificial intelligence and telecommunications complement each other to form the backbone for the intelligence era. Telecom networks are emerging as the primary carriers of AI, while AI itself is beco…
Event: Thank you, Devashish and GSMA, for this particular session. It’s a session of particular interest to me as a user in the digital ecosystem, and of course to the entire digital fraternity, because if ther…
Event: Example of a rural Indian farmer using early GPT models to reason over farm subsidies in a local language and complete forms. Nadella argues that AI models and their outputs are more readily available wor…
Event: Shekhar emphasised that this transformation necessitates three critical strategies for effective response. First, organisations must support innovative models such as peer-to-peer learning platforms l…
Event: Collaborative Ecosystem Building: The event highlighted partnerships between STPI, National Productivity Council, and other stakeholders in fostering a supportive startup environment that enables Ind…
Event: Furthermore, the discussions underscored the importance of establishing frameworks and infrastructures that support distributed training. This approach can help in spreading the computational load and…
Event: And before we go to Rebecca, just from an India perspective, PM Modiji talked about Manav yesterday and the AI vision. Through there, there was a lot of focus on validity and governance, so standards …
Event: Roy Jakobs argues that the healthcare industry must establish self-regulation standards for AI implementation, since regulation cannot keep up with technology advancement. Building trust requires fair …
Event: Bias, discrimination, and fairness: Are biases being propagated with the data sets used to train algorithms? How transparent and explainable are the decisions?
Blog: Infrastructure | Development | Legal and regulatory | Technical Standards and Implementation
Event: “Of course, there would be a number of challenges, but I think, as I mentioned, one doesn’t need to really control every layer of the resources that is there, and while foundational resources the …
Event“Sh Subodh Sachan has 27 years of experience across industry and government and moderated the discussion.”
The knowledge base identifies Subodh Sachan as Director of STPI headquarters with 27 years in industry and government and notes that he moderated the panel discussion [S1].
“The STPI “Skill‑Up” programme partners with multiple training organisations across India and will launch several regional training hubs.”
S4 confirms the existence of the STPI Skill-Up programme and that it works with partners and collaborators; S93 adds that STPI already operates 70 centres (62 in tier-2/3 cities), which can serve as the regional hubs mentioned [S4] and [S93].
“STPI plans to expand the Skill‑Up network to reach a large number of Indians by 2030.”
S92 states that India has committed to skilling up 10 million people by 2030, providing quantitative context for the programme’s expansion goals [S92].
“Critical thinking is more important than technology; practitioners must question AI outputs and avoid treating AI as an oracle.”
Both S101 and S102 stress the need to validate AI results and invest in critical-thinking skills, echoing the speaker’s warning about outsourcing thinking to AI [S101] and [S102].
“Future AI professionals need a solid grounding in hardware and should be aware of the large energy disparity between current GPUs (≈500‑700 W) and the human brain (≈20 W), prompting research into neuromorphic computing.”
S28 discusses AI-powered chips and the skills required for India’s next-gen workforce, highlighting hardware expertise and energy efficiency as key focus areas, which adds nuance to the speaker’s point [S28].
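As a rough back-of-envelope comparison (a sketch using only the wattage figures quoted above; the true efficiency gap depends on workload, hardware generation, and how computation is measured):

```latex
\frac{P_{\text{GPU}}}{P_{\text{brain}}}
\approx \frac{500\text{--}700\ \mathrm{W}}{20\ \mathrm{W}}
\approx 25\text{--}35\times
```

Even by this crude power-draw ratio, a single current GPU consumes on the order of 25–35 times the power of a human brain, which is the disparity motivating the neuromorphic-computing research mentioned by the speaker.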
“Next‑gen AI talent must understand regulatory frameworks governing AI.”
S90 notes that AI is treated as critical infrastructure and emphasizes the need for capacities to articulate regulatory and standards issues, confirming the importance of regulatory awareness for talent [S90].
The panel shows strong consensus on four pillars: (1) problem‑solving and domain‑centric AI skills; (2) the need for agile, autonomous curricula; (3) the imperative of ethical standards, fairness and safety; (4) viewing AI as core infrastructure requiring hardware efficiency and standards; plus a shared belief in AI‑driven skill‑gap assessment tools. These converging views suggest a coordinated path forward involving curriculum reform, industry‑academia collaboration, standards development, and investment in AI‑enabled training ecosystems.
High consensus – most speakers independently arrived at similar conclusions across technical, educational and ethical dimensions, indicating a solid foundation for policy and programmatic action.
The panel exhibits moderate disagreement centered on the priorities for AI talent development (critical thinking vs domain expertise vs ethical and problem‑definition skills) and on the mechanisms for curriculum reform (institutional autonomy versus policy‑level overhaul). There is also a clear split on the importance of hardware knowledge and security versus software‑level safety measures. While all participants share the overarching goal of building a robust AI ecosystem in India, the divergent views on which competencies and institutional levers are most critical could slow coordinated action and policy implementation.
Moderate disagreement with implications for fragmented policy approaches; consensus on the need for AI skill development exists, but differing emphases may lead to parallel initiatives rather than a unified national strategy.
The discussion was shaped by a handful of pivotal insights that moved it beyond a generic talk on AI talent gaps. Early remarks on critical thinking and the hardware‑energy gap broadened the talent definition to include mindset and interdisciplinary expertise. The T‑shaped model and the three‑pillar framework (technical, ethical, problem‑solving) provided concrete structures that participants repeatedly referenced. Kunal Gupta’s view of AI as societal infrastructure and his critique of curriculum bureaucracy introduced a socio‑economic dimension, prompting calls for rapid, multilingual, and industry‑aligned education reforms. Sector‑specific forecasts, especially the 6G AI‑embedded network, anchored the conversation in concrete future skill requirements. Finally, the introduction of fairness and robustness indices gave the debate a measurable, policy‑oriented anchor. Collectively, these comments redirected the dialogue from abstract skill shortages to a nuanced, multi‑layered roadmap encompassing mindset, interdisciplinary knowledge, ethical safeguards, curriculum agility, and sector‑specific standards.
Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.