NextGen AI Skills Safety and Social Value – technical mastery aligned with ethical standards

20 Feb 2026 15:00h - 16:00h


Session at a glance: Summary, key points, and speakers overview

Summary

The panel, chaired by Sh. Subodh Sachan, examined the widening AI talent gap in India and how the nation can build a next-generation AI ecosystem to sustain rapid industry transformation [3-5][8][9]. Participants agreed that AI is reshaping business and the workforce, requiring people not only to use AI tools but to coexist with an evolving AI ecosystem [5][6-7]. Dr. Sarabjot emphasized that future AI practitioners must be critical thinkers who question AI outputs, recognize its deficiencies, and be willing to take risks [39-46]. Dr. Devinder Singh added that next-gen AI talent should possess strong AI expertise, research ability, cross-sector experience, and awareness of regulatory frameworks [49-54]. Professor Jawar Singh highlighted the necessity of solid grounding in hardware and computer-science fundamentals to translate algorithms into efficient, secure implementations [57-60]. Professor Alok Pandey described the ideal AI professional as “T-shaped,” combining deep domain knowledge with fluency in AI techniques and skills in red-teaming and containment [63-66]. Kunal Gupta and Vikas Srivastava stressed that beyond technical mastery, ethical judgment and real-world problem-solving are essential components of next-gen AI talent [84-87][71-78].

Dr. Sarabjot further noted that assessing talent should focus on problem-solving ability, self-directed learning, and creativity rather than mere familiarity with libraries [103-110]. Both Kunal and Vikas pointed out that many candidates struggle to define problems correctly, a skill they consider half of any solution [205-208]. The panel identified a systemic lag in academic curricula, calling for de-bureaucratised, fast-moving curricula and greater autonomy for institutions, especially state technical colleges, to keep pace with AI advances [239-244][283-291].

In the telecom sector, Dr. Devinder explained that 6G will embed AI in every component, requiring engineers to master machine learning and adhere to emerging AI standards, with the government already publishing relevant guidelines [124-138][140-147]. Addressing algorithmic bias and robustness, Dr. Devinder outlined quantitative fairness and bias indices that can be used by developers, regulators, and deployers to ensure trustworthy AI systems [320-327]. The discussion concluded that closing the AI talent gap demands coordinated action among academia, industry, and policymakers to foster critical thinking, interdisciplinary expertise, ethical awareness, and agile education reforms, thereby enabling India to leverage AI as an infrastructure of intelligence [71-78][239-244][315].


Key points

Major discussion points


The AI talent gap and the competencies needed for “next-gen” AI professionals – Panelists repeatedly stressed that future AI talent must go beyond tool-level knowledge. Critical thinking, the ability to question AI outputs, risk-taking, and a solid grounding in both algorithms and hardware are essential [39-46][57-60][63-66][83-86]. A “T-shaped” profile (deep domain expertise plus fluency in AI and red-team/containment skills) was highlighted as the ideal model [63-66]. Vikas added that technical mastery, ethical judgment and real-world problem-solving are the three pillars of next-gen talent [83-86].


Curriculum reform and the need for faster, more flexible education pathways – Several speakers pointed out that current curricula are too slow and bureaucratic, especially in state technical institutions, and that universities must gain autonomy to create rapid, industry-aligned programs [252-256][279-282][283-293]. Alok called for “de-bureaucratised” curricula, more faculty training, and stronger industry-university MOUs to keep pace with AI’s velocity [237-250]. Sarabjot described a “passion-project” model that pairs students with industry mentors to fill gaps that formal curricula cannot [259-276].


AI as foundational infrastructure that will reshape sectors, especially telecom and vernacular services – Kunal described next-gen AI as an “infrastructure of intelligence” that multiplies human reasoning and enables vernacular-language interfaces [71-78]. Devinder explained how 6G will embed AI in every network component, shifting from static planning to self-learning, edge-distributed decision-making [124-136]. Subodh linked this to the need for sector-specific skill sets and standards for AI-enabled operations [148-152].


Skilling initiatives and ecosystem building – Subodh outlined the STPI “Skill-Up” programme, the creation of regional training hubs, and a partner network of 18 training organisations to deliver AI up-skilling at scale [9-12]. Vikas noted the use of AI-driven adaptive learning tools that assess individual skill gaps and recommend personalised pathways [315-316]. Kunal emphasized a data-driven “employability intelligence layer” that matches market-demanded jobs with candidate capabilities [212-219].


Standards, bias, fairness and evaluation of AI systems and talent – The discussion moved to the importance of regulatory standards, bias indices, and fairness metrics for trustworthy AI deployment [320-327]. Devinder highlighted existing telecom AI standards and the need for engineers to follow them [142-147]. The panel agreed that evaluating talent must include problem-definition ability, ethical awareness, and the capacity to work within these standards [204-209][320-327].


Overall purpose / goal of the discussion


The session was convened to diagnose the current AI talent gap in India, explore what “next-gen” AI expertise should look like, and chart concrete actions (curriculum reform, industry-academia partnerships, national skilling programmes, and standards development) to build a robust AI ecosystem that can drive economic transformation and inclusive societal impact.


Overall tone and its evolution


The conversation began with a forward-looking, optimistic tone, celebrating AI’s transformative potential and the launch of new skilling initiatives [3-5][9-12]. As the panel delved into specific challenges, the tone shifted to urgent and problem-focused, highlighting gaps in education, industry readiness, and regulatory frameworks [204-209][252-256][320-327]. Throughout, the discourse remained collaborative and respectful, with speakers building on each other’s points and repeatedly calling for joint action across government, academia, and industry.


Speakers

Professor Dr. Jawar Singh – Role/Title: Professor, Indian Institute of Technology Patna; Founder, Kuturna Labs.


Areas of Expertise: AI algorithms, hardware implementation, neuromorphic/brain-inspired computing, AI product development, hardware security. [S1]


Dr. Sarabjot Singh Anand – Role/Title: Co-founder & Chief Data Scientist, TATRAS; Co-founder, Sabath Foundation.


Areas of Expertise: Artificial intelligence, data science, talent development, social-impact AI solutions, AI education and mentorship. [S2]


Vikash Srivastava – Role/Title: Chief Growth Strategist, Vincis IT Services Private Limited.


Areas of Expertise: Enterprise consulting, cloud workforce upskilling, AI talent reskilling, industry-focused AI training. [S3]


Kunal Gupta – Role/Title: Managing Director, Mount Talent Consulting.


Areas of Expertise: Talent advisory, recruitment, AI-driven skill-gap analysis, job-search portal operations, industry-academia talent alignment. [S5]


Professor Dr. Alok Pandey – Role/Title: Professor and Dean, UP Jindal University.


Areas of Expertise: Finance, governance, higher education, fintech, AI applications, curriculum development, academic-industry collaboration. [S7]


Dr. Devinder Singh – Role/Title: Deputy Director General, Department of Telecommunications (TEC).


Areas of Expertise: Telecom standards formalisation, AI integration in telecommunications, 6G technology, AI governance and regulatory frameworks. [S9]


Audience – Role/Title: General participants (e.g., Vikram Tripathi, village resident and aspiring panchayat candidate).


Areas of Expertise: Not specified.


Sh. Subodh Sachan – Role/Title: Director, SGPA Headquarters; Moderator of the session.


Areas of Expertise: AI ecosystem development, skilling initiatives, industry-government liaison, national AI policy implementation. [S14]


Additional speakers:


None identified beyond the listed speakers.


Full session report: Comprehensive analysis and detailed insights

The session opened with Sh. Subodh Sachan framing the discussion around a widening talent gap in India’s AI ecosystem. He argued that the present era is “the most exciting time in the industry because AI is transforming everything” – from business models to the workforce – and that success now depends on the ability to “co-exist with the whole AI ecosystem together” (see [3-5]). He highlighted his 27-year experience across industry and government and noted that “there is always a gap in opportunity” that must be addressed to sustain the transformative potential of AI (see [8]). To that end, he announced the STPI “Skill-Up” programme, which will soon launch multiple regional training hubs and currently partners with 18 training organisations across India, with plans to expand the network further (see [9-12]).


Sachan then introduced the panel: Professor Dr Alok Pandey (Dean, UP Jindal University) with three decades of experience in finance, governance and fintech; Professor Dr Jawar Singh (IIT Patna, founder of Kuturna Labs); Dr Devinder Singh (Deputy Director General, Department of Telecommunications) with expertise in standards; Dr Sarabjot Singh Anand (co-founder of TATRAS and Sabath Foundation); Vikas Srivastava (Chief Growth Strategist, Vincis IT Services) and Kunal Gupta (MD, Mount Talent Consulting) (see [13-31]).


The first substantive contribution came from Dr Sarabjot, who distinguished two camps in the AI workforce: those who generate the next wave of AI and those who use AI to become more efficient. He stressed that “critical thinkers … are more important than any technology as such” because “there is a great move towards outsourcing your thinking to AI” and warned against treating AI as an oracle, urging practitioners to recognise its deficiencies, question outputs and be willing to take risks (see [39-46]).


Dr Devinder Singh added a complementary perspective, asserting that next-gen AI talent must possess “strong expertise in AI” together with the ability to solve real-world problems, adapt to new technologies, conduct research across sectors and remain aware of regulatory frameworks governing AI (see [49-54]).


Professor Dr Jawar Singh broadened the technical scope by insisting that future AI professionals need a “solid grounding of hardware, solid grounding of computer science, or even the engineering domain” to map algorithms efficiently onto hardware and to ensure security, noting the stark energy gap between a typical NVIDIA processor (500-700 W) and the human brain (≈20 W) and calling for neuromorphic, brain-inspired computing as a research priority (see [57-60][153-164]).


Professor Dr Alok Pandey then presented a concise talent model: a “T-shaped” profile that combines deep domain specialisation, fluency in AI software and hardware, and the capability to perform red-team testing and containment of AI systems (see [63-67]). This model was echoed by Vikas Srivastava, who identified three pillars for next-gen talent – technical mastery, ethical judgement, and real-world problem-solving – and argued that professionals must know where AI fits and where it does not (see [83-87]).


Kunal Gupta described AI as “infrastructure of intelligence” that multiplies human reasoning, creativity and values, highlighted its potential to democratise access through vernacular-language interfaces, and drew a parallel with how TikTok expanded content creation beyond English-speaking users (see [71-78]). He also cited AI-enabled hydroponics as an example of how AI can create high-yield, pesticide-free agriculture without dependence on weather (see [220-227]).


When asked how fresh AI talent should be evaluated, Dr Sarabjot outlined a practical rubric centred on problem-solving ability, self-directed learning, curiosity and creativity, rather than mere familiarity with libraries. He illustrated this with TATRAS’s “passion-project” approach, where students work on real customer problems under mentorship from industry experts, thereby gaining domain insight that “doesn’t matter what technology you use” as long as the solution solves the problem (see [103-110][259-276]).


Vikas Srivastava argued that conventional classroom training, which focuses heavily on theory, must be supplemented with (i) applied problem-solving on real data sets and (ii) production-level exposure that moves models from notebooks to secure, scalable systems; he indicated that a third, as-yet-unspecified layer would also be needed (see [303-311]).


Both Kunal and Vikas highlighted the role of AI-driven tools in scaling upskilling. Kunal described an “employability intelligence layer” that uses AI to perform scientific gap analysis, match market-demanded jobs with candidate profiles and recommend personalised learning pathways, while Vikas noted that adaptive learning platforms now assess individual skill gaps and suggest targeted upskilling, thereby improving employability outcomes (see [212-224][315-316]).
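The gap-analysis step both speakers describe can be sketched minimally as a set comparison between the skills a job demands and the skills a candidate holds. Everything below (job names, skill labels, the coverage score) is illustrative, not drawn from any panelist's actual platform:

```python
# Minimal sketch of the skill-gap "matching" idea: score each job by the
# fraction of its required skills the candidate already holds, and report
# the missing skills as a personalised upskilling recommendation.
def gap_analysis(candidate_skills, jobs):
    results = []
    for job, required in jobs.items():
        coverage = len(candidate_skills & required) / len(required)
        missing = sorted(required - candidate_skills)
        results.append((job, coverage, missing))
    # Rank jobs by how well the candidate already covers them.
    return sorted(results, key=lambda r: r[1], reverse=True)

# Hypothetical candidate profile and market demand.
candidate = {"python", "sql", "statistics"}
market = {
    "ML Engineer": {"python", "statistics", "pytorch", "mlops"},
    "Data Analyst": {"python", "sql", "statistics", "dashboards"},
}

for job, coverage, missing in gap_analysis(candidate, market):
    print(f"{job}: {coverage:.0%} covered, upskill in {missing}")
```

A production "employability intelligence layer" would presumably weight skills by market demand and use richer signals than exact skill names, but the core recommendation loop is of this shape.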


Curriculum reform emerged as a recurrent theme. Sachan linked the discussion to the National Education Policy, noting that it already grants greater autonomy for faster curriculum evolution (see [252-256]). Alok called for “de-bureaucratised” curricula, more autonomy for Institutions of Eminence, and stronger industry-university MOUs to increase faculty capacity and keep pace with AI’s velocity (see [237-250]). Jawar added that centrally funded technical institutes (CFTIs) already enjoy the freedom to launch new courses without delay, suggesting that the bottleneck lies primarily in state-run institutions, which suffer from lengthy syllabus-revision cycles and limited multilingual support (see [279-293]).


The panel also examined sector-specific AI integration. Dr Devinder Singh explained that 6G will embed AI in every network component, shifting from static planning to self-learning, edge-distributed decision-making, and that engineers will need to master machine-learning and adhere to emerging AI standards, many of which have already been drafted by the telecom standards body (see [124-138][140-147]). This vision aligns with the broader view that AI is becoming foundational infrastructure, requiring efficient hardware, standards and a mindset of curiosity and creativity to harness its potential (see [90-94][170-176][153-164]).


Ethical considerations were foregrounded throughout. Alok warned that every AI product must undergo red-team testing and containment, even suggesting that a technology should be “killed” if it behaves undesirably, and he referred to Mustafa Suleyman’s book The Coming Wave, cautioning that without robust safety and security mechanisms AI could become a “wave that drowns us” (see [186-188]). Devinder introduced quantitative bias and fairness indices (0-1 scale) and robustness metrics that can be used by developers, regulators and deployers to ensure trustworthy AI, emphasising that different applications tolerate different levels of bias (see [320-328]). Subodh reinforced the need for standards on fairness and robustness, noting that such guidelines are already publicly available (see [166-168][332]).
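The session did not spell out the formulas behind these indices. One standard metric on a 0-1 scale that fits the description is the demographic parity difference, sketched here on hypothetical toy data:

```python
# One common fairness metric on a 0-1 scale: the demographic parity
# difference, i.e. the largest gap between groups in the rate of
# favourable model outcomes (0 = perfectly equal rates across groups).
def demographic_parity_difference(outcomes, groups):
    rates = {}
    for g in set(groups):
        decisions = [o for o, gg in zip(outcomes, groups) if gg == g]
        rates[g] = sum(decisions) / len(decisions)
    return max(rates.values()) - min(rates.values())

# Toy example: eight loan decisions across two groups.
outcomes = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_difference(outcomes, groups))  # A: 3/4, B: 1/4, index = 0.5
```

In the spirit of Devinder's remark, a regulator or deployer could set different acceptable thresholds on such an index for different applications, since not every use case tolerates the same level of disparity.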


The discussion concluded with a set of agreed-upon actions. The STPI “Skill-Up” programme will roll out regional hubs and expand its partner ecosystem beyond the current 18 trainers (see [9-12]). Academia is urged to de-bureaucratise curricula, grant greater autonomy to institutions of eminence, and develop large-scale faculty development programmes and industry-university MOUs (see [237-250][283-292]). The panel advocated broader adoption of AI-driven assessment tools to personalise learning pathways and improve employability (see [215-224][315-316]). Finally, industry mentors will be paired with students on passion projects that address social impact, thereby bridging the gap between theoretical knowledge and practical problem-solving (see [259-276]).


Several issues remained unresolved. The audience asked about concrete AI tools suitable for district-panchayat governance and about the role of CSR funding, questions to which the panel did not provide a concrete answer (see [319]). Detailed road-maps for implementing AI standards in 6G, mechanisms for uniformly upgrading faculty across thousands of state institutions, and operational guidelines for continuous monitoring of bias, fairness and robustness indices were also left open (see [124-138][283-293][320-328]).


The discussion repeatedly emphasized four themes for closing India’s AI talent gap: (1) a T-shaped, interdisciplinary skill set that blends deep domain expertise, AI fluency, critical thinking, risk-taking and ethical judgement; (2) agile, autonomous curriculum reform supported by the NEP and de-bureaucratised processes; (3) sector-specific standards and hardware-aware training, especially for emerging 6G telecom networks; and (4) the deployment of AI-driven assessment and adaptive learning platforms at scale. These converging views underscore the need for coordinated action among government, academia and industry to transform AI from a set of tools into a national infrastructure of intelligence that can drive inclusive economic growth and social impact.


Session transcript: Complete transcript of the session
Sh. Subodh Sachan

Where do we see a talent gap? What is the requirement in terms of growing this whole ecosystem? Because when we come and we talk about today, it is the era, this is the most exciting time in the industry because AI is transforming everything. AI is transforming the way the businesses are being conducted. AI is transforming the whole workforce also because it’s not about what you are able to do, but it’s about co-existing with the whole AI ecosystem together. So my name is Subodh and I’m director of SGPA headquarters. I’ve been part of the industry, I’ve been part of the government for almost 27 years. And being in the space of technology, being in the space of working closely within the startup ecosystem, within the academias, there’s always a gap in opportunity which we have witnessed.

And that’s why this particular topic today is very, very close to my heart in terms of how we ensure the industry moves forward, how we ensure that AI as a technology can bring transformative changes overall. So I am happy that, very briefly, today’s discussion will align very closely with the national efforts. I am sure, when you talk about the NDIAI overall theme, some of you have witnessed already that there is a lot of activity around skilling: there is already a 10 lakh AI skilling drive which has been initiated, there is already a Skill India Digital program happening, which is a new version of Skill India altogether. Within STPI we have focused on, you know, a program called STPI Skill-Up, and I am happy to in fact announce also here that we are going to soon start multiple regional hubs for training, ensuring that training across technologies can happen. And we have been joined by a lot of our training partners; the current training partner ecosystem is around 18 training partners across India, and three of them are here with us today. As we move forward, we’ll add more such training partners and collaborators.

We are calling them partners and collaborators because the aim and the objective is all aligned within the ecosystem of skilling up, right? The STPI Skill-Up becomes that particular program. Let me introduce our speakers. I’m not taking much time. So it’s my privilege to introduce my first speaker, Professor Dr. Alok Pandey, a professor and dean of UP Jindal University, a very senior academic leader with almost three decades of experience, focused across finance, governance, higher education, and I think multiple implementations within the financial technology space. He also comes with a great perspective on AI. So let me request Professor Dr. Alok Pandey to come on stage and take the space. Please welcome Professor Pandey with a big round of applause.

A limited audience, but ensure that your applause covers the whole hall also. I’d also like to introduce and welcome Professor Dr. Jawar Singh. Professor Dr. Jawar Singh is also a professor from the Indian Institute of Technology Patna, and he is also the founder of Kuturna Labs. Just as we were chatting, he briefly told me about his successful exit, so he is not just a professor who is teaching but is also practicing the same in the form of his own ideas’ implementation. So we are literally, and I am sure, proud to have you, Dr. Jawar Singh; please welcome him on the dais. Let me also introduce Dr. Devinder Singh, Deputy Director General of TEC; this is the Department of Telecommunications in India. Dr.

Devinder Singh has spent multiple years in the standards formalization ecosystem, because, you understand, the telecom space especially is governed by standards, and these standards are very critical, because the interoperable ecosystem can only work if each and every device, each and every node, can be standardized and has to be standardized, right? So, Dr. Devinder Singh represents the government from the Department of Telecommunications. So, let me welcome with a warm applause from the audience, Dr. Devinder Singh on the dais, please. I’m also honored to be joined by Dr. Sarabjot. Dr. Sarabjot Singh Anand, he’s a co-founder and chief data scientist of TATRAS, also the co-founder of Sabath Foundation.

I have known, you know, Sarabjot Singh for almost, if I’m not wrong, seven, eight years now. And I’ve seen his passion in the space of AI. It’s not about just what he wants to achieve through his, you know, TATRAS data, but also about how, you know, and I think his work in the space of growing AI talent is well recognized, probably in some regions, especially in the region of Punjab, right? So, Dr. Sarabjot, thank you for being here. I request and welcome you on the dais as a pioneer of data science. A big round of applause for him. He also has roots in academia at Warwick and Ulster. He has a very global perspective in this particular space altogether.

Let me introduce our next two speakers, or two panelists, on this agenda today. Vikas Srivastava, he’s the chief growth strategist of Vincis IT Services Private Limited. Vincis is one of our technology training collaborators and a partner of the STPI Scalar program. Vikas has almost 16-plus years in enterprise consulting and cloud workforce upskilling. And I think Vikas has a great perspective to share in terms of what the reskilling requirement really is today within the whole ecosystem of the AI workforce. So, with a big round of applause, please welcome Vikash Srivastava. Last but not the least, let us also give a warm welcome to Kunal Gupta, managing director of Mount Talent Consulting. You know, he has been doing talent advisory, he runs his own job search portal, he works very closely with industry and has a clear perspective on what the industry requirement is and where the gap is. So, with a round of applause, Kunal, welcome on the dais as well. Thank you, everyone. And let me, you know, let me probably switch my place as well so it will be easier for us to start the whole discussion. Hello? Yes. So I think let me quickly start, and I will probably start from my immediate left, Dr.

Sarabjot, you know, when we talk about next-gen AI, and when we talk about next-gen AI as a space, next-gen AI as a whole, from the talent perspective, from the opportunity perspective, what is your perspective? Briefly, we’ll touch upon each one of you on defining next-gen AI so that the audience understands very clearly what next-gen AI really means. So over to you.

Dr. Sarabjot Singh Anand

So to me, there are two camps here, right? One is the people who want to generate the next wave of AI, and then, of course, there are the ones that have to use AI to be more efficient in their jobs. Now, for both of them, I think what is very, very important is that they have to be critical thinkers more than any technology as such, because there is, you know, a great move towards outsourcing your thinking to AI, and that’s a problem. We need to recognize that AI is not perfect. We need to recognize that there are certain deficiencies in it, and therefore we have to question what we get from that AI. And if we can get people who can critically think about the problem they are trying to solve and then take risks, I think risk-taking is going to be another very, very important aspect, and having a foundational understanding of what is possible today with AI and what is not possible today with AI.

Because if we don’t recognize the deficiencies and start to regard AI as an oracle that always tells us the truth, we are going to get into trouble. So these are very, very important aspects apart from of course technology. Thank you.

Sh. Subodh Sachan

Dr. Devinder Singh, your perspective on next-gen AI technology, in very brief.

Dr. Devinder Singh

Hello. Next-gen AI, I feel he should have strong expertise in AI and he should have skills to solve real-world problems also. And he should adapt to new technologies also. He should be able to work in research. He should be able to work in different sectors also. And above all, I feel he should be aware of the regulations in the sector and in AI also. Thank you.

Sh. Subodh Sachan

Thank you. Yes.

Professor Dr. Jawar Singh

Yeah, hello. So to me, actually, the next-gen AI talent should not only be aware of the AI algorithms, but basically should be able to make customer-facing products or solutions. And they should understand not only the algorithms, but the way those algorithms are mapped onto the hardware. To me, a solid grounding of hardware, solid grounding of computer science, or even the engineering domain, is a must, actually. Thank you.

Sh. Subodh Sachan

Yes. Professor Alok, sorry for my mistake in pronouncing your name wrong. Yes.

Professor Dr. Alok Pandey

Thank you. I think the next-gen AI is largely a T-shaped thing. You need to be domain specialists, deep domain specialists. You need to be fluent in AI skills, whatever software, hardware, etc. you are looking at. And then you should be able to understand red teaming and containment. So, if you have these three, then probably we will be able to solve most of the problems we face in India today.

Sh. Subodh Sachan

Please, Kunal.

Kunal Gupta

I think your question is very important. What do I understand, or what do we understand, by next-gen AI? You know, next-gen AI is infrastructure. It’s an infrastructure for intelligence: like you currently have this infrastructure wherein we are able to express our views and they go out to the world, you know, the next generation of AI is like this infrastructure, meant to multiply our intelligence, our reasoning, our research, our values, our creativity, our judgments, and what the future holds for us. You know, we are going to see a new wave of new materials; for a very long time we haven’t seen any major materials coming, apart from the basic alloys that we have been using, and the process changes which are going to come about in the next generation with the use of next-generation AI and the generation of models. You know, we talk about many things about differentiation in society, from a digital divide to this new edge AI divide, but it could also at the same time help us reach out to an inclusive society in general, with vernacular languages, you know, multiplying and extrapolating the reach of what a normal common man can do. Earlier they were dependent on languages like English, but with the expansion of next-gen AI platforms and tools, local vernacular languages wherein you can speak and give instructions to the computer in Hindi, in your local languages, and get access to data and knowledge.

Like I said, you know, you could just build anything. We have seen this with a tool called TikTok, you know, a tool which started about 10-12 years back. And it created a wave of influencers; otherwise a language or a platform meant only for the English-speaking and the literate, you know, went on to the masses. So I think next-gen AI, like I said, is just in one word an infrastructure of intelligence, multiplying our ability to think and, you know, make judgments in the future as well. Thank you.

Sh. Subodh Sachan

Very well said. It is the infrastructure-level intelligence which can be, which has to be created and which defines the next-gen AI. And carrying forward the same thought, I’ll ask Vikas to share his opening remarks on the next-gen AI.

Vikash Srivastava

Thank you. So I think most of the important aspects have already been covered by the panel. What I wanted to add is, for me, the next-gen talent combines three important things. First is technical mastery. Second is ethical judgment. And third is real-world problem-solving capabilities. So we need people who understand, as I said, you know, people should know where AI fits and where AI doesn’t, right? So I think this is the most important thing which I wanted to add. Thank you.

Sh. Subodh Sachan

I think for the audience, it is important to understand that when we talk about next-gen AI, and we talk about next-gen AI talent and the next-gen AI talent gap, right, we got a clear perspective, right from critical thinking, going to the level of not just, you know, opening up the layers of the AI, but from the perspective that one has to start thinking about the new ways and new layers in which AI technology is having an impact. Whether it is the new materials, whether it is, you know, the infrastructure intelligence again which we talked about, whether it is the foundational knowledge and the foundational, you know, algorithms which we talked about.

The next-gen AI talent gaps exist everywhere. And accordingly, you know, I think I will ask Sarabjotji now to talk on something specifically, you know, from your perspective of both TATRAS and Sabath Foundation. You have seen the whole AI evolution. And you have seen the gaps which have been there. And you have tried to fill the gaps already. So my question to you is, you know, when you talk about the evaluation of fresh AI talent, what is your approach? Because that approach will lead us, you know, in terms of ensuring how this whole space will grow, right. So, your opening remarks on that here.

Dr. Sarabjot Singh Anand

Sure, thank you. So, you know, when we look at talent today, what we assess is their problem-solving skills. We look at, you know, how keen they are on learning themselves, whether they have taken control of their own agency in learning, because what’s happening today, and I’ve seen this over the years, right, is that a lot of students are very focused, because they want to get a job, on learning libraries. You know, even in 2018, when we started Sabath Foundation because we found there was a huge gap in AI skilling here in India, we found that till we got them to program a neural network they felt they weren’t doing anything, right? And now of course it’s LLMs; everybody wants to learn LangChain and that’s about it. But they have to understand the foundations; if the foundations are weak, we are going to do interesting things but are not going to do amazing things. And so the focus has to be on building a strong foundation, increasing their curiosity in terms of what they are doing, and getting them to think about how they can be creative in the solutions that they are engineering for their customers.

Now, in Tatras, we work with startups in the US and develop their AI for them. To do that, somebody mentioned domain being very important. What we are constantly training our folks to do is understand the problem from the customer's perspective. It's not just about algorithms. When you create a solution, a successful solution is one that solves the problem; it doesn't matter what technology you use. And that is a key differentiation between the training that we provide and what is available otherwise in terms of just skilling on libraries.

Sh. Subodh Sachan

Thank you. And for all the people sitting here, the most important part, as Sarabjotji said, and any one of you can add to this whenever you feel like: curiosity is one part, because curiosity adds that element of learning to the human mind. When curiosity is there, creativity comes, and once you have curiosity combined with creativity, only then can you understand the customer problem and the customer ecosystem. It's not just about the customer ecosystem from the perspective where you make money. When you talk about social impact, even the people who benefit from the technology might not be directly paying you, but you are creating a great amount of social value out there, so it becomes important from their perspective. And when we combine these three and map that with AI, which is such a powerful technology right now, the solutions which you see outside are just a few examples of the wonders that can be created when you bring these three elements together. So along similar lines, I will ask Dr.

Devinder Singhji, because he comes from a background in the whole telecom space, and today we talk about AI-native telecom infrastructure. When you talk about AI-native, they need not just be AI-ready; they also need to bring AI into their own operations. When I say AI readiness, it's all about the scale, the kind of compute, technology and infrastructure they need to create. But how do they approach it from a standards perspective? Because you see the future: you are looking into 6G standards. What is the role of AI in standards creation, and what is its role in technology when standards are getting defined?

So from that angle, your thoughts on the same.

Dr. Devinder Singh

The present telecom engineers are very strong in networking. But the future network, the 6G that is coming, will be much more dependent on AI. In the present technology, 5G, AI is an add-on. But in 6G, each and every component has AI built in. At present, planning is done in a static way: components are selected and then the effect is seen. But in 6G, it will be a self-learning type of thing, so engineers will be required to know machine learning. And in the present case, whenever there is some fault, an alarm is generated and an engineer is supposed to take corrective action. But in 6G, AI will predict what kind of fault can come and take corrective action on its own.

And at present, most decisions are taken at the central level only. But in 6G, the intelligence will be distributed at the edge also, so decisions will be taken at a distributed level. So the engineer must be able to plan everything considering that decisions will be taken in a distributed manner. As far as the standards are concerned, standards for 6G are being finalized. They are not final, but it is already decided that each and every component will have AI. In addition to that, you were talking about standards: in TEC, the Telecommunication Engineering Centre, we have already published some standards on AI. So telecom engineers should also be aware of the standards which they are supposed to follow for implementing AI.

At present, telecom engineers are using AI, but in the future, telecom engineers will design, operate and use it. Most decisions will be taken by AI; the human will only supervise. Thank you.

Sh. Subodh Sachan

Thank you. When we talk about the telecom space, there are two or three critical things which are probably of interest to the audience, as Devinderji talked about: agentic AI, and agentic AI taking care of the operations part. From a skills perspective, it is important that when you look into the agentic AI ecosystem, you need to go deeper into the particular technology or sector, because each industry brings its own new challenges and problems even for the agentic AI ecosystem. And when you talk about infrastructure readiness, the whole telecom sector is one such sector, and I'll come back to you on the perspective of how the telecom sector is creating robustness.

Right. Next, I think I'll touch upon, Professor Jawar, when you look into the layer of the hardware and below, right?

Because when we talk about AI and the six layers that have been spoken about across the spectrum, the most promising and most important layer is not just about applications, but also about the hardware which is powering up the whole of AI, with the need and speed it demands.

Professor Dr. Jawar Singh

All right. So this is quite interesting, because very rarely do people talk about how those algorithms, those AI models, actually run. Honestly, these models are very expensive: expensive not so much in terms of cost, but in terms of power, though the cost is obviously associated with it. To take a simple example, a very basic NVIDIA processor consumes around 500 to 700 watts. But our human brain also has a very beautiful processor: it can compute a lot, and it consumes just 20 watts of power.

So you can see there is a huge gap between the processing capabilities of the processors that we have and the cognitive processor that we all have. That gap needs to be bridged, and there is a lot of research going on in this domain, which we usually call neuromorphic computing or brain-inspired computing, where these algorithms can be mapped onto hardware in an efficient manner. Another example I can give is DeepSeek: when it first surfaced in the market, NVIDIA's stock slumped quite severely, and the reason was that their model was quite efficient. So people thought, okay, we may do the same thing in a more efficient way. So we need people who not only think from the algorithm perspective but also from the hardware perspective. I will add one more term here, hardware security, because AI can be weaponized and can also be used for neutralization purposes.
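To make the scale of that gap concrete, here is a quick back-of-the-envelope calculation using only the wattage figures quoted above (the figures themselves are the speaker's approximations):

```python
# Back-of-the-envelope comparison of the power figures quoted above:
# a data-centre class GPU drawing roughly 500-700 W versus the
# human brain's roughly 20 W.
gpu_low_w, gpu_high_w = 500, 700   # NVIDIA processor draw, as quoted
brain_w = 20                        # approximate brain power budget

ratio_low = gpu_low_w / brain_w
ratio_high = gpu_high_w / brain_w
print(f"The GPU draws {ratio_low:.0f}x to {ratio_high:.0f}x "
      f"the brain's power budget")  # 25x to 35x
```

A gap of one to two orders of magnitude per processor is what motivates the neuromorphic-computing research the speaker mentions.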

So the hardware plays a very crucial role. Algorithms are fine, but we need people who understand not just the algorithm, but everything all the way down to the hardware implementation: how your implementation is secure, trusted and reliable. Thank you.

Sh. Subodh Sachan

You touched upon the element of security, and when AI comes into play, it is not just the security of the algorithms; the other important things which have popped up are the bias, robustness and fairness of the algorithms. I am sure some of you will talk about that as a gap from a talent perspective: how do we skill and reskill people who can fill these gaps across the layers, so that more of them come into the ecosystem. With that, I will ask Dr. Alok. When you look from a university and academia perspective, today we talk about population-scale AI implementation, and that means the whole critical-thinking part also needs to change. Hence academia needs to gear up to create that kind of curiosity and learning in students. So what is your take on the gap you see between industry and academia?

How are you gearing up your students for AI at that scale, from that particular element?

Professor Dr. Alok Pandey

We have large contracts. Say I have to do an M&A valuation, an M&A due diligence, and the Competition Commission has asked me whether I should go for this merger or acquisition or not. I am a lawyer; how do I do it? I can use generative AI software for that. I can do money laundering prevention, not just spam prevention, like the very effective spam prevention by Airtel and others. I can do money laundering calculations and identify which transactions work in which manner through generative AI. So we need to develop products along these lines. The second thing I would say is the safety and security of these products: how are we going to look at safe usage?

Now, there's a term which has come up: the coming wave. Mustafa Suleyman has written a book, The Coming Wave, and everybody cites it. This is the coming wave, and the wave is going to drown all of us if safety and security are not there. Every young person who uses AI needs to understand what red teaming is and what containment is: I should be able to kill my technology if it doesn't work in my favour. And finally, domain integration: AI healthcare, AI law, AI education, AI finance. All these levels need to be understood by educational institutions. If you ask me another question, how do we scale it up, then I'll of course speak on that later.

But I'll tell you that we need to really work out an infrastructure. We need to work on academic strength. We need a large number of trained faculty members. We need MoUs with Western countries; the major companies are based in China, Europe or America, and those universities are generating a lot of trained resources. Indian universities need to move forward in that direction. So I basically feel that yes, there is a huge gap today, and we need to answer these gaps through viable funding not just from the government, but also from industry.

Sh. Subodh Sachan

I tend to partly agree. The length and breadth of the AI ecosystem has changed dramatically everywhere. But when we look into Indian talent, and I strongly believe this because I have been in this industry for very long now, the kind of energy we are seeing in this hall, and in the conferences happening on the other side, shows that a huge talent pool has emerged, and they are generating very good solutions. Today, from a solution-producer perspective, India is not just doing something at the application layer or the agentic layer; they are also looking into the foundational layer, and that's why we saw the launch of the recent LLMs as well. When we look at, say, the launch of the Sarvam LLM or other LLMs, it's very clear people are seeing that there is a lot of data available in our country, and this data needs to be understood. As you talked about, take just one particular sector, law and justice. There is one company here, Lex Leges, and I was interacting with them yesterday. They have understood this problem. They have created their own model, not a large language model as such, but they have approached the problem with the same LLM mentality, and hence they have been able to address exactly what you described as the problem: how do you create solutions for that? It works on Indian data, on Indian contract law, on Indian past judgments. So that is the need of the hour. Whether you have an entrepreneurial mindset, sitting here in the audience, or want to get into this field as workforce, you need to clearly understand each and every domain, whether it is health, as you talked about, or law and justice, each with its own set of challenges and problems. And when there are challenges and problems,
with the right skill and right talent you can actually approach them and be very successful, and we see this as a leapfrog moment for each one of you, from the industry perspective also. So taking that thought forward, I'll ask Kunal. Kunal, you have been talking about skill gaps, especially working with students and working professionals. From your platform's perspective, from your own job portal and job placements, what is your take on the most commonly seen abilities that are required? In the short term, what are the skills they need to fill in? Whether it is learning to coexist with the LLMs out there, learning to do the coding, whether in AGI, as Professor Alok talked about, or creating new machine learning algorithms: what do you see as the typical problem in the short term where talent has to be ready?

Kunal Gupta

I think I see the problem as threefold when it comes to the skill gap, specifically in a dynamic country like India, where we are living across many generations: a generation which is far ahead in the future, and a generation which is far behind in terms of development, capability and education as well. The biggest skill gap that I see right now is application, and more importantly, how do we define a problem? What we do, out of whatever ecosystem we have built, is start copying others: this is the trend, so we need to go for this trend, without really understanding how to define the problem first.

Defining the problem is about 50% of the solution achieved in itself, in any sphere the person is in. As Dr. Saab said about usage in different fields, whether it is healthcare, law, or agriculture for that matter, which caters to such a huge population in our country. Who would have thought of hydroponics producing such huge results without soil, with no dependency on weather, where you can create your own environment for absolutely green vegetation in the best of atmospheres, without germs and without the application of pesticides? So coming back to your question, the skill gap is going to be defined sector by sector.

Different sectors are going to have different specific gaps at different application levels. When it comes to industry, what is the solution that we are providing as a company? Our aim is to develop an employability intelligence layer. How do we define the skill gap? On the basis of what kind of jobs are coming from the market, and, given those jobs, what the current skill set of the candidates is, we do a scientific gap analysis of what is missing. It's not that we just have a very nice applicant tracking system; we run a recommendation algorithm using a lot of AI. In my view, the aim is not to exclude or reject people using AI when it comes to skill-gap analysis. The aim is to show them: this is what is missing, this is what has to be developed. It is not rocket science that can't be developed; you take a course of one, three or six months, or you do it while working in another job role on the way to your ambitious job role. It takes time; nothing is built in a day. But more importantly, I think a bigger gap that I see right now is going to come as huge pressure on educational setups, whether at the university level or the school level. We keep talking about the fact that India's syllabus is not aligned to industry; it is about 20 years old. We don't update our syllabuses; it takes six committees five to seven years to come up with new curriculums.
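The gap analysis described here, comparing the skills a job demands against a candidate's current skill set and surfacing what is missing, can be sketched as a simple set difference. The skill names and the logic are illustrative assumptions, not the platform's actual recommendation algorithm:

```python
def skill_gap(job_skills: set[str], candidate_skills: set[str]) -> set[str]:
    """Return the skills the job requires that the candidate lacks."""
    return job_skills - candidate_skills

# Hypothetical job requirement and candidate profile
job = {"python", "sql", "prompt-engineering", "model-deployment"}
candidate = {"python", "sql"}

missing = skill_gap(job, candidate)
# The gaps become a learning recommendation (a 1-6 month course),
# not grounds for rejection, as the speaker emphasises.
print(sorted(missing))  # ['model-deployment', 'prompt-engineering']
```

A production system would of course sit this comparison on top of a skill taxonomy and job-market data rather than literal string matching.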

By the time the new curriculum is implemented, it has already gone obsolete from an industry perspective. I think the speed of growth of AI that we have seen in the last six months is going to put maximum pressure on the policymakers of the country, specifically those catering to core foundational education, higher skilling education, and, more importantly, the industry skilling that is needed to ensure people understand why productivity is needed and how productivity is achieved. Students need to understand that industry needs output: we need production, we need results. Industry cannot always bridge the gap. And in India, I'll have to say it, whether it is MSMEs or large industries, everybody has done their bit in terms of scaling those whom they select.

And the success that we see in conferences like these is a lot of people who have grown through industry, and how industry has scaled people up. Colleges will need to ensure that AI education is for all: application of AI for all. Output will increase. Output will lead to more analysis of how to improve production. Production leads to more research; research is going to lead to more efficiency in production. It's a loop. Currently, application is going to lead to higher output, in my view: higher output in terms of what an engineer can do in eight hours of work.

What a company can do in terms of annual revenues, what models can do, what processes can do. And based on that, it's a continuously running cycle. We can't sit back in a relaxed manner right now, especially in this changing world.

Professor Dr. Alok Pandey

I'll just add to what Kunal has said. We need to de-bureaucratize education today to a great extent. In fact, we brought in this concept of the Institution of Eminence, and I'm happy that I'm part of an Institution of Eminence, where we can create our own curriculum. Curriculum velocity is so high that you can't just give a command to faculty members to teach a particular course, especially in technology, and especially when you're talking about integrating with a particular domain, where the faculty has to work with other specialists, identify something, and the needs change frequently. And what has happened is not just that AI technologies have changed; the consumer and the user have started demanding change.

For example, if you look at crop insurance, the idea basically means that I should have satellite pictures and an understanding of whether a crop failed or not, and this is done best using AI. If I need to train my agriculture college students, who study in large agronomy institutions, I need quick delivery of the curriculum. Sadly, we don't have that; we don't have expertise in those areas. So if you de-bureaucratize the curriculum and allow more autonomy to institutions that are into technology, or at least technology applications, we'll have a much bigger national good at hand.

Sh. Subodh Sachan

And I think the start has already happened with the NEP, if I'm not wrong, right? The whole focus of the National Education Policy and the initiatives around it has been giving more autonomy and speed towards defining the curriculum. So I tend to see this as a problem which was there, but a lot of work has already happened now. When you were operating at a global level, from your Warwick experience, you would have seen these changes there. Do you see this coming to India at a similar speed?

Dr. Sarabjot Singh Anand

I don't, unfortunately. At Warwick, we actually have the Jaguar Land Rover research labs on campus, and we were interacting with them. Even 14 years ago, we were looking at tracking the cognitive load on a driver as they drove a vehicle, to understand whether we needed to take some preventative action before they caused an accident. Now, of course, we are saying we don't need a driver at all. So times are changing very quickly. When we started Sabudh, we realized that the curriculum is falling short and academics are not equipped to deal with the change that's happening. Even HR folks, when we look at it from an industry perspective, are not evolving quickly enough to evaluate candidates the right way.

So what we did in Sabudh was make the centerpiece of our training what we call a passion project, where we take students, train them in AI, machine learning and technology, but get them to think about how to solve a problem of social impact. And then we give them mentors from Tatras and from other organizations that are actually creating AI solutions for the global north, as they say. So now the students are getting mentorship. And I think the key thing we are missing today, which is shocking for a country our size: we have companies with lots of technologists who have no choice but to keep up with technology innovation.

At the same time, these people have to be trained to give back more. If we can get every person to be evaluated, or valued, based on how much they give back to others, then we can pair students with mentors in industry and get them the skills that no curriculum can give, because you really need problem-solving skills that exist outside of academia. Of course, academia has great depth, and therefore it has to be part of that. And so, as Subodhji was saying, we've got to bring academia and industry closer together and solve this problem. It's not going to happen from one side alone.

Sh. Subodh Sachan

No, thank you for sharing your thoughts on that. I'll just take, professor, sorry, Vikash first and then you. Please go ahead.

Professor Dr. Jawar Singh

On this specifically, from the curriculum point of view at least, I just want to add a small caveat: when it comes to curriculum updates, the centrally funded technical institutions are not a problem at all. They are free of all those restrictions. If I have to start a new course from the next semester, I'm free to run it. So such restrictions on curriculum updates, at least as far as I know, do not bind CFTIs, and they are quite okay.

Professor Dr. Alok Pandey

It was not only about CFTIs, because India is 1.4 billion people, and the majority are in tier 2. My basic problem is the state technical institutions. The talent which comes from state technological universities is the best talent, and these people need scholarships, they need multilingual support, and their teachers also need training. There's a very large layer of state institutions, because education is funded by both the centre and the states, and we are in a quagmire where new regulators are coming in, old regulators are falling away, and we need to identify how to do it. But my basic point was not about CFTIs; the centrally funded institutions are much better off.

But still, consider the amount of manpower you need for developing AGI-kind systems. It is yet to be seen; it's just a matter of five years and we'll see whether this hypothesis works, whether we are able to generate something in artificial general intelligence. I think all of us will have to contribute towards this transformational change, right from academia to industry to the policymakers. It becomes important: it is not speed for its own sake, but speed is required in terms of how the solutions get developed, by virtue of doing the right things, right?

Sh. Subodh Sachan

And now to Vikash. Vikash, you come from the AI learning space, so my only question to you: you would have seen the conventional way of doing AI education in the past, and how that has changed today. Are we still looking at a conventional classroom mechanism for AI learning, or, as Sarab said, is it not about learning but practicing while learning? What's your input on the same?

Vikash Srivastava

In my view, conventional or traditional trainings focus heavily on theory, mathematics and model architecture. Those foundations are important, but for industry readiness we require three additional layers. First is applied problem solving: learners must work on real datasets, focus on domain-specific knowledge, and work with deployment scenarios. Second is production exposure: knowing how your model moves from your notebook environment to real, scalable, secure systems, and how production happens. And last is…

Sh. Subodh Sachan

So when we talk about classroom learning and learning the mathematics, how do you see the new tools and technologies being used for training? For example, are there any examples you can quote? We have seen that students are now able to go beyond the typical classroom learning; what other tools and technologies are they being exposed to, so that learning increases and the speed of learning becomes faster?

Vikash Srivastava

So basically, in our sector we are utilizing AI to assess skill gaps. There are now tools which, based on the participant's profile, are able to assess the learning gaps and recommend adaptive learning, which eventually helps increase the employability outcome. So this is how AI is helping today.

Sh. Subodh Sachan

Great. We are almost towards the end, but we have one more set of questions. Just to keep it interactive, does anybody in the audience have one quick question? Can somebody please bring the mic to them? I wanted to go one more round of questions, but since the audience is also limited, I don't want you to get bored with what we are speaking about, so can anybody ask one or two questions?

Audience

Thank you. Hello everyone, namaskar. My name is Vikram Tripathi; I am from a village in Prayagraj. The upcoming elections are the panchayat elections, and I am going to participate in them: there is a district panchayat member seat covering 25 villages. If I win the election, then in the first year, which are the three sectors where I should use the AI tools or software which are available? And secondly, is it possible that private companies' CSR funds… Thank you.

Dr. Devinder Singh

One bias index is produced depending upon metrics. For one bias, I can use a number of metrics, and the results of all the metrics are clubbed to find the bias index for one particular parameter. Then, a system can have bias due to many things, and the different bias indices are clubbed to find one fairness index. The fairness index ranges from 0 to 1. If it is 1, the system is considered fair; if it is 0, it cannot be used. But in practice, the fairness index will be somewhere between 0 and 1. Then it will depend upon the user as well, and how much fairness he wants in the system. If the system is used to suggest what song you would like to hear, then some bias may be accepted.

If the system is supposed to identify whether a soldier is an enemy or one of our own, then no bias can be accepted. So this can be used by the deployer. The metrics, or the framework we have suggested, can also be used by developers: the engineers involved in developing those systems can test whether their models are fair or not. And it can be used by regulators as well; the government may say that for such a sector, the system should be tested and should have at least this much fairness. Similar to fairness, we have one standard for robustness also, which can be used to check whether the system gives consistent results in different situations.
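The pipeline Dr. Singh describes, metric results clubbed into a per-parameter bias index, bias indices clubbed into one fairness score in [0, 1], then checked against a use-case threshold, can be sketched as below. Averaging is an assumed combination rule for illustration; the actual TEC standard may prescribe a different aggregation:

```python
def bias_index(metric_scores: list[float]) -> float:
    """Club several bias-metric results for one parameter into one
    bias index (simple mean, assumed here for illustration)."""
    return sum(metric_scores) / len(metric_scores)

def fairness_index(bias_indices: list[float]) -> float:
    """Club per-parameter bias indices into one fairness score in
    [0, 1]: 1 means fair, 0 means the system cannot be used."""
    return 1.0 - sum(bias_indices) / len(bias_indices)

# Hypothetical metric results (0 = no bias detected, 1 = maximal bias)
gender = bias_index([0.1, 0.2])   # two metrics for one parameter
region = bias_index([0.3])        # a single metric for another

score = fairness_index([gender, region])

# The deployer picks a threshold by use case: a song recommender may
# tolerate some bias; a friend-or-foe system demands none.
THRESHOLD = 0.7
verdict = "acceptable" if score >= THRESHOLD else "rejected"
print(f"fairness = {score:.3f} -> {verdict}")
```

The point of the threshold is exactly the deployer's discretion the speaker mentions: the same fairness score can pass for a recommender and fail for a safety-critical system.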

Sh. Subodh Sachan

Great, and I am sure these standards are available in the public domain; they are not in draft stages.

Related ResourcesKnowledge base sources related to the discussion topics (18)
Factual NotesClaims verified against the Diplo knowledge base (6)
Confirmedhigh

“Sh Subodh Sachan has 27 years of experience across industry and government and moderated the discussion.”

The knowledge base identifies Subodh Sachan as Director of SGPA headquarters with 27 years in industry and government and notes he moderated the panel discussion [S1].

Additional Contextmedium

“The STPI “Skill‑Up” programme partners with multiple training organisations across India and will launch several regional training hubs.”

S4 confirms the existence of the SIPI/Skill-Up programme and that it works with partners and collaborators; S93 adds that STPI already operates 70 centres (62 in tier-2/3 cities) which can serve as the regional hubs mentioned [S4] and [S93].

Additional Contextmedium

“STPI plans to expand the Skill‑Up network to reach a large number of Indians by 2030.”

S92 states that India has committed to skilling up 10 million people by 2030, providing quantitative context for the programme’s expansion goals [S92].

Confirmedhigh

“Critical thinking is more important than technology; practitioners must question AI outputs and avoid treating AI as an oracle.”

Both S101 and S102 stress the need to validate AI results and invest in critical-thinking skills, echoing the speaker’s warning about outsourcing thinking to AI [S101] and [S102].

Additional Context (medium confidence)

“Future AI professionals need a solid grounding in hardware and should be aware of the large energy disparity between current GPUs (≈500‑700 W) and the human brain (≈20 W), prompting research into neuromorphic computing.”

S28 discusses AI-powered chips and the skills required for India’s next-gen workforce, highlighting hardware expertise and energy efficiency as key focus areas, which adds nuance to the speaker’s point [S28].

Confirmed (medium confidence)

“Next‑gen AI talent must understand regulatory frameworks governing AI.”

S90 notes that AI is treated as critical infrastructure and emphasizes the need for capacities to articulate regulatory and standards issues, confirming the importance of regulatory awareness for talent [S90].

External Sources (104)
S1
NextGen AI Skills Safety and Social Value – technical mastery aligned with ethical standards — – Dr. Sarabjot Singh Anand- Professor Dr. Jawar Singh – Professor Dr. Alok Pandey- Professor Dr. Jawar Singh
S2
NextGen AI Skills Safety and Social Value – technical mastery aligned with ethical standards — – Dr. Sarabjot Singh Anand- Professor Dr. Alok Pandey – Dr. Sarabjot Singh Anand- Professor Dr. Alok Pandey- Vikash Sri…
S3
NextGen AI Skills Safety and Social Value – technical mastery aligned with ethical standards — So, Dr. Sarabjot, thank you for being here. I request and welcome you on the dais of pioneer data science. A big round o…
S4
https://dig.watch/event/india-ai-impact-summit-2026/nextgen-ai-skills-safety-and-social-value-technical-mastery-aligned-with-ethical-standards — So, Dr. Sarabjot, thank you for being here. I request and welcome you on the dais of pioneer data science. A big round o…
S5
NextGen AI Skills Safety and Social Value – technical mastery aligned with ethical standards — -Kunal Gupta- Managing Director of Mount Talent Consulting, runs talent advisory and job search portal, works closely wi…
S6
AI technology aim to detect emotional distress and depression sooner — A University of Auckland researcher isdeveloping AI toolsto identify early signs of depression in young men. The work fo…
S7
https://dig.watch/event/india-ai-impact-summit-2026/nextgen-ai-skills-safety-and-social-value-technical-mastery-aligned-with-ethical-standards — We are calling them partners and collaborators because the aim and the objective is all aligned within the ecosystem of …
S8
NextGen AI Skills Safety and Social Value – technical mastery aligned with ethical standards — – Dr. Sarabjot Singh Anand- Professor Dr. Jawar Singh – Professor Dr. Alok Pandey- Professor Dr. Jawar Singh Professor…
S9
NextGen AI Skills Safety and Social Value – technical mastery aligned with ethical standards — -Dr. Devinder Singh- Deputy Director General of TEC (Department of Telecommunications), expert in telecom standards form…
S10
Building the Workforce_ AI for Viksit Bharat 2047 — -Dr. Jitendra Singh- Role/Title: Honorable Minister, Minister of State for Personnel, Minister of State for Personal Gri…
S11
WS #280 the DNS Trust Horizon Safeguarding Digital Identity — – **Audience** – Individual from Senegal named Yuv (role/title not specified)
S12
Building the Workforce_ AI for Viksit Bharat 2047 — -Audience- Role/Title: Professor Charu from Indian Institute of Public Administration (one identified audience member), …
S13
Nri Collaborative Session Navigating Global Cyber Threats Via Local Practices — – **Audience** – Dr. Nazar (specific role/title not clearly mentioned)
S14
NextGen AI Skills Safety and Social Value – technical mastery aligned with ethical standards — -Sh. Subodh Sachan- Director of SGPA headquarters, 27 years in industry and government, works in technology space and st…
S15
How AI Is Transforming Indias Workforce for Global Competitivene — -Pragya- (Role/title not specified, mentioned briefly at the beginning) -Sangeeta Gupta- Panel moderator (role/title no…
S16
Building the Workforce_ AI for Viksit Bharat 2047 — -Moderator- Role/Title: Event moderator, Area of expertise: Not specified -Shubhavi S. Radha Chauhan- Role/Title: Chair…
S17
From Innovation to Impact_ Bringing AI to the Public — Whilst maintaining an optimistic outlook, the discussion acknowledges important limitations and risks. Sharma emphasises…
S18
Scaling Trusted AI_ How France and India Are Building Industrial & Innovation Bridges — one of our keynote speakers, they said autonomous weapons are going to AI-based autonomous …
S19
Artificial intelligence as a driver of digital transformation in industries (HSE University) — The analysis offers a comprehensive examination of artificial intelligence (AI) and its impact on various sectors. One s…
S20
Designing Indias Digital Future AI at the Core 6G at the Edge — The panel discussion revealed that AI-driven applications will fundamentally change network traffic patterns, with uplin…
S21
Trusted Connections_ Ethical AI in Telecom & 6G Networks — Another thing I would also request you to dwell on is that ultimately the AI is going to come on the handsets. So once A…
S22
Opening address of the co-chairs of the AI Governance Dialogue — Tomas Lamanauskas: Thank you, thank you very much Charlotte indeed, and thank you everyone coming here this morning to j…
S23
Brain-inspired networks boost AI performance and cut energy use — Researchers at the University of Surreyhave developeda new method to enhance AI by imitating how the human brain connect…
S24
Agenda item 5: Day 2 Morning session — Chile:Thank you very much, Chairman. I’d like to thank you and welcome all those present and wish us success or work in …
S25
Keynote by Vivek Mahajan CTO Fujitsu India AI Impact Summit — “This will be powered by our Fujitsu Monaca chip, which is a two nanometer chip.”[28]. “In recent past, we’ve announced,…
S26
Knowledge in the Age of AI: World Economic Forum Town Hall Discussion — Critical thinking as essential human skill
S27
AI Transformation in Practice_ Insights from India’s Consulting Leaders — Future workforce needs different skills including critical thinking, judgment capabilities, and empathy when working wit…
S28
AI-Powered Chips and Skills Shaping Indias Next-Gen Workforce — A critical million-person talent gap exists across the semiconductor ecosystem, spanning from field service engineers to…
S29
Connecting open code with policymakers to development | IGF 2023 WS #500 — The discussion also addresses the need for skilled individuals in government roles. It is argued that attracting talente…
S30
AI (and) education: Convergences between Chinese and European pedagogical practices — This observation prompted Hao Liu to share BIT’s flexible academic system and their consideration of competency-based ev…
S31
Discussion Report: AI as Foundational Infrastructure – A Conversation Between Laurence Fink and Satya Nadella — Economic | Development | Future of work Fink positions AI as having transitioned from a future concept to a present rea…
S32
Telecommunications infrastructure — Network operators increasingly rely on AI for a wide range of tasks, fromnetwork planning(e.g. using algorithms to ident…
S33
Artificial intelligence (AI) – UN Security Council — Furthermore, the discussions underscored the importance of establishing frameworks and infrastructures that support dist…
S34
Building the AI-Ready Future From Infrastructure to Skills — The emphasis on open ecosystems, linguistic diversity, human oversight, and broad adoption provides a framework balancin…
S35
UNESCO Recommendation on the ethics of artificial intelligence — 117. Member States should support collaboration agreements among governments, academic institutions,  vocational educati…
S36
Harmonizing High-Tech: The role of AI standards as an implementation tool — Philippe Metzger:Thank you, Bilel. Maybe to be as succinct as possible, just would like to mention four areas, which I t…
S37
Opening Ceremony — Bogdan-Martin emphasized the importance of establishing trustworthy technical standards to guide AI development and ensu…
S38
Democratizing AI Building Trustworthy Systems for Everyone — “of course see there would be a number of challenges but i think as i mentioned that one doesn’t need to really control …
S39
Artificial intelligence — Despite their technical nature – or rather because of that – standards have an important role to play in bridging techno…
S40
NextGen AI Skills Safety and Social Value – technical mastery aligned with ethical standards — It was not only for CFTIs because India is 1 .4 billion people, right, and majority of it are in tier 2. My basic proble…
S41
AI-Powered Chips and Skills Shaping Indias Next-Gen Workforce — – S. Krishnan- Ashwini Vaishnaw- Rangesh Raghavan Focus should be on developing broad talent and understanding rather t…
S42
https://dig.watch/event/india-ai-impact-summit-2026/nextgen-ai-skills-safety-and-social-value-technical-mastery-aligned-with-ethical-standards — Thank you. So I think most of the important aspect has on you. I think we’ve covered the panel. What I wanted to add is …
S43
AUDA-NEPAD White Paper: Regulation and Responsible Adoption of AI in Africa Towards Achievement of AU Agenda 2063 — There is the challenge of the nature of the AI curriculum to develop. This is because the proposed Artificial Intelligen…
S44
Education, Inclusion, Literacy: Musts for Positive AI Future | IGF 2023 Launch / Award Event #27 — Finally, the analysis highlights the need for academics to propose alternatives to address biases in the digital medium….
S45
Introduction — | Term | EU definition …
S46
WS #205 Contextualising Fairness: AI Governance in Asia — – Nidhi Singh: Moderator – Tejaswita Kharel: Project Officer at the Center for Communication Governance at the National…
S47
Open Forum #30 High Level Review of AI Governance Including the Discussion — These key comments fundamentally shaped the discussion by introducing three critical themes that transformed it from a r…
S48
Why science metters in global AI governance — The discussion revealed surprisingly few fundamental disagreements among speakers, with most conflicts arising around im…
S49
Democratizing AI Building Trustworthy Systems for Everyone — “of course see there would be a number of challenges but i think as i mentioned that one doesn’t need to really control …
S50
Ethics and AI | Part 6 — Even if the Act itself does not make direct reference to “ethics”, it is closely tied to the broader context of ethical …
S51
UNGA/DAY 1/PART 2 — The advancement of AI is outpacing regulation and responsibility, with its control concentrated in a few hands. (UN Secr…
S52
High-Level Session 3: Exploring Transparency and Explainability in AI: An Ethical Imperative — Doreen Bogdan-Martin: Thank you, and good morning again, ladies and gentlemen. I guess, Latifa, picking up as you were a…
S53
AI and international peace and security: Key issues and relevance for Geneva — Title:Expert Consultation Report on AI and Related Technologies in the MilitaryDescription:This report compiles insights…
S54
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — Anne Flanagan: Hello, apologies that I’m not there in person today. I’m in transit at the moment, hence my picture on yo…
S55
How AI Is Transforming Indias Workforce for Global Competitivene — The conversation highlighted urgent needs for educational reform. Aurora emphasised that AI education cannot be confined…
S56
Policymaker’s Guide to International AI Safety Coordination — This comment crystallizes the fundamental tension at the heart of AI governance – the misalignment between market incent…
S57
Comprehensive Report: China’s AI Plus Economy Initiative – A Strategic Discussion on Artificial Intelligence Development and Implementation — Yeah, I think I just want to add some echo to Professor Gong’s comments. I think it’s not necessarily a negative effect,…
S58
DC-IoT Progressing Global Good Practice for the Internet of Things | IGF 2023 — Summary: The analysis of IoT security policies across different countries revealed some significant findings. Firstly, t…
S59
WSIS High-Level Dialogue: Multistakeholder Partnerships Driving Digital Transformation — Lastly, the analysis illuminates the need for legislation orientated toward ensuring the security and privacy of both so…
S60
Table of Contents — Security doctrine is often understood to refer mainly to administrative security, personnel security, and physical secur…
S61
LinkedIn unveils AI-driven features to enhance job hunting and recruitment — LinkedIn isusingAI to streamline the job hunting process, aiming to alleviate the task of job searching for its users. T…
S62
How AI Drives Innovation and Economic Growth — Kremer argues that while there are forces that may widen gaps, AI has significant potential to narrow development dispar…
S63
AI for Safer Workplaces & Smarter Industries Transforming Risk into Real-Time Intelligence — Artificial intelligence | Social and economic development | Capacity development Speaker 4 envisions AI as a tool to qu…
S64
Day 0 Event #183 What Mature Organizations Do Differently for AI Success — Abdullah Alshamrani: Thank you, doctor. So, hopefully, you’ve learned, the foundational AI techniques, which are not…
S65
NextGen AI Skills Safety and Social Value – technical mastery aligned with ethical standards — Because if we don’t recognize the deficiencies and start to regard AI as an oracle that always tells us the truth, we ar…
S66
Optimism for AI – Leading with empathy — Online education | Capacity development | Future of work Will.i.am believes these three qualities are essential for suc…
S67
AI-Powered Chips and Skills Shaping Indias Next-Gen Workforce — A critical million-person talent gap exists across the semiconductor ecosystem, spanning from field service engineers to…
S68
AI Transformation in Practice_ Insights from India’s Consulting Leaders — Future workforce needs different skills including critical thinking, judgment capabilities, and empathy when working wit…
S69
Knowledge in the Age of AI: World Economic Forum Town Hall Discussion — Critical thinking as essential human skill
S70
AI (and) education: Convergences between Chinese and European pedagogical practices — This comment was insightful because it challenged one of the most fundamental structural assumptions of higher education…
S71
Telecommunications infrastructure — Network operators increasingly rely on AI for a wide range of tasks, fromnetwork planning(e.g. using algorithms to ident…
S72
Trusted Connections_ Ethical AI in Telecom & 6G Networks — Artificial intelligence and telecommunications complement each other to form the backbone for the intelligence era. Tele…
S73
Building Indias Digital and Industrial Future with AI — Thank you, Devashish and GSMA for this particular session. It’s a session of particular interest to me as a user in the …
S74
Discussion Report: AI as Foundational Infrastructure – A Conversation Between Laurence Fink and Satya Nadella — Example of rural Indian farmer using early GPT models to reason over farm subsidies in local language and complete forms…
S75
WSIS Action Lines C4 and C7:E-employment: Emerging technologies in the world of work: Addressing challenges through digital skills — Shekhar emphasised that this transformation necessitates three critical strategies for effective response. First, organi…
S76
Scaling Innovation Building a Robust AI Startup Ecosystem — -Collaborative Ecosystem Building: The event highlighted partnerships between STPI, National Productivity Council, and o…
S77
Artificial intelligence (AI) – UN Security Council — Furthermore, the discussions underscored the importance of establishing frameworks and infrastructures that support dist…
S78
Setting the Rules_ Global AI Standards for Growth and Governance — And before we go to Rebecca, just from an India perspective, PM Modiji talked about Manav yesterday and the AI vision. T…
S79
Scaling AI Beyond Pilots: A World Economic Forum Panel Discussion — Roy Jakobs argues that the healthcare industry must establish self-regulation standards for AI implementation since regu…
S80
AI promises, ethics, and human rights: Time to open Pandora’s box — Bias, discrimination, and fairness: Are biases being propagated with data sets used to train algorithms? How transparent…
S81
AI Governance Dialogue: Steering the future of AI — Infrastructure | Development | Legal and regulatory Technical Standards and Implementation
S82
Democratizing AI Building Trustworthy Systems for Everyone — “of course see there would be a number of challenges but i think as i mentioned that one doesn’t need to really control …
S83
Powering AI Global Leaders Session AI Impact Summit India — “And what that really means is the technology continues to accelerate.”[14]. “going to become even faster and faster.”[1…
S84
AI and the future of digital global supply chains (UNCTAD) — There is a skills gap in these countries
S85
AI is transforming businesses and industries — I am so excited because next week OpenAI is launchingGPT-4– the next-generation large language model! It is going to be …
S86
Sticking with Start-ups / DAVOS 2025 — Bhatnagar explains how AI is transforming content creation and enabling new business models. He highlights the reduced c…
S87
Powering the Technology Revolution / Davos 2025 — Dan Murphy: ♫ ♫ Welcome to Red Bee Media’s Live Remote Broadcasting Service. I’m from CNBC, I’m CNBC’s Middle E…
S88
The Global Economic Outlook — Panelists emphasized the need to rebuild optimism and trust among populations feeling economically insecure. They discus…
S89
Can (generative) AI be compatible with Data Protection? | IGF 2023 #24 — Armando José Manzueta-Peña:Well, thank you, Luca, for the presentation. I’m more than thrilled to be present here and to…
S90
AI as critical infrastructure for continuity in public services — “I believe that there is perhaps awareness challenge as well as the capacity challenge, because I think that this whole …
S91
Open Internet Inclusive AI Unlocking Innovation for All — With decades of experience across entrepreneurship, investing, and global technology leadership, Rajan has played a pivo…
S92
https://dig.watch/event/india-ai-impact-summit-2026/welfare-for-all-ensuring-equitable-ai-in-the-worlds-democracies — And then we are coupling that with investments in skilling. So we have made some big -number commitments around how we a…
S93
Building the Future STPI Global Partnerships & Startup Felicitation 2026 — STPI operates 70 centers across India with 62 in tier 2/3 cities and 24 domain-specific centers of entrepreneurship prov…
S94
AI-driven Cyber Defense: Empowering Developing Nations | IGF 2023 — He introduces a panel of experts from different fields
S95
How the Global South Is Accelerating AI Adoption_ Finance Sector Insights — -Ashutosh Sharma- Investor in India’s fintech ecosystem, described as one of the leading deployers of finance in fintech…
S96
Launch of the eTrade Readiness Assessment of Peru (UNCTAD) — Another point raised by the speakers is the role of fintech in enhancing financial inclusion and trust in digital transa…
S97
DC-DH: Health Digital Health & Selfcare – Can we replace Doctors in PHCs — Debbie Rogers: I definitely am a proponent of bringing technology into the mix to relieve some of the burden on the he…
S98
Driving Indias AI Future Growth Innovation and Impact — And for this, I’m delighted to welcome two very eminent leaders who are instrumental in shaping the journey, both from p…
S99
Main Topic 3 –  Identification of AI generated content — Despite the difficulties posed by the enforcement of such regulations, the inflexibility of legislation regarding their …
S100
Experts urge broader values in AI development — Since the launch of ChatGPT in late 2023, the private sectorhas led AI innovation. Major players like Microsoft, Google,…
S101
AI and Human Connection: Navigating Trust and Reality in a Fragmented World — He advocates for always validating everything AI produces and encourages experimental use of AI technology to understand…
S102
Artificial General Intelligence and the Future of Responsible Governance — He emphasized that investing in education and critical thinking is as important as investing in computing power, but the…
S103
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Panel Discussion Moderator Sidharth Madaan — Both speakers stress the critical importance of ongoing skill development and adaptation to new technologies, though the…
S104
Keynote-Rishad Premji — From experimentation to adoption and from pilots to scaled impact. This shift matters and it matters tremendously becaus…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Dr. Sarabjot Singh Anand
3 arguments, 158 words per minute, 892 words, 336 seconds
Argument 1
Critical thinking and risk‑taking; recognizing AI limitations
EXPLANATION
Dr. Sarabjot emphasizes that next‑gen AI talent must be strong critical thinkers who can question AI outputs and take calculated risks. Recognizing AI’s imperfections prevents over‑reliance on it as an infallible oracle.
EVIDENCE
He explains that there are two camps – those who generate AI and those who use it – and stresses the need for critical thinking because people tend to outsource their thinking to AI, which is problematic. He notes that AI is not perfect, has deficiencies, and therefore must be questioned, and that risk-taking and a foundational understanding of AI capabilities are essential [40-46].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S1 notes that Dr. Sarabjot stresses the need for strong critical thinking, risk‑taking and questioning AI outputs because AI is imperfect and should not be treated as an infallible oracle.
MAJOR DISCUSSION POINT
Critical thinking and risk‑taking; recognizing AI limitations
DISAGREED WITH
Professor Dr. Alok Pandey, Vikash Srivastava, Kunal Gupta
Argument 2
Evaluate talent on problem‑solving, self‑directed learning, and foundational strength
EXPLANATION
He describes the criteria used to assess AI talent, focusing on problem‑solving ability, self‑initiative in learning, and a solid foundational knowledge base. These factors are seen as essential for producing capable AI professionals.
EVIDENCE
He states that talent is assessed on problem-solving skills, the eagerness to learn independently, and the strength of foundational knowledge, citing examples from the Sabudh Foundation’s experience with students struggling to program neural networks and later LLMs [103-110].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S1 describes his assessment framework that prioritises problem‑solving ability, self‑directed learning and a solid foundational knowledge base for AI talent.
MAJOR DISCUSSION POINT
Evaluate talent on problem‑solving, self‑directed learning, and foundational strength
Argument 3
Startup‑oriented AI solutions require understanding customer problems beyond algorithms
EXPLANATION
Dr. Sarabjot argues that AI solutions for startups must start with a deep understanding of the customer’s problem rather than focusing solely on algorithms. This customer‑centric approach leads to more effective and market‑relevant AI products.
EVIDENCE
He explains that at TATRAS they work with US startups, emphasizing that understanding the problem from the customer’s perspective is crucial, and that successful solutions solve the problem regardless of the technology used, distinguishing their training from generic library-focused skilling [106-112].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S1 reports his view that AI solutions for startups must start with a deep understanding of the customer’s problem rather than focusing solely on algorithms, emphasizing a customer‑centric approach.
MAJOR DISCUSSION POINT
Startup‑oriented AI solutions require understanding customer problems beyond algorithms
AGREED WITH
Kunal Gupta, Vikash Srivastava, Professor Dr. Alok Pandey
Dr. Devinder Singh
3 arguments, 155 words per minute, 661 words, 255 seconds
Argument 1
Strong AI expertise, real‑world problem solving, adaptability, regulatory awareness
EXPLANATION
He outlines the qualities needed for next‑gen AI talent: deep AI expertise, ability to solve real‑world problems, adaptability to new technologies, research capability, and awareness of regulations governing AI.
EVIDENCE
He lists that a next-gen AI professional should have strong AI expertise, solve real-world problems, adapt to new technologies, conduct research, work across sectors, and be aware of regulations [49-55].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S1 summarises Dr. Devinder Singh’s outlined qualities for next‑gen AI professionals: deep AI expertise, ability to solve real‑world problems, adaptability to new technologies, research capability and awareness of AI regulations.
MAJOR DISCUSSION POINT
Strong AI expertise, real‑world problem solving, adaptability, regulatory awareness
Argument 2
6G networks will embed AI in every component; engineers must master ML and new AI standards
EXPLANATION
Dr. Devinder explains that future 6G telecom networks will have AI built into every component, requiring engineers to acquire machine‑learning skills and understand emerging AI standards.
EVIDENCE
He describes how 5G treats AI as an add-on, whereas 6G will have AI in-built, with self-learning components, distributed edge intelligence, and the need for engineers to know ML and follow AI standards that are being finalized [124-146].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S4 discusses distributed decision‑making in 6G and the need for engineers to master ML; S19 and S20 highlight that AI will be embedded in all 6G components and will reshape network traffic; S21 points out security challenges when AI runs on handsets.
MAJOR DISCUSSION POINT
6G networks will embed AI in every component; engineers must master ML and new AI standards
AGREED WITH
Professor Dr. Jawar Singh, Sh. Subodh Sachan, Professor Dr. Alok Pandey
Argument 3
Fairness and bias indices; robustness standards for AI systems; role of regulators
EXPLANATION
He presents a quantitative approach to measuring AI bias and fairness, proposing indices that range from 0 to 1, and stresses that regulators should set minimum fairness thresholds and robustness standards.
EVIDENCE
He details the creation of a bias index built from multiple metrics, their aggregation into a fairness index (0-1 scale), and notes that different applications tolerate different bias levels; he also mentions a robustness standard for consistent results, suggesting regulators can mandate minimum levels [320-328].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S1 presents a bias index, a composite fairness index (0‑1 scale) and a robustness standard, and recommends that regulators set minimum thresholds for fairness and robustness.
MAJOR DISCUSSION POINT
Fairness and bias indices; robustness standards for AI systems; role of regulators
AGREED WITH
Professor Dr. Alok Pandey, Vikash Srivastava, Sh. Subodh Sachan
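The robustness standard he refers to, which checks for consistent results in different situations, might be framed as below. This is a minimal sketch under the assumption that "consistency" means a classifier's decision should not flip under small input perturbations; the `model` function is a hypothetical stand-in, not any real system or the actual TEC test.

```python
import random

# Illustrative sketch only: score a classifier by the fraction of small
# random perturbations that leave its decision unchanged. `model` below is
# a hypothetical threshold classifier used purely for demonstration.

def model(features):
    return 1 if sum(features) > 1.0 else 0

def robustness_score(model, inputs, noise=0.05, trials=20, seed=0):
    """Fraction of perturbed inputs whose decision matches the unperturbed one."""
    rng = random.Random(seed)
    consistent = total = 0
    for x in inputs:
        baseline = model(x)
        for _ in range(trials):
            perturbed = [v + rng.uniform(-noise, noise) for v in x]
            consistent += int(model(perturbed) == baseline)
            total += 1
    return consistent / total

# Inputs far from the decision boundary stay consistent under this noise.
print(robustness_score(model, [[0.2, 0.3], [0.9, 0.8]]))  # 1.0
```

A regulator could mandate a minimum score in the same way as a fairness floor, which is the role for regulators the argument describes.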
Professor Dr. Jawar Singh
4 arguments, 141 words per minute, 539 words, 227 seconds
Argument 1
Deep understanding of algorithms together with hardware mapping and security
EXPLANATION
Prof. Jawar stresses that next‑gen AI talent must not only master algorithms but also understand how those algorithms map onto hardware and ensure hardware security, as AI can be weaponised.
EVIDENCE
He states that AI professionals should know algorithms, how they run on hardware, and that hardware security is critical to prevent weaponisation, highlighting the need for grounding in hardware, neuromorphic computing, and secure implementation [57-60][160-164].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S1 stresses the importance of knowing how AI algorithms map onto hardware and ensuring hardware security; S23 adds context on brain‑inspired hardware efficiency; S4 notes the high power consumption of AI models, underscoring the need for efficient hardware mapping.
MAJOR DISCUSSION POINT
Deep understanding of algorithms together with hardware mapping and security
DISAGREED WITH
Professor Dr. Alok Pandey, Dr. Sarabjot Singh Anand
Argument 2
Centrally funded technical institutes already enjoy curriculum flexibility
EXPLANATION
He points out that centrally funded technical institutions (CFTIs) are not bound by the bureaucratic curriculum approval process and can introduce new courses quickly.
EVIDENCE
He explains that CFTIs can start a new course from the next semester without restrictions, indicating they have curriculum autonomy [279-283].
MAJOR DISCUSSION POINT
Centrally funded technical institutes already enjoy curriculum flexibility
AGREED WITH
Sh. Subodh Sachan, Professor Dr. Alok Pandey
Argument 3
Neuromorphic/brain‑inspired computing and hardware efficiency are crucial for future AI models
EXPLANATION
Prof. Jawar highlights the importance of energy‑efficient, brain‑inspired hardware (neuromorphic computing) to bridge the gap between human brain power consumption and current AI processors.
EVIDENCE
He compares a basic NVIDIA processor (500-700 W) with the human brain (≈20 W), notes the large efficiency gap, and describes ongoing research in neuromorphic computing to map algorithms efficiently onto hardware, also mentioning hardware security concerns [153-164].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S23 details neuromorphic computing and topographical sparse mapping as approaches to achieve energy‑efficient AI, directly supporting his argument.
MAJOR DISCUSSION POINT
Neuromorphic/brain‑inspired computing and hardware efficiency are crucial for future AI models
AGREED WITH
Dr. Devinder Singh, Sh. Subodh Sachan, Professor Dr. Alok Pandey
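The efficiency gap behind this argument follows directly from the power figures quoted in the session; a back-of-the-envelope check, using only those numbers:

```python
# Back-of-the-envelope ratio behind the neuromorphic-computing argument,
# using only the power figures quoted in the session.
gpu_watts_low, gpu_watts_high = 500, 700   # a basic current GPU, as quoted
brain_watts = 20                           # approximate human brain draw
low = gpu_watts_low / brain_watts
high = gpu_watts_high / brain_watts
print(f"GPU draws {low:.0f}-{high:.0f}x the power of the brain")  # 25-35x
```

That 25-35x gap is the motivation he gives for brain-inspired, energy-efficient hardware research.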
Argument 4
Hardware security as a critical layer to prevent weaponisation of AI
EXPLANATION
He argues that securing the hardware layer is essential because AI can be weaponised if hardware implementations are not trustworthy.
EVIDENCE
He explicitly mentions that hardware security is crucial because AI can be weaponised and used for neutralisation, urging a focus on secure, trusted hardware implementations [160-164].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S18 warns about AI‑driven autonomous weapons and stresses the need for secure hardware to prevent weaponisation; S1 also highlights hardware security as essential for trusted AI implementations.
MAJOR DISCUSSION POINT
Hardware security as a critical layer to prevent weaponisation of AI
Professor Dr. Alok Pandey
4 arguments, 172 words per minute, 891 words, 310 seconds
Argument 1
T‑shaped profile: deep domain expertise, AI fluency, red‑team/containment skills
EXPLANATION
Prof. Alok describes the ideal next‑gen AI professional as T‑shaped: deep expertise in a specific domain, fluency across AI tools and technologies, and the ability to conduct red‑team testing and containment.
EVIDENCE
He states that the next-gen AI talent should be deep domain specialists, fluent in AI software/hardware, and capable of red-team and containment activities [63-67].
MAJOR DISCUSSION POINT
T‑shaped profile: deep domain expertise, AI fluency, red‑team/containment skills
AGREED WITH
Dr. Sarabjot Singh Anand, Kunal Gupta, Vikash Srivastava
DISAGREED WITH
Dr. Sarabjot Singh Anand, Vikash Srivastava, Kunal Gupta
Argument 2
Need for large‑scale faculty development, industry‑university MOUs, and funding support
EXPLANATION
He argues that scaling AI education requires expanding faculty numbers, establishing industry‑university collaborations, and securing both government and industry funding.
EVIDENCE
He mentions the need for large-scale faculty development, MOUs with Western countries, and funding from government and industry to bridge the talent gap [170-200].
MAJOR DISCUSSION POINT
Need for large‑scale faculty development, industry‑university MOUs, and funding support
Argument 3
De‑bureaucratise curricula; grant autonomy to institutions of eminence for rapid updates
EXPLANATION
Prof. Alok calls for reducing bureaucratic hurdles in curriculum design, allowing institutions of eminence to create and update courses swiftly to keep pace with AI advances.
EVIDENCE
He describes the concept of ‘institution of eminence’, the need for high curriculum velocity, and the inability to command faculty to teach specific courses due to rapid AI changes, urging de-bureaucratisation [237-244].
MAJOR DISCUSSION POINT
De‑bureaucratise curricula; grant autonomy to institutions of eminence for rapid updates
AGREED WITH
Sh. Subodh Sachan, Professor Dr. Jawar Singh
DISAGREED WITH
Kunal Gupta
Argument 4
Importance of red‑team testing, containment, and safety in AI product development
EXPLANATION
He stresses that AI products must undergo red‑team testing and have containment mechanisms to ensure safety, even suggesting that a technology should be killed if it behaves undesirably.
EVIDENCE
He notes that every young AI user must understand red-team and containment, and that one should ‘kill’ technology if it does not work in their favor, highlighting safety concerns [186-188].
MAJOR DISCUSSION POINT
Importance of red‑team testing, containment, and safety in AI product development
AGREED WITH
Dr. Devinder Singh, Vikash Srivastava, Sh. Subodh Sachan
Kunal Gupta
4 arguments · 181 words per minute · 1228 words · 406 seconds
Argument 1
Emphasis on application focus and clear problem definition, sector‑specific skills
EXPLANATION
Kunal argues that the biggest skill gap lies in applying AI, specifically in defining problems clearly, and that skill requirements differ across sectors.
EVIDENCE
He states that the biggest gap is application, that defining the problem accounts for 50% of the solution, and that each sector (healthcare, law, agriculture) has specific gaps, citing examples like hydroponics and crop insurance [202-214].
MAJOR DISCUSSION POINT
Emphasis on application focus and clear problem definition, sector‑specific skills
AGREED WITH
Dr. Sarabjot Singh Anand, Vikash Srivastava, Professor Dr. Alok Pandey
Argument 2
Identify gaps in application skills; use AI‑driven gap analysis and personalized learning paths
EXPLANATION
He describes using AI to assess individual skill gaps, generate recommendations, and provide adaptive learning pathways to close those gaps.
EVIDENCE
He explains that their platform uses AI to assess participant profiles, identify missing skills, and recommend adaptive learning, thereby improving employability [214-224].
MAJOR DISCUSSION POINT
Identify gaps in application skills; use AI‑driven gap analysis and personalized learning paths
AGREED WITH
Vikash Srivastava, Sh. Subodh Sachan
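The gap-analysis workflow described here can be pictured as comparing a participant's profile against a target role and turning the difference into a recommended learning path. The sketch below is purely illustrative (the panel did not describe the platform's actual implementation); the role names and skill labels are hypothetical.

```python
# Illustrative sketch only -- not the platform's actual implementation.
# Hypothetical role requirements, invented for this example.
TARGET_PROFILES = {
    "ml_engineer": {"python", "statistics", "model_deployment", "data_pipelines"},
    "ai_product_analyst": {"problem_definition", "prompting", "evaluation", "ethics"},
}

def skill_gap(candidate_skills, role):
    """Return the skills the candidate lacks for the given target role."""
    required = TARGET_PROFILES[role]
    return sorted(required - set(candidate_skills))

def learning_path(candidate_skills, role):
    """Turn the gap into a simple recommended sequence of learning steps."""
    # A real system might rank by prerequisites or market demand;
    # here the gap is simply returned in alphabetical order.
    return [{"skill": s, "status": "recommended"}
            for s in skill_gap(candidate_skills, role)]

gaps = skill_gap({"python", "statistics"}, "ml_engineer")
print(gaps)  # ['data_pipelines', 'model_deployment']
```

An adaptive system would then re-run this comparison as the learner completes modules, shrinking the gap set over time.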
Argument 3
Current curricula are outdated and misaligned with industry; need faster, industry‑responsive revisions
EXPLANATION
Kunal points out that Indian curricula take years to update, rendering them obsolete by the time they are implemented, and calls for rapid, industry‑aligned curriculum reforms.
EVIDENCE
He notes that syllabus updates require multiple committees and take five to seven years, causing a mismatch with industry needs, especially given the rapid AI growth of the last six months [212-218].
MAJOR DISCUSSION POINT
Current curricula are outdated and misaligned with industry; need faster, industry‑responsive revisions
Argument 4
Sector‑specific AI use‑cases (e.g., hydroponics, crop insurance) illustrate need for domain‑focused talent
EXPLANATION
He provides concrete examples where AI application requires domain knowledge, such as hydroponic farming and crop insurance, underscoring the need for talent that understands specific industry contexts.
EVIDENCE
He cites hydroponics as a sector-specific AI application that can produce high yields without pesticides, and crop insurance that relies on satellite imagery and AI for assessment, illustrating domain-focused talent needs [210-214].
MAJOR DISCUSSION POINT
Sector‑specific AI use‑cases (e.g., hydroponics, crop insurance) illustrate need for domain‑focused talent
Vikash Srivastava
4 arguments · 132 words per minute · 262 words · 118 seconds
Argument 1
Blend of technical mastery, ethical judgment, and real‑world problem‑solving ability
EXPLANATION
Vikash proposes that next‑gen AI talent should combine strong technical skills, ethical decision‑making, and the ability to solve real‑world problems.
EVIDENCE
He lists three important components: technical mastery, ethical judgment, and real-world problem-solving capabilities [83-88].
MAJOR DISCUSSION POINT
Blend of technical mastery, ethical judgment, and real‑world problem‑solving ability
AGREED WITH
Professor Dr. Alok Pandey, Dr. Devinder Singh, Sh. Subodh Sachan
DISAGREED WITH
Dr. Sarabjot Singh Anand, Professor Dr. Alok Pandey, Kunal Gupta
Argument 2
Move beyond theory: applied problem solving, production exposure, deployment experience
EXPLANATION
He argues that traditional AI training focuses on theory, whereas industry‑ready talent needs hands‑on problem solving, exposure to production environments, and experience deploying models at scale.
EVIDENCE
He contrasts conventional training (theory, mathematics) with three additional layers: applied problem solving with real data, production exposure (moving models from notebooks to secure systems), and deployment scenarios [303-311].
MAJOR DISCUSSION POINT
Move beyond theory: applied problem solving, production exposure, deployment experience
AGREED WITH
Dr. Sarabjot Singh Anand, Kunal Gupta, Professor Dr. Alok Pandey
Argument 3
Production‑level exposure: moving models from notebooks to secure, scalable systems
EXPLANATION
He highlights the necessity for AI practitioners to understand how to transition models from development notebooks to secure, scalable production environments.
EVIDENCE
He specifically mentions the need to know how a model moves from a notebook environment to a real, scalable, secure system as part of production exposure [308-311].
MAJOR DISCUSSION POINT
Production‑level exposure: moving models from notebooks to secure, scalable systems
Argument 4
Adaptive learning tools driven by AI assess participant profiles and tailor curricula in real time
EXPLANATION
Vikash notes that AI can be used to evaluate learners’ profiles, identify gaps, and recommend adaptive learning paths, thereby enhancing employability outcomes.
EVIDENCE
He states that AI-based tools assess skill gaps based on participant profiles and recommend adaptive learning, improving employability [315].
MAJOR DISCUSSION POINT
Adaptive learning tools driven by AI assess participant profiles and tailor curricula in real time
AGREED WITH
Kunal Gupta, Sh. Subodh Sachan
Sh. Subodh Sachan
5 arguments · 177 words per minute · 3433 words · 1160 seconds
Argument 1
Infrastructure‑level AI mindset; curiosity and creativity as drivers
EXPLANATION
He frames next‑gen AI as an infrastructure that multiplies human intelligence, emphasizing that curiosity and creativity are essential to harness this infrastructure effectively.
EVIDENCE
He describes AI as infrastructure-level intelligence that multiplies reasoning, creativity, and judgment, and links curiosity and creativity to the ability to understand customer problems and create impactful solutions [90-94].
MAJOR DISCUSSION POINT
Infrastructure‑level AI mindset; curiosity and creativity as drivers
AGREED WITH
Dr. Devinder Singh, Professor Dr. Jawar Singh, Professor Dr. Alok Pandey
Argument 2
STPI “Skill‑Up” program, regional training hubs, partnership network of 18+ trainers
EXPLANATION
He outlines the STPI initiative that establishes regional AI training hubs and collaborates with over 18 training partners to upskill the workforce.
EVIDENCE
He announces the launch of multiple regional hubs for training, mentions a network of 18 training partners across India, and describes the STPI Skill-Up program as the vehicle for skilling up [9-12].
MAJOR DISCUSSION POINT
STPI “Skill‑Up” program, regional training hubs, partnership network of 18+ trainers
AGREED WITH
Vikash Srivastava, Kunal Gupta
Argument 3
National Education Policy (NEP) provides greater autonomy, supporting faster curriculum evolution
EXPLANATION
He notes that the NEP has already begun granting institutions more autonomy, which should accelerate curriculum updates to match AI advancements.
EVIDENCE
He references the NEP as having already given more autonomy to institutions, facilitating faster curriculum evolution, and asks whether global experiences are being reflected in India [252-258].
MAJOR DISCUSSION POINT
National Education Policy (NEP) provides greater autonomy, supporting faster curriculum evolution
AGREED WITH
Professor Dr. Alok Pandey, Professor Dr. Jawar Singh
Argument 4
Need for standards to ensure AI fairness and robustness across applications
EXPLANATION
He highlights that AI systems must be evaluated for bias, robustness, and fairness, and that standards are required to guide developers, regulators, and users.
EVIDENCE
He mentions the emerging gaps in cyber-security, bias, robustness, and fairness, and calls for standards to fill these talent gaps and guide ecosystem development [166-168].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S1 outlines fairness and robustness indices and calls for standards to guide developers and regulators, aligning with his call for standardized evaluation of AI systems.
MAJOR DISCUSSION POINT
Need for standards to ensure AI fairness and robustness across applications
AGREED WITH
Professor Dr. Alok Pandey, Dr. Devinder Singh, Vikash Srivastava
Argument 5
STPI’s “Skill‑Up” initiative creates regional hubs and collaborates with multiple training partners
EXPLANATION
He reiterates the STPI Skill‑Up effort, emphasizing its role in building AI talent through regional hubs and a broad partner ecosystem.
EVIDENCE
He again references the STPI Skill-Up program, regional hubs, and the network of 18 training partners as a cornerstone of AI skilling in India [9-12].
MAJOR DISCUSSION POINT
STPI’s “Skill‑Up” initiative creates regional hubs and collaborates with multiple training partners
Audience
1 argument · 59 words per minute · 98 words · 98 seconds
Argument 1
Audience query on practical AI tools for local governance and CSR funding
EXPLANATION
An audience member asks which AI tools could be applied in three sectors for local governance and whether private companies or CSR funds could support such initiatives.
EVIDENCE
The audience member, Vikram Tripathi, asks for guidance on three sectors where AI tools can be used in district panchayat work and whether private companies or CSR funds can support these efforts [319].
MAJOR DISCUSSION POINT
Audience query on practical AI tools for local governance and CSR funding
Agreements
Agreement Points
AI talent must prioritize problem‑solving and domain/customer understanding over pure algorithmic focus
Speakers: Dr. Sarabjot Singh Anand, Kunal Gupta, Vikash Srivastava, Professor Dr. Alok Pandey
Startup‑oriented AI solutions require understanding customer problems beyond algorithms
Emphasis on application focus and clear problem definition, sector‑specific skills
Move beyond theory: applied problem solving, production exposure, deployment experience
T‑shaped profile: deep domain expertise, AI fluency, red‑team/containment skills
All four speakers stress that next-gen AI professionals should first understand the real problem and the domain context before applying algorithms, emphasizing applied problem-solving, sector-specific knowledge and a T-shaped skill set [106-112][202-214][303-311][63-67].
POLICY CONTEXT (KNOWLEDGE BASE)
This emphasis mirrors calls for broader AI talent development that prioritize critical thinking and real-world problem solving over narrow technical skills, as highlighted in industry discussions on AI talent development [S41] and panel remarks stressing technical mastery combined with ethical judgment and problem-solving capabilities [S42].
Curricula and training programmes must become agile, with institutional autonomy to update rapidly
Speakers: Sh. Subodh Sachan, Professor Dr. Alok Pandey, Professor Dr. Jawar Singh
National Education Policy (NEP) provides greater autonomy, supporting faster curriculum evolution
De‑bureaucratise curricula; grant autonomy to institutions of eminence for rapid updates
Centrally funded technical institutes already enjoy curriculum flexibility
Subodh highlights NEP-driven autonomy, Alok calls for de-bureaucratised curricula, and Jawar notes that CFTIs can introduce new courses without delay, showing a shared view that curriculum flexibility is essential [252-258][237-244][279-283].
POLICY CONTEXT (KNOWLEDGE BASE)
The need for agile, de-bureaucratised curricula is echoed in recent policy analyses urging rapid curriculum redesign and stronger industry-academia collaboration to keep pace with AI advances [S55] and in recommendations for flexible AI education pathways [S41].
Ethical, safety and fairness standards are essential for responsible AI deployment
Speakers: Professor Dr. Alok Pandey, Dr. Devinder Singh, Vikash Srivastava, Sh. Subodh Sachan
Importance of red‑team testing, containment, and safety in AI product development
Fairness and bias indices; robustness standards for AI systems; role of regulators
Blend of technical mastery, ethical judgment, and real‑world problem‑solving ability
Need for standards to ensure AI fairness and robustness across applications
Alok stresses red-team and safety, Devinder proposes bias/fairness indices and regulator roles, Vikash adds ethical judgment to the skill mix, and Subodh calls for standards on fairness and robustness, indicating consensus on ethical safeguards [186-188][320-328][83-88][166-168].
POLICY CONTEXT (KNOWLEDGE BASE)
This aligns with global AI governance frameworks such as the EU Ethics Guidelines for Trustworthy AI and UN calls for universal guardrails on safety, fairness and accountability [S50][S51], as well as academic work highlighting the need to address bias and fairness in educational technologies [S44].
AI is becoming a foundational infrastructure, requiring hardware efficiency, standards and sector‑specific integration (e.g., 6G)
Speakers: Dr. Devinder Singh, Professor Dr. Jawar Singh, Sh. Subodh Sachan, Professor Dr. Alok Pandey
6G networks will embed AI in every component; engineers must master ML and new AI standards
Neuromorphic/brain‑inspired computing and hardware efficiency are crucial for future AI models
Infrastructure‑level AI mindset; curiosity and creativity as drivers
AI as infrastructure of intelligence multiplying our ability
Devinder describes AI-embedded 6G, Jawar highlights energy-efficient neuromorphic hardware, Subodh frames AI as an infrastructure that multiplies intelligence, and Alok refers to AI as an infrastructure-level intelligence, all agreeing on the infrastructural nature of next-gen AI [124-146][153-164][90-94][170-176].
POLICY CONTEXT (KNOWLEDGE BASE)
Policy discussions on AI-powered chips and the importance of hardware efficiency for next-gen AI workloads support this view [S41], while broader infrastructure security considerations stress the need for standards across hardware and software layers [S59].
AI‑driven tools can assess skill gaps and personalize learning, supporting upskilling at scale
Speakers: Vikash Srivastava, Kunal Gupta, Sh. Subodh Sachan
Adaptive learning tools driven by AI assess participant profiles and tailor curricula in real time
Identify gaps in application skills; use AI‑driven gap analysis and personalized learning paths
STPI “Skill‑Up” program, regional training hubs, partnership network of 18+ trainers
Vikash mentions AI-based skill-gap assessment, Kunal describes AI-driven gap analysis and personalized pathways, and Subodh outlines the STPI Skill-Up ecosystem, showing agreement on leveraging AI for scalable upskilling [315][214-224][9-12].
POLICY CONTEXT (KNOWLEDGE BASE)
Reports on AI-enabled workforce development describe AI systems that evaluate individual backgrounds and tailor skill-building pathways, underscoring their role in large-scale upskilling initiatives [S63][S61].
Similar Viewpoints
All emphasize that AI expertise must be coupled with strong problem‑definition and domain knowledge, moving beyond pure technical theory to deliver real‑world solutions [106-112][202-214][303-311][63-67].
Speakers: Dr. Sarabjot Singh Anand, Kunal Gupta, Vikash Srivastava, Professor Dr. Alok Pandey
Startup‑oriented AI solutions require understanding customer problems beyond algorithms
Emphasis on application focus and clear problem definition, sector‑specific skills
Move beyond theory: applied problem solving, production exposure, deployment experience
T‑shaped profile: deep domain expertise, AI fluency, red‑team/containment skills
All agree that institutional autonomy and reduced bureaucracy are needed for curricula to keep pace with rapid AI advances [252-258][237-244][279-283].
Speakers: Sh. Subodh Sachan, Professor Dr. Alok Pandey, Professor Dr. Jawar Singh
National Education Policy (NEP) provides greater autonomy, supporting faster curriculum evolution
De‑bureaucratise curricula; grant autonomy to institutions of eminence for rapid updates
Centrally funded technical institutes already enjoy curriculum flexibility
Consensus that ethical safeguards, fairness metrics and formal standards are critical for trustworthy AI deployment [186-188][320-328][83-88][166-168].
Speakers: Professor Dr. Alok Pandey, Dr. Devinder Singh, Vikash Srivastava, Sh. Subodh Sachan
Importance of red‑team testing, containment, and safety in AI product development
Fairness and bias indices; robustness standards for AI systems; role of regulators
Blend of technical mastery, ethical judgment, and real‑world problem‑solving ability
Need for standards to ensure AI fairness and robustness across applications
All view AI as a foundational infrastructure that requires efficient hardware, standards and a mindset of curiosity/creativity to harness its potential [124-146][153-164][90-94][170-176].
Speakers: Dr. Devinder Singh, Professor Dr. Jawar Singh, Sh. Subodh Sachan, Professor Dr. Alok Pandey
6G networks will embed AI in every component; engineers must master ML and new AI standards
Neuromorphic/brain‑inspired computing and hardware efficiency are crucial for future AI models
Infrastructure‑level AI mindset; curiosity and creativity as drivers
AI as infrastructure of intelligence multiplying our ability
Agreement that AI‑enabled assessment and regional training networks can efficiently close skill gaps at scale [315][214-224][9-12].
Speakers: Vikash Srivastava, Kunal Gupta, Sh. Subodh Sachan
Adaptive learning tools driven by AI assess participant profiles and tailor curricula in real time
Identify gaps in application skills; use AI‑driven gap analysis and personalized learning paths
STPI “Skill‑Up” program, regional training hubs, partnership network of 18+ trainers
Unexpected Consensus
Industry‑driven call for faster curriculum updates aligns with academic push for de‑bureaucratised curricula
Speakers: Kunal Gupta, Professor Dr. Alok Pandey
Current curricula are outdated and misaligned with industry; need faster, industry‑responsive revisions
De‑bureaucratise curricula; grant autonomy to institutions of eminence for rapid updates
Kunal, representing a private skilling platform, stresses that curricula lag behind industry needs, while Alok, from academia, proposes de-bureaucratising curricula to allow rapid updates. Their convergence bridges the typical industry-academia divide, highlighting a shared urgency for curriculum agility [212-218][237-244].
POLICY CONTEXT (KNOWLEDGE BASE)
Industry-academia partnership models advocating swift curriculum reforms have been highlighted as critical for integrating AI across disciplines and reducing bureaucratic lag [S55][S41].
Overall Assessment

The panel shows strong consensus on four pillars: (1) problem‑solving and domain‑centric AI skills; (2) the need for agile, autonomous curricula; (3) the imperative of ethical standards, fairness and safety; (4) viewing AI as core infrastructure requiring hardware efficiency and standards; plus a shared belief in AI‑driven skill‑gap assessment tools. These converging views suggest a coordinated path forward involving curriculum reform, industry‑academia collaboration, standards development, and investment in AI‑enabled training ecosystems.

High consensus – most speakers independently arrived at similar conclusions across technical, educational and ethical dimensions, indicating a solid foundation for policy and programmatic action.

Differences
Different Viewpoints
Where the bottleneck lies in curriculum reform and who should drive rapid AI curriculum updates
Speakers: Professor Dr. Alok Pandey, Professor Dr. Jawar Singh, Sh. Subodh Sachan
De‑bureaucratise curricula; grant autonomy to institutions of eminence for rapid updates
Centrally funded technical institutes already enjoy curriculum flexibility and can start new courses without restrictions
NEP already provides greater autonomy, but question remains whether global experiences are being reflected in India
Alok argues that bureaucratic hurdles prevent fast curriculum changes and calls for de-bureaucratisation and autonomy for institutions of eminence [237-244]. Jawar counters that centrally funded institutes already have the freedom to introduce new courses quickly, implying no major bottleneck there [279-283]. Subodh notes that the National Education Policy has already granted more autonomy, asking if international best-practices are being adopted [252-258]. The speakers thus disagree on whether curriculum rigidity is a systemic problem and on which institutions should lead reform.
What core competency should define next‑gen AI talent
Speakers: Dr. Sarabjot Singh Anand, Professor Dr. Alok Pandey, Vikash Srivastava, Kunal Gupta
Critical thinking and risk‑taking; recognizing AI limitations
T‑shaped profile: deep domain expertise, AI fluency, red‑team/containment skills
Blend of technical mastery, ethical judgment, and real‑world problem‑solving ability
Emphasis on clear problem definition and sector‑specific application skills
Sarabjot stresses critical thinking, risk-taking and awareness of AI’s imperfections as essential [40-46]. Alok proposes a T-shaped talent model combining deep domain knowledge, AI fluency and red-team/containment capabilities [63-67]. Vikash highlights the need for technical mastery together with ethical judgment and the ability to solve real-world problems [83-88]. Kunal argues that the biggest gap is the ability to define problems and apply AI in sector-specific contexts [202-214]. Each speaker foregrounds a different priority, revealing disagreement on which skill set should be the cornerstone of AI talent development.
POLICY CONTEXT (KNOWLEDGE BASE)
Panel insights identify three pillars (technical mastery, ethical judgment, and real-world problem-solving) as the defining competencies for future AI professionals [S42].
Relative importance of hardware knowledge versus algorithmic/software focus in AI talent
Speakers: Professor Dr. Jawar Singh, Professor Dr. Alok Pandey, Dr. Sarabjot Singh Anand
Deep understanding of algorithms together with hardware mapping and security
Focus on domain integration, red‑team testing and safety rather than hardware specifics
Emphasis on critical thinking and problem‑solving over hardware considerations
Jawar stresses that next-gen AI professionals must understand how algorithms map onto hardware, energy-efficient neuromorphic computing and hardware security [57-60][153-164]. Alok’s view centres on domain expertise, AI fluency and red-team/containment without explicit hardware emphasis [63-67][186-188]. Sarabjot focuses on critical thinking and risk-taking, not mentioning hardware at all [40-46]. The speakers thus disagree on the priority of hardware knowledge in AI talent development.
POLICY CONTEXT (KNOWLEDGE BASE)
Debates on talent composition reference the growing significance of hardware efficiency and AI-specific chip design alongside traditional algorithmic expertise, as discussed in AI-powered chip strategy documents [S41] and security policy briefs on hardware/software integration [S59].
Approach to establishing AI curriculum standards and updates
Speakers: Professor Dr. Alok Pandey, Kunal Gupta
De‑bureaucratise curricula; grant autonomy to institutions of eminence for rapid updates
Current curricula are outdated and misaligned; need faster, industry‑responsive revisions and policy reforms
Alok calls for reducing bureaucratic constraints and giving institutions of eminence the freedom to swiftly redesign curricula [237-244]. Kunal points out that syllabus revisions take years, making them obsolete, and urges rapid, industry-aligned curriculum reforms [212-218]. Both agree on the need for faster curriculum change but propose different mechanisms: institutional autonomy versus systemic policy overhaul.
POLICY CONTEXT (KNOWLEDGE BASE)
Proposals for AI curriculum frameworks that balance agility with standardisation have been put forward by bodies such as AUDA-NEPAD and national AI strategy reports, emphasizing the need for coordinated standards and rapid updates [S43][S55].
Unexpected Differences
Different security focus: hardware security versus software/red‑team containment
Speakers: Professor Dr. Jawar Singh, Professor Dr. Alok Pandey
Hardware security as a critical layer to prevent weaponisation of AI
Importance of red‑team testing, containment, and safety in AI product development
Jawar highlights the need for secure, trusted hardware implementations to prevent AI weaponisation [160-164], whereas Alok concentrates on software-level safety measures such as red-team testing and containment, even suggesting killing a technology that misbehaves [186-188]. The disagreement is unexpected because both address security but focus on different layers (hardware vs software).
POLICY CONTEXT (KNOWLEDGE BASE)
Policy analyses differentiate between hardware-centric security doctrines and software/red-team containment strategies, highlighting the necessity of addressing both layers in AI system protection [S59][S60].
Introduction of bias and fairness indices by Dr. Devinder Singh, not addressed by other panelists
Speakers: Dr. Devinder Singh, Other panelists
Fairness and bias indices; robustness standards for AI systems; role of regulators
Various other perspectives on talent, curriculum, hardware, etc., without explicit mention of bias/fairness metrics
Devinder proposes quantitative bias and fairness indices and calls for regulatory thresholds [320-328], a focus absent from the other speakers’ contributions, making this a surprising point of divergence within the discussion on AI standards and ethics.
POLICY CONTEXT (KNOWLEDGE BASE)
The call for bias and fairness metrics aligns with ongoing scholarly work on fairness indices in AI education and governance, as documented in fairness-focused workshops and reports [S44][S46][S48].
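Devinder's proposal is for quantitative indices with regulatory thresholds, but the session does not specify any formula. As one common hypothetical example, a demographic parity difference measures the gap in positive-outcome rates between groups; the threshold value below is likewise an assumption for illustration.

```python
# Illustrative sketch of one possible fairness index; the panel did not
# define a formula, so this choice (demographic parity difference) and the
# threshold are assumptions for illustration only.
def demographic_parity_difference(outcomes, groups):
    """Largest gap in positive-outcome rate between any two groups (0 = parity)."""
    counts = {}
    for y, g in zip(outcomes, groups):
        pos, total = counts.get(g, (0, 0))
        counts[g] = (pos + (1 if y == 1 else 0), total + 1)
    rates = [pos / total for pos, total in counts.values()]
    return max(rates) - min(rates)

def passes_fairness_check(outcomes, groups, threshold=0.1):
    """Hypothetical regulatory check: flag systems whose gap exceeds the threshold."""
    return demographic_parity_difference(outcomes, groups) <= threshold

outcomes = [1, 0, 1, 1, 0, 0, 1, 0]
groups   = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_difference(outcomes, groups))  # 0.5
```

A deployed monitoring regime, as the panel's unresolved issues note, would need to compute such indices continuously on live decisions rather than once at audit time.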
Overall Assessment

The panel exhibits moderate disagreement centered on the priorities for AI talent development (critical thinking vs domain expertise vs ethical and problem‑definition skills) and on the mechanisms for curriculum reform (institutional autonomy versus policy‑level overhaul). There is also a clear split on the importance of hardware knowledge and security versus software‑level safety measures. While all participants share the overarching goal of building a robust AI ecosystem in India, the divergent views on which competencies and institutional levers are most critical could slow coordinated action and policy implementation.

Moderate disagreement with implications for fragmented policy approaches; consensus on the need for AI skill development exists, but differing emphases may lead to parallel initiatives rather than a unified national strategy.

Partial Agreements
Both recognise a significant gap in AI education capacity and call for systemic changes—Alok emphasizes expanding faculty, MOUs and funding to scale up education [170-200], while Kunal highlights the need to overhaul curricula quickly to match industry demands [212-218]. They share the goal of strengthening AI education but differ on the primary lever (faculty/institutional capacity vs curriculum policy reform).
Speakers: Professor Dr. Alok Pandey, Kunal Gupta
Need for large‑scale faculty development, industry‑university MOUs, and funding support
Current curricula are outdated and misaligned; need faster, industry‑responsive revisions
Both agree that AI talent must go beyond pure technical knowledge. Sarabjot stresses critical thinking and awareness of AI’s limits [40-46], while Vikash adds that ethical judgment and real‑world problem‑solving are essential alongside technical mastery [83-88]. They share the goal of producing well‑rounded professionals but differ on the emphasis—cognitive/critical skills versus a triad of technical, ethical and applied abilities.
Speakers: Dr. Sarabjot Singh Anand, Vikash Srivastava
Critical thinking, risk‑taking and recognizing AI limitations
Blend of technical mastery, ethical judgment, and real‑world problem‑solving ability
Takeaways
Key takeaways
Next‑gen AI talent must combine deep domain expertise, AI fluency, critical thinking, risk‑taking, ethical judgment and the ability to solve real‑world problems.
A T‑shaped skill profile (deep domain knowledge + broad AI capabilities + red‑team/containment skills) is essential for addressing India’s AI challenges.
Curriculum and training must shift from theory‑heavy, static models to applied problem‑solving, production‑level deployment, and continuous, adaptive learning.
Current curricula, especially in state‑run institutions, are outdated and misaligned with industry needs; rapid, autonomous curriculum updates are required.
Sector‑specific AI adoption (e.g., telecom/6G, neuromorphic hardware, agriculture, law) demands tailored talent pipelines and standards.
Ethics, bias, fairness, robustness and hardware security are integral parts of AI development and must be embedded in training and standards.
STPI’s “Skill‑Up” programme will create regional training hubs, expand the partner network (currently 18+ partners), and coordinate large‑scale skilling initiatives.
AI‑driven skill‑gap analysis tools can personalize learning paths, improve employability, and support industry‑university collaboration.
Collaboration between academia, industry, and government (MOUs, funding, mentorship programmes) is critical to bridge the talent gap.
Resolutions and action items
Launch of multiple STPI regional training hubs under the “Skill‑Up” programme (announced by Sh. Subodh Sachan).
Expansion of the training‑partner ecosystem beyond the current 18 partners, with a call for additional collaborators.
Commitment to de‑bureaucratise curricula by granting greater autonomy to Institutions of Eminence and leveraging NEP provisions.
Proposal for large‑scale faculty development and industry‑university MOUs to accelerate curriculum velocity (raised by Prof. Dr. Alok Pandey).
Adoption of AI‑powered skill‑gap assessment and adaptive learning platforms for personalized upskilling (suggested by Kunal Gupta and Vikash Srivastava).
Encouragement for industry mentors to guide students on passion projects that address social impact (initiative described by Dr. Sarabjot Singh Anand).
Unresolved issues
Specific AI tools and platforms suitable for local governance (e.g., panchayat level) and mechanisms for CSR funding were raised by the audience but not answered.
A detailed implementation roadmap for AI standards in 6G telecom networks, including timelines and responsible bodies, remains unclear.
How to uniformly upgrade faculty capabilities across thousands of state technical institutions, given resource constraints, was not concretely addressed.
Mechanisms for continuous monitoring of bias, fairness and robustness indices in deployed AI systems were mentioned but lack actionable guidelines.
The extent to which existing regulatory frameworks will adapt to rapid AI advances, especially for hardware security, remains open.
Suggested compromises
Balance rapid curriculum updates with quality control by de‑bureaucratising processes while still using NEP’s autonomy framework.
Use AI‑driven skill‑gap analysis to guide learning without excluding candidates; the tool serves as a diagnostic, not a gatekeeper.
Combine human oversight with AI‑driven decision‑making in telecom (engineers supervise AI actions rather than replace them).
Blend academic depth with industry mentorship: students receive foundational knowledge from universities and practical problem‑solving experience from industry partners.
Thought Provoking Comments
AI is not perfect. We need to recognize its deficiencies, question its outputs, and cultivate critical thinking and risk‑taking rather than just learning the technology.
Highlights the essential mindset shift required for responsible AI use, emphasizing human judgment over blind reliance on AI.
Set the tone for the discussion on talent, prompting other speakers to stress critical thinking, problem‑solving, and ethical awareness in AI education and hiring.
Speaker: Dr. Sarabjot Singh Anand
The power gap between GPUs (500‑700 W) and the human brain (≈20 W) shows we need neuromorphic or brain‑inspired computing, and hardware security is crucial because AI can be weaponised.
Introduces a hardware‑centric perspective that many participants had not considered, linking energy efficiency, emerging neuromorphic research, and security.
Shifted the conversation from software‑only talent to a broader ecosystem that includes hardware expertise, leading to later remarks on standards and the need for interdisciplinary training.
Speaker: Professor Dr. Jawar Singh
Next‑gen AI talent should be T‑shaped: deep domain expertise, fluency in AI tools, and ability to perform red‑team testing and containment.
Combines technical depth, cross‑domain breadth, and security testing into a concise talent model, foregrounding safety and governance.
Prompted other panelists to discuss ethical judgment, red‑team practices, and the importance of integrating security into curricula and industry training.
Speaker: Professor Dr. Alok Pandey
Next‑gen AI is an infrastructure of intelligence that multiplies our reasoning, creativity and values, enabling vernacular language interaction and democratizing access—much like TikTok did for content creation.
Frames AI as a societal infrastructure rather than a product, linking technology to inclusion, language diversity, and economic empowerment.
Expanded the dialogue from skill gaps to broader social impact, influencing later comments on curriculum relevance, multilingual support, and the role of AI in public services.
Speaker: Kunal Gupta
When evaluating fresh AI talent we look at problem‑solving ability, self‑directed learning, strong foundations, and the capacity to understand the customer’s problem—not just library knowledge.
Provides a concrete, practice‑oriented assessment framework that moves beyond theoretical knowledge to real‑world applicability.
Guided the discussion toward practical training methods, such as passion projects and mentorship, and reinforced the need for industry‑academia collaboration.
Speaker: Dr. Sarabjot Singh Anand
The next‑gen talent mix must include technical mastery, ethical judgment, and real‑world problem‑solving capabilities; people need to know where AI fits and where it doesn’t.
Distills the talent requirement into three actionable pillars, emphasizing ethical awareness alongside technical skills.
Reinforced earlier points about critical thinking and ethics, and led to deeper conversation about curriculum design that embeds these three dimensions.
Speaker: Vikash Srivastava
The biggest skill gap is the ability to define a problem; many copy trends without understanding the underlying need. The curriculum is bureaucratic and outdated, and needs rapid, sector‑specific updates and multilingual support.
Diagnoses a root cause of talent mismatch—problem definition—and critiques systemic educational inertia, calling for agile, inclusive curricula.
Triggered a series of responses about de‑bureaucratising education, the role of the National Education Policy, and the need for state‑level institutional reforms.
Speaker: Kunal Gupta
In 6G every component will have AI built in; decisions will be distributed to the edge, requiring engineers to know machine learning and to follow new AI standards. The human role will shift to supervision.
Projects a concrete future telecom architecture where AI is integral, highlighting new skill sets and standardisation challenges.
Steered the conversation toward sector‑specific talent needs, prompting further discussion on standards, robustness, and the evolving role of telecom engineers.
Speaker: Dr. Devinder Singh
We can compute a fairness index (0‑1) to quantify bias; different sectors tolerate different bias levels. Similar metrics exist for robustness, and they can guide regulators, developers, and deployers.
Offers a practical metric‑based approach to bias and fairness, linking technical evaluation to policy and regulatory decisions.
Added a measurable dimension to earlier abstract discussions on ethics and bias, leading to acknowledgement of standards availability and encouraging concrete implementation.
Speaker: Dr. Devinder Singh (audience response)
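The panel did not specify a formula for the 0–1 fairness index, but one common choice it could map to is the demographic‑parity ratio: the lowest positive‑outcome rate across groups divided by the highest, where 1.0 means no disparity. The sketch below is purely illustrative (the function name, data, and the 0.8 threshold are assumptions, the last being the widely cited "four‑fifths rule"), not the panel's actual metric:

```python
def fairness_index(predictions, groups, positive=1):
    """Demographic-parity ratio in [0, 1]: min positive-prediction rate
    across groups divided by the max. 1.0 means all groups are treated
    alike. Illustrative only -- the session named no specific formula."""
    rates = {}
    for g in set(groups):
        preds_g = [p for p, grp in zip(predictions, groups) if grp == g]
        rates[g] = sum(1 for p in preds_g if p == positive) / len(preds_g)
    highest = max(rates.values())
    return min(rates.values()) / highest if highest > 0 else 1.0

# Hypothetical example: a model approves 8/10 applicants from group A
# but only 4/10 from group B.
preds = [1] * 8 + [0] * 2 + [1] * 4 + [0] * 6
groups = ["A"] * 10 + ["B"] * 10
print(fairness_index(preds, groups))  # 0.5 -- below a 0.8 tolerance
```

As Dr. Singh noted, different sectors would set different tolerances: a regulator might flag any deployment whose index falls below a sector‑specific threshold, and analogous indices can be defined for robustness.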
Overall Assessment

The discussion was shaped by a handful of pivotal insights that moved it beyond a generic talk on AI talent gaps. Early remarks on critical thinking and the hardware‑energy gap broadened the talent definition to include mindset and interdisciplinary expertise. The T‑shaped model and the three‑pillar framework (technical, ethical, problem‑solving) provided concrete structures that participants repeatedly referenced. Kunal Gupta’s view of AI as societal infrastructure and his critique of curriculum bureaucracy introduced a socio‑economic dimension, prompting calls for rapid, multilingual, and industry‑aligned education reforms. Sector‑specific forecasts, especially the 6G AI‑embedded network, anchored the conversation in concrete future skill requirements. Finally, the introduction of fairness and robustness indices gave the debate a measurable, policy‑oriented anchor. Collectively, these comments redirected the dialogue from abstract skill shortages to a nuanced, multi‑layered roadmap encompassing mindset, interdisciplinary knowledge, ethical safeguards, curriculum agility, and sector‑specific standards.

Follow-up Questions
In which three sectors should AI tools be applied at the district panchayat level, and can private companies’ CSR funds support this?
Understanding practical AI use‑cases at the local governance level and funding mechanisms is essential for early adoption and impact.
Speaker: Vikram Tripathi (audience)
What standards are needed for AI fairness, robustness, and bias, especially in telecom and other sectors, and how should they be implemented?
Clear, publicly available standards are required to ensure trustworthy AI systems and to guide developers, regulators, and users.
Speaker: Dr. Devinder Singh, Sh. Subodh Sachan
How can curriculum development be de‑bureaucratized and accelerated, particularly in state technical institutions, to keep pace with rapid AI advances?
Slow syllabus updates hinder the ability to produce AI‑ready graduates; faster, more autonomous curriculum processes are needed.
Speaker: Prof. Alok Pandey, Prof. Jawar Singh, Kunal Gupta
What is an effective approach to evaluate fresh AI talent, focusing on problem‑solving, curiosity, and domain understanding?
A robust assessment framework will help identify and nurture talent that can address real‑world AI challenges.
Speaker: Sh. Subodh Sachan, Dr. Sarabjot Singh Anand
How should AI education incorporate hardware awareness, neuromorphic computing, and hardware security to bridge the algorithm‑hardware gap?
Future AI solutions depend on efficient hardware; training must cover hardware‑software co‑design and security.
Speaker: Prof. Jawar Singh
How can concepts of AI safety, red‑teaming, and containment be integrated into university curricula across domains?
Embedding safety and containment practices in education will prepare graduates to develop responsible AI systems.
Speaker: Prof. Alok Pandey
What mechanisms can align AI education with industry skill needs given the lag in syllabus updates, especially in state institutions?
Bridging the gap between academia and industry ensures graduates possess relevant, employable skills.
Speaker: Kunal Gupta, Prof. Alok Pandey
How can AI education be made inclusive for all languages and regions, supporting vernacular language capabilities?
Inclusive AI tools in local languages broaden access and reduce the digital divide.
Speaker: Kunal Gupta
What are the requirements for AI‑native 6G standards, and how should engineers be trained to develop and operate such systems?
AI‑driven 6G networks will need new standards and a workforce skilled in machine learning integrated with telecom engineering.
Speaker: Dr. Devinder Singh, Sh. Subodh Sachan
What models of industry‑academia mentorship can effectively develop AI talent with real‑world problem‑solving skills?
Mentorship bridges theoretical knowledge and practical application, accelerating talent readiness.
Speaker: Dr. Sarabjot Singh Anand, Sh. Subodh Sachan
How can AI‑driven tools be used for skill‑gap analysis and adaptive learning recommendations to improve employability?
AI can personalize training pathways, identifying gaps and recommending targeted upskilling.
Speaker: Vikash Srivastava
How should AI safety and security be ensured in domain‑specific applications such as healthcare, law, and finance?
Domain‑specific risks require tailored safety frameworks and governance.
Speaker: Prof. Alok Pandey
How can Indian‑specific large language models be developed for sectors like law, agriculture, and others, using local data?
Tailored LLMs can address unique regulatory, linguistic, and data characteristics of Indian domains, enhancing relevance and adoption.
Speaker: Sh. Subodh Sachan, Dr. Sarabjot Singh Anand

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.