Building the AI-Ready Future From Infrastructure to Skills
20 Feb 2026 11:00h - 12:00h
Building the AI-Ready Future From Infrastructure to Skills
Summary
The panel opened with Thomas Zacharia framing the session as a discussion on “building AI readiness from compute to capability,” stressing that AI extends far beyond GPUs and includes the full stack from PCs to edge devices [6-9][11-12]. He introduced the U.S. Department of Energy’s Genesis Initiative, noting that the United States spends roughly a trillion dollars annually on R&D, of which 20-30 % is government-funded, and that the program aims to use AI to accelerate scientific discovery, energy, and national security research [18-22][23-30]. Zacharia explained that the initiative seeks to federate compute and data across national labs, cloud-enabled labs, and public-private partnerships while embedding security, governance, and composable standards into the infrastructure [34-38][46-48].
He highlighted AMD’s contribution through the American Science Cloud, which will run on an MI355 cluster and a Helios rack delivering 2.9 exaflops of FP4 AI compute at 220 kW, illustrating the company’s push for high-performance, energy-efficient hardware [48-54][92-94]. Zacharia also stressed the importance of open ecosystems, open-source software, and open standards to enable startups and innovators to build on AMD hardware without vendor lock-in [70-73][76-81].
Paneerselvam M of the MeitY Startup Hub described India’s sovereign AI strategy as a layered effort that requires clear intent, curiosity, and implementation, and he positioned startups as “AI natives” that can improve the nation’s AI-readiness quotient for SMEs and larger enterprises [106-108][110-112][113-114]. He reiterated the need for a human-in-the-loop approach while acknowledging that the balance may evolve as agentic AI matures [106][108].
Timothy Robson shifted to the software perspective, noting that AMD’s early supercomputing work, such as the Finnish LUMI system with 12 000 GPUs, enabled multilingual LLM training before ChatGPT’s release [135-144][138-143]. He argued that open-source frameworks like PyTorch and the emerging Triton compiler allow developers to run models on any hardware, supporting AMD’s “day-zero” support for new models such as Qwen3 Coder and DeepSeek without additional integration effort [156-162][204-211]. Robson also promoted the AMD Developer Cloud, which offers free compute hours, pre-built Docker containers, and accelerator-cloud programs to help startups move from proof-of-concept to production while keeping total cost of ownership low [187-196][200-203].
Gilles Garcia added that AI is moving to the edge and “physical AI” for robotics, autonomous networks, and industrial applications, requiring specialized accelerators that are smaller and more power-efficient than traditional GPUs [230-238][240-244]. He cited the Gene01 humanoid built on AMD technology as an example of how compact accelerators can enable real-time perception and actuation without cloud dependence [239-241].
In his closing remarks, Zacharia urged participants to stay curious, to explore both high-performance and low-power AI solutions, and to collaborate across academia, startups, and industry to drive societal change [247-250].
Keypoints
Major discussion points
– AMD’s holistic “compute-to-capability” roadmap for sovereign AI – Thomas Zacharia outlines a vision that goes beyond GPUs to a full stack of AI hardware, software and cloud services, emphasizing public-private partnerships such as the U.S. Department of Energy’s Genesis Initiative and the American Science Cloud, and showcasing AMD’s exascale achievements (e.g., the MI355-based cluster and the 2.9 exaflop Helios rack) [6-12][15-21][34-38][46-48][53-56][61-66][70-73][84-94][95-99].
– Government-driven AI research and national security priorities – The talk stresses that AI acceleration is a strategic national function, linking DOE’s three pillars (discovery science, energy, national security) with the need to federate compute and data across labs, academia and industry, and to embed security-by-design in public-private collaborations [16-23][27-33][40-48][49-52].
– Start-ups as the engine of AI readiness and implementation in India – Paneerselvam M highlights the critical role of AI-native start-ups in translating sovereign AI strategies into tangible value for SMEs, stressing curiosity, clear intent, and the need to broaden AI benefits beyond large corporates [106-112][113-114].
– Open-source software ecosystem and “day-zero” support as the enabler of AI adoption – Timothy Robson stresses that success now hinges on open, vendor-agnostic software stacks (PyTorch, JAX, Triton) and AMD’s developer resources (Developer Cloud, Docker containers, day-zero model support) that let users run new models on AMD hardware without lock-in [124-131][152-159][206-218][221-229].
– Physical AI and edge computing for industry and robotics – Gilles Garcia points to the shift of AI workloads from data-center GPUs to low-power, edge-optimized accelerators for robotics, autonomous vehicles, and industrial systems, underscoring AMD’s “AI anywhere” strategy and the need for dedicated, reliable hardware at the far edge [230-239][242-245].
Overall purpose / goal of the discussion
The session aimed to present and promote a comprehensive, government-backed AI ecosystem (spanning sovereign research, high-performance compute, open-source software, and start-up innovation) to accelerate scientific discovery, national security, and economic growth, while forging a collaborative partnership between AMD, the Indian government (MeitY), and the broader AI community.
Overall tone
The conversation is consistently upbeat, forward-looking, and collaborative. It begins with a formal congratulatory opening, moves into an enthusiastic technical showcase of AMD’s capabilities, shifts to a policy-focused discussion of sovereign AI, then adopts a pragmatic, supportive tone toward start-ups and ecosystem builders, and concludes with a motivational call to stay curious and explore the emerging opportunities. The tone remains optimistic throughout, with occasional shifts from high-level strategic framing to detailed, hands-on encouragement for developers and entrepreneurs.
Speakers
– Thomas Zacharia
– Area of expertise: AI strategy, high-performance computing, supercomputing deployment, national AI readiness
– Role/Title: AMD senior executive (speaker)
– Timothy Robson
– Area of expertise: Hardware engineering, software development for AI infrastructure, vendor-agnostic AI frameworks
– Role/Title: Hardware engineer turned software specialist at AMD [S1]
– Gilles Garcia
– Area of expertise: Physical AI, edge AI for communications, robotics, and industrial applications
– Role/Title: AMD speaker on physical AI
– Paneerselvam M
– Area of expertise: Startup ecosystem, innovation management, government-industry partnerships in India
– Role/Title: CEO, MeitY Startup Hub, Ministry of Electronics and IT, Government of India [S4][S5]
– Moderator
– Area of expertise:
– Role/Title: Conference moderator
Additional speakers:
– (none identified beyond the listed speakers)
The session opened with Thomas Zacharia congratulating the audience on behalf of AMD’s 30 000 employees worldwide, including the 10 000 based in India, and outlining the panel’s purpose – “building AI readiness from compute to capability” [1-6]. He warned against equating AI solely with GPUs, emphasizing that AI spans a full stack from AI-enabled PCs through core data-centre infrastructure to edge deployments [7-10]. Zacharia announced that he would address the sovereign side of AI while his colleague Timothy Robson would cover the enterprise perspective [12-14].
Zacharia introduced the U.S. Department of Energy’s Genesis Initiative, a public-private programme launched under the Trump administration to accelerate scientific discovery, energy research and national security through AI [15-23]. He noted that the United States spends roughly a trillion dollars a year on R&D, 20-30 % of which is government-funded, and that the return on investment is diminishing unless AI bridges the gap between hypothesis and outcome [18-22]. The DOE’s three pillars (discovery science, energy, and national security) are supported by a network of 17 national labs, such as Oak Ridge, whose historic role in the Manhattan Project underscores the agency’s dual focus on energy and broader scientific outcomes [23-30][31-33]. Zacharia argued that modern research must shift from hypothesis-driven experiments to rapid AI-augmented analysis, thereby reducing cost and time while enhancing global collaboration [34-40].
He also highlighted that any federated compute and data platform must be built security-by-design, incorporating confidential-computing capabilities to protect sensitive research and national-security workloads [251]. Zacharia likewise called for composable standards to enable trustworthy public-private partnerships [252].
In line with this vision, AMD is contributing the American Science Cloud, a cloud-enabled research platform built on an AMD MI355 GPU accelerator cluster that will host the first exascale-class AI workload – a Helios rack delivering 2.9 exaflops of FP4 AI compute at 220 kW [48-49][92-94]. Zacharia recalled his three-decade career in supercomputing, including first-of-a-kind deployments such as 30 000 NVIDIA GPUs at a time when CUDA was still a novelty [50-56]. He stressed that ambitious, energy-efficient projects are possible because governments fund risky, large-scale hardware development [53-56][86-88].
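The rack figures quoted above invite a quick sanity check. The short Python sketch below works through the implied efficiency and scale; the 2.9-exaflop and 220 kW numbers are as stated in the session, and the zettascale extrapolation follows the closing transcript’s “300 racks” remark, so treat the results as back-of-the-envelope arithmetic, not vendor specifications.

```python
# Back-of-the-envelope check of the Helios rack figures quoted in the session.
# Inputs are nominal numbers as stated by the speaker, not measured specs.

RACK_EXAFLOPS_FP4 = 2.9   # FP4 AI compute per rack, as quoted
RACK_POWER_KW = 220       # power draw per rack, as quoted

# Implied energy efficiency in exaflops per megawatt of power
efficiency = RACK_EXAFLOPS_FP4 / (RACK_POWER_KW / 1000)
print(f"{efficiency:.1f} exaflops per MW")  # -> 13.2 exaflops per MW

# A zettaflop is 1000 exaflops; how many racks, and how much power?
racks_for_zetta = 1000 / RACK_EXAFLOPS_FP4
power_mw = racks_for_zetta * RACK_POWER_KW / 1000
print(f"~{racks_for_zetta:.0f} racks, ~{power_mw:.0f} MW")  # -> ~345 racks, ~76 MW
```

The result is consistent with the transcript’s rounder claim that roughly 300 such racks put together reach zettascale-class AI compute.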
Beyond hardware, Zacharia underscored the necessity of an open ecosystem and robust governance. He advocated for open-source standards and composable infrastructure that prevent vendor lock-in, noting AMD’s commitment to open hardware and software standards that enable innovators to build on any part of the stack [70-73][76-81]. Governance, he clarified, does not equate to regulation; it requires a human-in-the-loop to validate autonomous AI outputs before they are acted upon, safeguarding scientific integrity and national-security concerns [62-68]. He described an autonomous loop comprising roughly 100 000 GPUs powering 100 000 agents, illustrating the scale of the envisioned compute fabric [253].
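The governance pattern described here (agents iterate autonomously at machine speed, but nothing is committed until a human validates it) can be sketched in a few lines. Everything below is illustrative: the names `Proposal`, `inner_loop`, and `human_review` are hypothetical stand-ins, not any actual AMD or DOE system.

```python
# Illustrative sketch of a human-in-the-loop agentic workflow:
# an autonomous inner loop proposes results, and a review gate
# must approve each proposal before it is committed.
# All names and logic here are hypothetical, for illustration only.

from dataclasses import dataclass

@dataclass
class Proposal:
    agent_id: int
    hypothesis: str
    result: float

def inner_loop(agent_id: int) -> Proposal:
    """Autonomous phase: simulate, analyze, and propose at machine speed."""
    return Proposal(agent_id, f"hypothesis-{agent_id}", result=agent_id * 0.1)

def human_review(p: Proposal) -> bool:
    """Gate: a person (or review board) validates before anything is published.
    A simple range check stands in for the human reviewer here."""
    return 0.0 <= p.result <= 1.0

committed = []
for agent in range(5):             # the talk envisions ~100 000 such agents
    proposal = inner_loop(agent)
    if human_review(proposal):     # nothing is committed without approval
        committed.append(proposal)

print(len(committed), "proposals committed after review")
```

The design point is simply that the approval gate sits outside the autonomous loop, mirroring Zacharia’s professor-and-students analogy: the agents produce, but a human signs off before results propagate.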
Paneerselvam M, CEO of the MeitY Startup Hub, presented India’s sovereign AI strategy as a layered, five-tier architecture driven by clear intent, curiosity and concrete implementation [106-108]. He highlighted the summit’s massive response (267 000 registrations in five days) as evidence of nationwide curiosity and the desire to embed AI across health, education, skilling and other government functions [110-112]. Paneerselvam positioned start-ups as “AI natives” that can raise the AI-readiness quotient of SMEs, arguing that the government’s role is to provide an enabling environment so AI benefits are not confined to large corporates [113-114].
Timothy Robson shifted focus to software, observing that the launch of ChatGPT on 30 November 2022 dramatically accelerated AI adoption and made an open ecosystem indispensable [115-119][123-124]. He recounted AMD’s early involvement in the Finnish LUMI supercomputer, which used 12 000 GPUs to train multilingual large-language models before ChatGPT existed, demonstrating that large-scale, multilingual AI can be built with public-sector foresight [135-144][138-143]. Robson stressed that open-source frameworks such as PyTorch, JAX and the Triton compiler allow developers to run models on any hardware, and that AMD’s day-zero support guarantees code runs on AMD out of the box, optimized and validated [156-159][220-221]. He defined day-zero support as “code runs on AMD out of the box, guaranteed and optimized” [254]. Examples of day-zero support include Qwen3 Coder, DeepSeek, and Baidu’s Paddle models [255][256]. This support is delivered through AMD’s SG9 runtime, which provides full compatibility with models such as DeepSeek [257]. To lower barriers for start-ups, Robson promoted the AMD Developer Cloud, offering 50-100 free GPU hours, pre-built Docker containers and a seamless path from proof-of-concept to production while maintaining a low total cost of ownership [187-196][197-203]. He distinguished “Neo clouds” (smaller, nimble providers) from hyperscalers, noting their relevance for Indian start-ups seeking flexible, low-cost compute [258].
Gilles Garcia broadened the discussion to “physical AI” at the far edge, arguing that AI workloads are moving from data-centre GPUs to specialised, low-power accelerators embedded in robots, autonomous vehicles and industrial plants [230-238]. He cited the Gene01 humanoid, the first robot built on AMD technology, as proof that edge AI can deliver real-time perception, touch and actuation without reliance on cloud connectivity [239-241]. Garcia suggested that India’s burgeoning start-up ecosystem is well-placed to adopt these edge solutions, leveraging AMD’s portfolio of compact accelerators that combine high performance with minimal power consumption [242-245].
Across the panel, several points of agreement emerged. All speakers championed an open ecosystem and open standards as essential to avoid vendor lock-in and to foster rapid innovation [70-73][124-128][156-158][235-236][239-241]. They also concurred that start-ups, being AI-native, are pivotal for scaling AI adoption and improving the AI-readiness quotient of SMEs [69-71][106-108][178-186][187-196]. Finally, both Zacharia and Paneerselvam affirmed the need for sovereign, public-private AI infrastructures that integrate government, academia and industry to serve national priorities [16-48][106-113].
Nevertheless, nuanced disagreements surfaced. Zacharia’s vision of a centrally federated national AI cloud (the American Science Cloud) contrasts with Robson’s promotion of lightweight, developer-focused cloud resources for start-ups, and with Garcia’s emphasis on decentralised edge accelerators [16-48][187-196][230-235]. A second tension arose between Zacharia’s insistence on human-in-the-loop governance for autonomous agents [62-65] and Robson’s focus on rapid, open-source deployment that does not foreground mandatory oversight [210-218]. Finally, while Zacharia warned against over-indexing AI on GPUs [7-10], Garcia advocated for specialised, non-GPU edge accelerators, highlighting differing hardware investment priorities [7-10][230-235].
Key take-aways: (i) sovereign AI requires public-private partnerships such as the DOE Genesis Initiative and India’s five-layer model to federate compute, data and secure cloud-enabled labs; (ii) start-ups are essential “AI-natives” that can raise the AI-readiness quotient of SMEs; (iii) AMD is delivering low-cost, ready-to-use compute (Helios rack, Developer Cloud, free GPU hours) and day-zero model support to accelerate adoption; (iv) an open, vendor-agnostic software stack (PyTorch, JAX, Triton, Primus) is critical to avoid lock-in [92-94][262]; (v) governance must retain a human-in-the-loop to ensure safe, responsible AI; and (vi) physical AI at the edge demands specialised, low-power accelerators such as those showcased in the Gene01 humanoid.
Action items: AMD will continue to supply high-performance and edge-optimised hardware, maintain open-source toolchains (including the Primus ecosystem) and day-zero support for emerging Indian-language models, and expand the Developer Cloud for start-ups; the MeitY Startup Hub will deepen its partnership with AMD to accelerate AI uptake among Indian SMEs; both parties will advocate for policies that blend large-scale national compute investments with inclusive, low-cost resources for innovators. Unresolved issues remain around concrete mechanisms for federating compute across labs, detailed governance frameworks for autonomous agents, timelines for India’s sovereign AI architecture, and funding models for large-scale public-private initiatives.
In closing, the speakers collectively emphasized that building a balanced AI ecosystem, spanning compute, software and edge and underpinned by open standards, security-by-design, and inclusive access, is essential to realise the transformative potential of AI for society and industry [247-250].
So congratulations to all of you. You should be proud. And I just want to say that on behalf of the 30,000 AMDers worldwide, and particularly 10,000 in India, I just want to congratulate you and thank you for this opportunity to have this discussion. Since we are a small group, I think we’ll keep it informal. And I want to make sure that somebody please keep track of time so that I do justice to my colleagues here on the dais. The topic that I’ve been asked to talk about is sort of building AI readiness from compute to capability. In the field of AI these days, there seems to be an over-indexing of AI and GPUs, when in reality AI is much broader.
GPU is obviously a significant part. It’s a part of the core infrastructure. But what we do at AMD is to really provide a full suite of AI capability, from AI PCs to core infrastructure all the way out to the edge. And I have my colleague Tim from AMD, so we decided that we’re going to tag team. And so I’m going to focus perhaps a little bit on the sovereign side, and then Tim can focus on the enterprise side. That’s okay with you. So let’s just talk about sovereign AI in practice and exploring the motivators. So this particular slide is something that was created by the Department of Energy in the United States as part of a new initiative that was kicked off by the Trump administration called the Genesis Initiative.
and I had a role to play in terms of trying to support and address in crafting this initiative, and the framing is very simple. If you look at the top line, I don’t know whether this has a pointer, it’s okay. Okay, so the top line, the white line, is funding in the United States for R&D. Today, the United States spends about a trillion dollars a year in R&D. Not all of that is government spending. It’s roughly about, say, 20 to 30% U.S. government and the rest industry. The bottom line is what we consider research output efficiencies. So the problems are getting harder. It is getting more challenging, and even though we are trying to tackle really important problems, the sense is that throwing money is not having the same rate of return. And this slide basically asks the question: how do we reduce the gap? At least the thesis for the Genesis mission is that you can use AI as a way to accelerate scientific discovery. The Genesis mission has three areas of importance. For people who don’t know about the US Department of Energy: the US Department of Energy is the nation’s largest physical science agency. It operates through 17 national labs, and some of the earliest ones, like Oak Ridge National Laboratory, which I used to lead before joining AMD, came into being during the Manhattan Project.
And about 65% of the entire funding of the Manhattan Project was at Oak Ridge National Laboratory. And in addition, I think the Prime Minister mentioned this about nuclear energy, both the destructive aspect as well as the significant outcomes that came out of it, from nuclear medicine to the nuclear navy to nuclear energy. These can all be traced back to the Manhattan Project. So the U.S. Department of Energy is not only responsible for energy; it’s really a science organization. It’s got three priorities. One is just discovery science. The second is energy. And the third is national security. America has a really interesting way of keeping the nuclear arsenal away from the military, in the sense that it is the U.S.
Department of Energy, and not the U.S. Department of Defense or Department of War, that is responsible for the nuclear arsenal. And the three lab directors, Los Alamos, Livermore, and Sandia, have to certify each year to the President of the United States that the arsenal is ready. So this is a piece of the hypothesis. If you think about research, you can look at the left side. It starts with hypothesis, then you conduct experiments and get the data. And today, you take the data, use AI, machine learning, et cetera, and you get analysis. What you’re trying to do is make this much faster so that you can have science outcomes coming out, do it at a reduced cost, because you cannot throw more and more money at this problem, and enhance global collaboration.
I think there is a genuine interest on the part of the U.S. that this whole premise is not just a U.S. issue. And so I think there will likely be announcements suggesting that countries like Japan, Europe, the UK and others may be part of this overall approach to drive sovereign AI, for those aspects of AI deployment and scaling that are uniquely a government or state function. So as I mentioned broadly: scientific discovery, energy and national security. But if you look at scientific discovery further to the next step, then you will see healthcare, education, skilling, all these things. Fundamentally, a government function. And this is not an easy task, because if you think about how these institutions’ research is done, as I mentioned, a large fraction of it is in the private sector, a lot of it is done in academia funded by government, and then of course in national labs in the United States; India has its own set of national labs, academia, etc.
So what you need to do is take a look to see how you integrate all this data. The U.S. Department of Energy operates these large, multi-billion dollar light sources, neutron sources, specialized scientific experiments. You need to be able to incorporate all these things, so you have to federate the compute and data, you have to have cloud-enabled lab operations, which is not how things are done today. Security and governance by design, especially when you’re thinking about public-private partnerships: even at the enterprise commercial scale, you want to make sure that you have secure computing, you have confidential computing, you can maintain integrity. And if you think about national security, you have an additional layer. And then you want composable, standards-based infrastructure.
So this particular program was kicked off by Secretary Wright, well, the President of the United States and then Secretary Wright, in the fourth quarter of last year, and the first announcement was done with Lisa Su, our CEO, because one of the things that they wanted to do was a unique public-private partnership. And so the core infrastructure, which is currently called the American Science Cloud, this program is just being stood up, is going to be run on an MI355 cluster, and this entire program is aimed at driving innovation. And so we are really excited to be a part of this: initially a US and soon an international effort to drive innovation in those areas that are uniquely a government function.
I’ve had a ringside seat in computing for the last 30 years and been responsible for a lot of supercomputing deployments, a dozen or so. The last four or five of them were number one systems in the Top500, each first of a kind. This is another important thing. Innovation, if you think about AI: AI didn’t happen magically with NVIDIA or AMD. It happened because the US government took the risk to invest in first-of-a-kind systems. So we were the first to deploy 30,000 NVIDIA GPUs when people thought that CUDA was a four-letter word. Now everybody thinks that this is this amazing software, but change comes hard to people. And so I just want you to know that,
particularly all of you who are youngsters: things are going to evolve. AI, just like the Prime Minister said, is in its early stages. So you have to be open, and you have to be part of this drive for effective, scalable and impactful AI. Then deep learning came, with mixed-precision computation, then generative AI, and last year was really agentic AI, and some of us think that this year we’re going to focus increasingly on governance. Governance does not mean regulation. There is a role for regulation by governments, but governance is about how, if you want to have agentic systems driving and accelerating innovation, you make sure that the output has a person in the loop.
One way to simply think about it: if you are researchers here, if you have a professor who’s got a dozen students doing research, you don’t let the students just go publish things. There is the professor’s responsibility, there is the peer review committee, et cetera. So you want that human in the loop before you can update and let this thing drive innovation, while it also allows AI to do what it does best. So this is how we think about compute to capability, a model of national AI readiness. We want it to rest on talent, and the readiness of talent, giving people access to compute and models. Research enablement is key, because you want people to operate AI in an environment where you’re questioning things and innovating all the time, as opposed to assuming that what we in the industry are providing you is the only solution.
So I think, if you look at countries that are leading in AI, there is a very strong R&D and innovation foundation that is allowing them to lead, because there are people who are questioning every time somebody says something, to make sure that it is validated, and continuing to innovate. Startups and innovation labs, because you want to take these ideas and start new companies, because many of these new innovations and new technologies may be led by people with new ideas and opportunities, and of course ultimately enterprise and public sector adoption. We strongly believe, and again I heard Prime Minister Modi say this, in an open ecosystem and open source, open platforms. If you think about iOS and Android, I find India has a lot of penetration of Android systems, because inherently open systems allow you to innovate without getting locked into vendors.
And so we at AMD have a commitment to make both our hardware infrastructure and our software infrastructure be based on open standards so that you can innovate around any part of this infrastructure, and any part of it can be part of a new startup or new company adding to that. That is also an important way for India to become part of the supply chain and the semiconductor ecosystem, because you don’t have to start with an attempt to go in for two or three nanometers. You can actually do amazing work and be part of leading-edge technology at different form factors. So I mentioned a little bit about how we think about agentic flows and how AI can work. This is simply the way you think about it.
The inner loop is an autonomous loop where AI, agentic AI, does what it can do fast; it can operate. If you have 100,000 GPUs, you have 100,000 agents tackling this problem, and it can actually go through the hypothesis-driven experiments and systems. So you can do simulation, campaign-scale coordination, machine-speed execution, et cetera. But we do not allow it to update the outcome until a human in the loop has had the opportunity to validate it, to make sure that we don’t have unintended consequences. Now, how do you build this thing? So, if you haven’t gone to the AMD booth, I would encourage you to do so. This is my only plug in this presentation. We spent a ton of money to bring this Helios rack here just so that you can have a sense of, not what this particular rack can do, but a glimpse of what is possible next year and the year after.
So in 2007, myself and two of my colleagues started what is called the Exascale program. And the challenge was to deliver an exascale system for under 20 megawatts. Because if you had just scaled the capability in 2007, it would have taken three to four gigawatts. And we knew that the government was not going to sign off on $4 billion for power, just electricity alone, to run the computer. So we were motivated to drive that. And we delivered that first exascale system, Frontier at Oak Ridge, for less than 20 megawatts. Everybody thought it was crazy, that it cannot be done, but there are some things where, when you put audacious goals, people rally around and then deliver. In this particular rack, in one rack, there are 72 GPUs that will deliver 2.9 exaflops of AI compute, which is FP4, not FP64, just to be very clear.
But for AI capability, you get 2.9 exaflops of compute capability for 220 kilowatts. Right? Even for somebody who’s been in this field for a long time, that’s just mind-blowing. This is where we are headed. AI is the fastest adoption of any technology that humanity has introduced. We’ve gone from 1 million active users to 1 billion in a matter of just a couple of years, and we are headed to 5 billion users. So there is a lot of opportunity to innovate in this field, and all of us are going to continue to create these opportunities. As Lisa said, we are entering the zetta scale, so already people are thinking about the next 1000x. Let me just say, you can get to zetta scale by just taking 300 of those racks and putting them together, and then it’s another 3x. So I would say in the next 10 years maybe we would be at this 10,000x factor. So the kind of problems that you are thinking about should not be constrained by what you can do today. By the time you figure out the solution for an important problem, compute will be there.
That is what we in the industry like to promise you. And advancing national economies, this is one of those things where you would be forgiven if you thought: does AMD do these things, and how prevalent are our compute capabilities? I think Tim is going to tell you that our GPUs and our systems are in every hyperscaler globally, and when it comes to HPC and national priority missions, AMD is the leader. If you listen to President Macron, he referenced Alice Recoque, which is the first AI factory that the French government, the CEA, announced, which is based on the AMD MI430X, a variant of the MI450 that you see outside on the right.
I will close by saying that a shared path forward is really what we are looking for. I know India is in the early stages and we are really delighted to actually have this conversation. Thank you very much.
I’d like to invite our next speaker, Paneerselvam M, CEO of the MeitY Startup Hub at the Ministry of Electronics and IT, Government of India. Dr. Paneerselvam M is a distinguished leader with over two decades of expertise in innovation, management, strategic growth and market development. He’s been instrumental in advancing India’s startup ecosystem and fostering impactful partnerships between the government, industry and entrepreneurs. In his
drawing insights out of this data, and then comes the interface layer, where most of it is going to be really driven by agents, by agentic AI, and of course, as Thomas mentioned, there is always going to be a human-in-the-loop perspective, but as we progress this is going to change as well. So the two fundamental things that I want to share: one is that the entire transformation in the readiness space for AI is an opportunity, and the intent needs to be very, very clear. Then comes the curiosity to learn about this a little bit more for each business owner, and then comes the implementation part of it. And start-ups have a very, very critical role to facilitate this, because you are coming in almost as AI natives, working with an understanding of this, and you can really go out and demonstrate value
and help implement the entire readiness, improve the readiness quotient for small and medium enterprises, and ensure that this is a broad-based growth opportunity for businesses across the country and not limited to just a few of them, right? So there is huge potential, and I think enough has been spoken. The summit itself is proof of the kind of curiosity. We have had 267,000 registrations, people who have registered in the last five days. An unexpected, overwhelming response, to the extent that we couldn’t really handle it, right? At the same time, it gives us immense pride and excitement for the amount of curiosity among the youngsters in India, across India, who
They travel here from the length and breadth of the country to understand what AI is going to be, how it is going to impact them, and what the opportunities are, and that in itself is a fantastic starting point. As I said, a lot is happening: Indian sovereign models are coming, and all five layers, the infrastructure, the design, all of them, are being worked on in the Indian context. We are ready as a nation, and we are ready as a government, to facilitate this truly disruptive, transformative technology. But the truth is, it also has to percolate into all the layers of society, not limit itself to large corporates but reach small and medium enterprises as well. It has, of course, already percolated well into the D2C space and to individual users, and it is much, much beyond the ChatGPTs of the world.
So with that, I once again take the opportunity to thank the entire team from AMD. We have had some interesting conversations, and I look forward to the continued partnership between AMD and the MeitY Startup Hub because, in our perspective, corporates have a huge role to play in the success of startups. Thank you.
Thank you. There are a couple of things that I want you to think about as I go through my talk. 30th of November, 2022: the world changed. ChatGPT was launched. And I’m willing to bet that for everyone in this room, myself included, what we thought we knew about AI changed. Two years ago, what we thought even after ChatGPT changed. A year ago, what we thought changed. And what I’m hoping is that as you leave here, other things will have changed in the last 45 minutes of listening to these talks. Okay, so I’m going to skip through the reasons why we need compute. But one thing that is very, very important is that things are moving so fast.
And things are moving in a way that we cannot predict, so the only way that anybody is going to be successful is an open ecosystem. The speakers before me have alluded to this as well, and I’m going to take you through it specifically around software. I mean, everything to do with AI today really is software. I’m a hardware guy, I used to design chips, but everything today is software, right? And I was talking to one of my colleagues and I said, okay, so I’m going to India, I’m going to present all of this, and I asked: are they really going to understand it?
And he said, Tim, India is software. This is what we do. He said, you’re going to be in front of the best people in the world, people who are going to understand what you want to talk about. So I’m really going to focus on the software side. And one of the things that I wanted to do, knowing that we had our esteemed colleague from MeitY here, is highlight that we have lots and lots of experience in this space, in particular some work that we did with LUMI in Finland. Now, why is this important? Within Europe, almost all the languages are Indo-European, right? If you know a little bit of Greek, a little bit of Latin, a little bit of one of the languages; there are 27 countries in Europe,
so let’s call it 27 languages. And then you have Finland. Finnish is a Uralic language, with nothing to do with any other language in Europe: an absolutely different construct, different base, different absolutely everything. And so the people in Finland came to us because they had put in this LUMI supercomputer, and they said: okay, we are a small country in Europe with 5 million native speakers, and all of this training work has been done on big corpora of English, Spanish, Hindi and so on. When suddenly you have a language of 5 million people, how do you get that language into your LLM so that it becomes useful? Now, I’m probably going to get the pronunciation really wrong here, but I did actually use ChatGPT to look at the 22 Indian languages. If we look at Bodo, Konkani, Dogri, Sindhi, Nepali, there are fewer than 5 million people speaking each of those languages. So how do you get an Indian LLM that caters for everybody, in the spirit of Prime Minister Modi’s “AI for All”? This is the kind of area where we would like to work with MeitY and bring some of the benefit of the work we have been able to do. Now, remember that first date, 30th of November 2022. This machine was inaugurated on my birthday, 13th of June 2022, six months before ChatGPT came out; it was put together, all of the systems brought up, and the chips had been made years before. So this machine, with 12,000 GPUs, built with foresight by the Finnish government, was using AMD technology to run AI before ChatGPT came out.
So a lot of people think that all of this AI progress has come from one specific place. Again, change your way of thinking: we were there, and we have the ability. We actually did the BLOOM 176-billion-parameter model, an open model made for European languages. So again, we would love to bring this knowledge into the Indian ecosystem to make this successful for everybody. I’m not going to spend a lot of time on hyperscalers. They’re obviously an important part of the market; it’s where a lot of the capability goes. We’re there, we have tens of thousands of GPUs, and as Thomas mentioned, we have the Helios system coming here.
Please go and take a look at it. If you like hardware, it’s an interesting piece of kit. But really the idea here is that whether you’re in a hyperscaler or in any other area, there is the ability to have a wider ecosystem. And again, on inference: this is not really an AMD pitch, but there was an idea in the market that AMD was inference-only. That dates from Q1 2024; it’s two years old. So we have to change that thinking, right? That’s old thinking. We now have, again completely open source, the Primus ecosystem, a tool that enables you to do all of the training you need for your Indic languages or your use cases, and which again is completely open.
Enterprise AI. This one, I think, is an interesting one. When I started going out to enterprise customers, the difference in customer knowledge of what AI was was amazing. You go into one customer and they say: okay, this is our use case, we’re seeing these kinds of matrix sizes, so we’re doing these optimizations. Then you go into another customer and ask: what are you doing around AI? And the guy goes, oh yeah, we’re doing Gen AI. Okay, great, what are you doing with Gen AI? We’re using LLMs. Okay, great, so using LLMs, what do you think? LLMs. And they had no idea, right?
It was just: we have to do something with AI. That has changed over the last 18 months. A chatbot was something most people could grasp: okay, that makes sense, I understand a chatbot; we can fine-tune the model, we can do an internal AI system within the company. And now, with agentic workflows, we’re starting to see an entire plethora of different use cases coming through. So how do you take it from a research institute, or from the people who actually get onto your accelerator, whether that’s a GPU or a TPU or an FPGA or whatever else, to a stage where people within a corporation can use it? This is something that has been understood.
And again: no lock-in, open; everything here can be used without tying you into one particular area. I’ll come back to this a little later as well. It’s also something that has really impressed me about the infrastructure that MeitY has put in place. In this case, through the public-private partnership, you have GPUs, you have TPUs, you have Inferentia, all of the different types of accelerators available to you within the Indian ecosystem that MeitY has made available. I’ll come on to that a bit more later. But again, the idea is that whatever the ecosystem or the compute you’re using, whether it’s in the cloud or on-prem, you are able to give your employees within your enterprise the ability to use that AI assistant or tool.
Neoclouds. These are what we call the smaller clouds. They’re not the hyperscalers; they’re a little more nimble, a little more open to doing things differently. A lot of them offer bare-metal and managed Kubernetes services, but they’re also becoming like APIs, token factories. They can provide you with compute quickly, easily and at reasonable pricing to enable whatever it is you’re trying to do. We find these are the first movers in the market, and in the same way that we’re integrated with the hyperscalers, we have these relationships with the neoclouds, and we’re working with quite a few of them here in India as well to make that available to you. So the whole idea is that the compute is available; please go out, understand the benefits and the trade-offs between the different types of services out there, and get the right solution for you.
Now, I’m assuming most people here are going to be startups. And a startup is an interesting place to be, right? You know what you want to do; you are laser-focused on getting your MVP out there, getting in front of customers, generating some value, generating some revenue. Although these days that seems less and less important, as people sometimes get funding even before a product. But one thing you have to be sure of is that the compute and the capabilities you have are adequate for the products you then have to put into position.
And so we understand that the proof of concept is very important. I was chatting with the CEO of the MeitY Startup Hub here before, and it’s something he was saying: POC to PO. You have to make sure you understand the technology and how you can take it to market before you actually go and invest. So we have a couple of different ways we can help here within the ecosystem. You could go on right now to the AMD Developer Cloud and get, I think it’s 50 or 100 hours, of free compute. You want to find out how AMD works for you; it’s always going to depend on the use case and what you’re trying to do.
But there is a huge TCO advantage, which of course is important for startups. Get onto the Dev Cloud and get it working. We actually provide Docker containers, with everything put into a single image, so you can download a container and run it without spending your time and energy installing all of the software and putting everything together; we’ve done all of that for you. Pull the container down, get your model and your weights off of Hugging Face, or use your own model and do something else. Whatever is there in the open-source ecosystem is there, and it’s going to work. Give it a go.
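As a concrete illustration of that container workflow, here is a dry-run sketch. It only prints the commands a first session might use: the `rocm/pytorch` image and the `--device` flags are the standard ROCm Docker invocation, but the model identifier is a placeholder, and actually executing these steps requires Docker and a ROCm-capable GPU.

```shell
# Dry-run sketch of the Dev Cloud container workflow described above.
# run() prints each command instead of executing it, so this script is safe
# to run anywhere; remove the wrapper to perform the steps for real.
run() { echo "+ $*"; }

# 1. Pull a pre-built ROCm PyTorch container (software stack included).
run docker pull rocm/pytorch:latest

# 2. Start it with the GPU device nodes passed through (standard ROCm flags).
run docker run -it --device=/dev/kfd --device=/dev/dri rocm/pytorch:latest

# 3. Inside the container, fetch weights from Hugging Face.
#    "some-org/some-model" is a placeholder, not a real model id.
run huggingface-cli download some-org/some-model
```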
Give it a play. And then from that we can take you into our accelerator cloud, which is a little more hands-on: making sure we understand what you’re doing, helping, guiding and assisting you in moving forward. And then, of course, we have the relationships with the industry: try-and-buys, getting you access to the compute, getting you the right solution at the right kind of price. Now, this is something I really want to highlight: day-zero support of models. We announced this. Qwen3 Coder came out last week, day-zero support on AMD. Baidu came out with one of their Paddle models this week, day-zero support on AMD.
What does day-zero support mean? It means that this is not the first time we’ve seen that code. It runs on AMD. It’s guaranteed. It’s optimized. A lot of people think that to run something in AI you need one specific GPU; with day-zero support, that is absolutely false. Again, with LUMI, pre-ChatGPT in 2022, we were building LLMs for a low-resource language, effectively the same situation as many Indic languages. So the ability is there: if a new model comes out and you want to run it, test it, see how it works for you, it is there and runs out of the box. And if we look at this line in the middle, PyTorch: if you look at the history of PyTorch, there were lots of signatories to make sure it was available for everybody, and AMD was one of them. It mainly came out of Microsoft and Meta, who did not want to be locked in to a single supplier. So what you’re doing with PyTorch is writing Python code, right? You’re not writing vendor-specific code. It’s an open ecosystem; that’s the whole point. You don’t want to be tied in: that stifles innovation, while openness increases it. So PyTorch came out, and that is the basis for 99% of all the customers I talk to.
They’re all writing Python on PyTorch. JAX is then coming forward. Triton is a Python-like language specifically for GEMM optimization. Again, if you’re getting to the point where you’re actually looking at the GEMM sizes coming through from your operations and want to do GEMM-level optimization, Triton enables you to do that at the compiler level. You can then be completely agnostic of the underlying hardware; the ecosystem and the underlying compute become abstracted away, because Triton lets you run on anybody’s hardware, and a new architecture just needs a compiler backend. If we look at these models on the bottom here: Prime Minister Modi this week announced the first 12 Indian languages.
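As an aside on the GEMM-level optimization mentioned above: the core tiling idea that Triton-style compilers automate can be sketched in plain Python. This is an illustration only, not Triton code; real kernels execute tiles in parallel on the GPU, and the function and names here are hypothetical.

```python
# Illustrative sketch of the blocking/tiling idea behind GEMM optimization
# that compilers like Triton automate. Pure Python for clarity; a real GPU
# kernel would assign each output tile to a thread block and run them in
# parallel, with tile sizes chosen to fit fast on-chip memory.

def matmul_blocked(A, B, tile=2):
    """Multiply matrices A (m x k) and B (k x n) one tile at a time."""
    m, k = len(A), len(A[0])
    n = len(B[0])
    C = [[0.0] * n for _ in range(m)]
    # Iterate over tiles of the output; each (i0, j0, p0) block is the unit
    # of work that would be handed to one parallel worker.
    for i0 in range(0, m, tile):
        for j0 in range(0, n, tile):
            for p0 in range(0, k, tile):
                for i in range(i0, min(i0 + tile, m)):
                    for j in range(j0, min(j0 + tile, n)):
                        for p in range(p0, min(p0 + tile, k)):
                            C[i][j] += A[i][p] * B[p][j]
    return C

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(matmul_blocked(A, B))  # → [[19.0, 22.0], [43.0, 50.0]]
```

The point of writing at this level in a portable, Python-like notation is exactly the one made in the talk: the tiling strategy is expressed once, and the compiler maps it to whatever the underlying hardware happens to be.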
I can’t wait to get you guys onto those, right? Fully supported, day-zero support. Just to give you an example: DeepSeek. When DeepSeek came out, they did some things a little bit special; multi-head latent attention was new. We had day-zero support for DeepSeek. Why? Because we’re one of the main contributors to SGLang. There’s no tie-in to an inference engine here; it’s an open ecosystem. So we were able to come out of the door with better TCO, better performance, better cost, and full support through SGLang for that DeepSeek model, which was the leader of its time, because of our complete commitment to the open ecosystem. Just to give you an idea: you’re walking out of here in 45 minutes with changed ideas; that’s what we’re going for here. I did have two minutes, I now have five; I don’t know who bought me extra time, but I owe you a beer. Okay, so that’s really the end of the pitch.
One thing I would say is we do have a booth here, at 5.10. I’m sorry, I’m going to do a little bit of an AMD plug at the end here, but do come by and see us. We actually have some of the neoclouds there, some model creators, vendors, some ecosystem partners. Come see, come change your mind, come see what’s available within the ecosystem with the compute that’s available for you. Okay, thank you.
So first of all, I’m Gilles Garcia. I’m French, so we can talk about LLMs for the French language if you want. I’m based in France, but I cover worldwide, and I focus on physical AI for communications, robotics and industrial. We have been talking a lot about AI, and most people think AI means GPUs and big cloud. What we are seeing is a big shift, another change to manage: AI is moving into the edge, and into the far edge, which is industrial, robotics, vehicles, as well as the networks. And for that you need a different type of beast. GPUs are one aspect of it, but you need profoundly different technology, which AMD has as part of our broad portfolio: technologies that can sense the data, act and react so quickly that there is no time to go back to the cloud.
And so these technologies, which will of course do inference, need to be able to take decisions and act very safely and reliably, without having to rely on the cloud. That’s a new change we’re seeing at AMD around physical AI, which will become very important for us: how do we take what we have learned in the cloud and make it available in physical AI? Software is a big thing. Full stack, meaning hardware and software together, able to deliver solutions to the customer, is what AMD is aiming for. As our CEO, Lisa Su, was saying: it’s AI anywhere, and one size does not fit all.
Meaning that if you want to address a robot, you could put a GPU into it, but it would burn to hell. You need a very dedicated accelerator with a good software stack, open source, that lets the robot perceive, visualize, touch, and act according to its purpose. At CES in early January, Lisa Su brought on stage Gene01, the first humanoid built on AMD technology. That’s just impressive. Everything was done by a startup in Italy to make this humanoid able to sense, visualize, touch, to know when somebody is touching it and when it is touching something, and to act and react very rapidly without having to rely on a centralized source.
So I will not go on longer than that. Physical AI is probably something that India will have a lot of scope to act in, because GPUs are there already, whereas physical AI is something you will have to create: a lot of things related to medical, autonomous networks, autonomous cars, autonomous plants, industrial. That’s where I think India can start, with all the startups and the capability to use accelerators that are much smaller than GPUs, all available today in the AMD portfolio. So I will stop here, encourage you to come to the AMD booth, and we can continue the discussion. Thank you.
Well, we gave you a lot of information on AI, in four different accents; I think the French one probably carries the day. But my one message is: stay curious. As all of us have said, things are going to change and continue to change at a rapid pace. People talk about so many thousands of GPUs, but that will not be the main thing. You will find that there is a whole lot of interest in providing ever more powerful GPUs for infrastructure, while at the same time providing very lightweight, low-power compute at the edge. So stay curious; from a startup community point of view, from a research point of view, from an academic point of view, look for really interesting problems and challenges, and deliver the infrastructure that you need, because ultimately it is the applications that are going to change society and life. That’s all. Thank you very much.
Thank you. Thank you.
This discussion focused on building AI readiness and capabilities, featuring speakers from AMD and the Indian government’s MeitY Startup Hub at an AI summit in India.
Thomas Zacharia: Senior Vice President for Strategic Technical Partnerships and Public Policy at AMD; previously led Oak Ridge National Laboratory, where he oversaw deployment…
Event“Zacharia introduced the U.S. Department of Energy’s Genesis Initiative, a public‑private programme launched under the Trump administration to accelerate scientific discovery, energy research and national security through AI.”
The knowledge base confirms the existence of a DOE‑led Genesis Mission that mobilises all 17 DOE national laboratories and partners such as Google DeepMind, but it does not specify that the programme was launched under the Trump administration; the launch timing is not detailed in the sources.
“The DOE’s three pillars—discovery science, energy, and national security—are supported by a network of 17 national labs, such as Oak Ridge, whose historic role in the Manhattan Project underscores the agency’s dual focus on energy and broader scientific outcomes.”
The knowledge base notes that the Genesis Mission involves 17 DOE national laboratories [S11] and that Oak Ridge National Laboratory played a major role in the Manhattan Project, receiving about 65% of its funding [S5], confirming both the lab count and Oak Ridge’s historic significance.
“Any federated compute and data platform must be built security‑by‑design, incorporating confidential‑computing capabilities to protect sensitive research and national‑security workloads.”
Sources highlight the importance of secure-by-design ICT procurement and note existing gaps in security-standard implementation [S87], and they also reference confidential-computing features in new hardware offerings such as Fujitsu’s servers [S89], providing additional context for the security-by-design claim.
The speakers converge on four main themes: (1) an open, standards‑based ecosystem; (2) the pivotal role of startups as AI natives; (3) the necessity of sovereign, public‑private AI infrastructure; and (4) the requirement for a diversified hardware stack beyond GPUs, especially for edge deployments.
High consensus across technical, policy and economic dimensions, indicating a shared vision that AI readiness depends on openness, inclusive innovation ecosystems, coordinated national strategies, and hardware diversity. This broad alignment strengthens the case for collaborative initiatives that combine government policy, industry resources, and startup agility to accelerate AI adoption.
The discussion reveals several points of tension: the appropriate scale and deployment model for AI infrastructure (centralized national clouds vs decentralized startup‑focused compute and edge accelerators), the balance between strict human‑in‑the‑loop governance and rapid open‑source deployment, and differing emphases on hardware priorities (GPUs vs specialized edge accelerators). While participants converge on openness, the importance of startups, and the need for sovereign AI frameworks, they diverge on how best to achieve these goals.
Moderate – disagreements are strategic rather than ideological, focusing on implementation pathways. They suggest that policy makers must reconcile large‑scale national investments with mechanisms that empower startups and edge deployments, and must embed governance safeguards without stifling the speed of innovation.
The discussion was driven by a series of pivotal comments that repeatedly broadened the scope from a narrow GPU‑centric view to a holistic AI‑readiness ecosystem. Thomas Zacharia’s opening remarks and the Genesis Initiative framing anchored the conversation in national‑level strategy, while his points on government‑driven innovation and governance introduced policy and ethical dimensions. Paneerselvam’s emphasis on startups added a socio‑economic layer, and Timothy’s focus on the rapid post‑ChatGPT shift and day‑zero support supplied concrete, actionable examples of an open, developer‑friendly ecosystem. Gilles Garcia’s edge‑AI insight further diversified the technical narrative, prompting a final call from Thomas to stay curious and balance data‑center power with edge efficiency. Collectively, these comments redirected the dialogue multiple times, deepened analysis, and aligned participants around the need for open standards, public‑private collaboration, and inclusive growth across hardware, software, and societal dimensions.
Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

