Building the AI-Ready Future From Infrastructure to Skills

20 Feb 2026 11:00h - 12:00h


Session at a glance

Summary, keypoints, and speakers overview

Summary

The panel opened with Thomas Zacharia framing the session as a discussion on “building AI readiness from compute to capability,” stressing that AI extends far beyond GPUs and includes the full stack from PCs to edge devices [6-9][11-12]. He introduced the U.S. Department of Energy’s Genesis Initiative, noting that the United States spends roughly a trillion dollars annually on R&D, of which 20-30% is government-funded, and that the program aims to use AI to accelerate scientific discovery, energy, and national security research [18-22][23-30]. Zacharia explained that the initiative seeks to federate compute and data across national labs, cloud-enabled labs, and public-private partnerships while embedding security, governance, and composable standards into the infrastructure [34-38][46-48].


He highlighted AMD’s contribution through the American Science Cloud, which will run on an MI355 cluster, and pointed to the Helios rack delivering 2.9 exaflops of AI compute at 220 kW, illustrating the company’s push for high-performance, energy-efficient hardware [48-54][92-94]. Zacharia also stressed the importance of open ecosystems, open-source software, and open standards to enable startups and innovators to build on AMD hardware without vendor lock-in [70-73][76-81].


Paneerselvam M of the MeitY Startup Hub described India’s sovereign AI strategy as a layered effort that requires clear intent, curiosity, and implementation, and he positioned startups as “AI natives” that can improve the nation’s AI readiness quotient for SMEs and larger enterprises [106-108][110-112][113-114]. He reiterated the need for a human-in-the-loop approach while acknowledging that the balance may evolve as agentic AI matures [106][108].


Timothy Robson shifted to the software perspective, noting that AMD’s early supercomputing work, such as the Finnish LUMI system with 12,000 GPUs, enabled multilingual LLM training before ChatGPT’s release [135-144][138-143]. He argued that open-source frameworks like PyTorch and the emerging Triton compiler allow developers to run models on any hardware, supporting AMD’s “day-zero” support for new models such as Qwen3 Coder and DeepSeek without additional integration effort [156-162][204-211]. Robson also promoted the AMD Developer Cloud, which offers free compute hours, pre-built Docker containers, and accelerator-cloud programs to help startups move from proof-of-concept to production while keeping total cost of ownership low [187-196][200-203].


Gilles Garcia added that AI is moving to the edge and “physical AI” for robotics, autonomous networks, and industrial applications, requiring specialized accelerators that are smaller and more power-efficient than traditional GPUs [230-238][240-244]. He cited the Gene01 humanoid built on AMD technology as an example of how compact accelerators can enable real-time perception and actuation without cloud dependence [239-241].


In his closing remarks, Zacharia urged participants to stay curious, to explore both high-performance and low-power AI solutions, and to collaborate across academia, startups, and industry to drive societal change [247-250].


Keypoints


Major discussion points


AMD’s holistic “compute-to-capability” roadmap for sovereign AI – Thomas Zacharia outlines a vision that goes beyond GPUs to a full stack of AI hardware, software and cloud services, emphasizing public-private partnerships such as the U.S. Department of Energy’s Genesis Initiative and the American Science Cloud, and showcasing AMD’s exascale achievements (e.g., the MI355-based cluster and the 2.9 exaflop Helios rack) [6-12][15-21][34-38][46-48][53-56][61-66][70-73][84-94][95-99].


Government-driven AI research and national security priorities – The talk stresses that AI acceleration is a strategic national function, linking DOE’s three pillars (discovery science, energy, national security) with the need to federate compute and data across labs, academia and industry, and to embed security-by-design in public-private collaborations [16-23][27-33][40-48][49-52].


Start-ups as the engine of AI readiness and implementation in India – Paneerselvam M highlights the critical role of AI-native start-ups in translating sovereign AI strategies into tangible value for SMEs, stressing curiosity, clear intent, and the need to broaden AI benefits beyond large corporates [106-112][113-114].


Open-source software ecosystem and “day-zero” support as the enabler of AI adoption – Timothy Robson stresses that success now hinges on open, vendor-agnostic software stacks (PyTorch, JAX, Triton) and AMD’s developer resources (Developer Cloud, Docker containers, day-zero model support) that let users run new models on AMD hardware without lock-in [124-131][152-159][206-218][221-229].


Physical AI and edge computing for industry and robotics – Gilles Garcia points to the shift of AI workloads from data-center GPUs to low-power, edge-optimized accelerators for robotics, autonomous vehicles, and industrial systems, underscoring AMD’s “AI anywhere” strategy and the need for dedicated, reliable hardware at the far edge [230-239][242-245].


Overall purpose / goal of the discussion


The session aimed to present and promote a comprehensive, government-backed AI ecosystem, spanning sovereign research, high-performance compute, open-source software, and start-up innovation, to accelerate scientific discovery, national security, and economic growth, while forging a collaborative partnership between AMD, the Indian government (MeitY), and the broader AI community.


Overall tone


The conversation is consistently upbeat, forward-looking, and collaborative. It begins with a formal congratulatory opening, moves into an enthusiastic technical showcase of AMD’s capabilities, shifts to a policy-focused discussion of sovereign AI, then adopts a pragmatic, supportive tone toward start-ups and ecosystem builders, and concludes with a motivational call to stay curious and explore the emerging opportunities. The tone remains optimistic throughout, with occasional shifts from high-level strategic framing to detailed, hands-on encouragement for developers and entrepreneurs.


Speakers

Thomas Zacharia


– Area of expertise: AI strategy, high-performance computing, supercomputing deployment, national AI readiness


– Role/Title: AMD senior executive (speaker)


Timothy Robson


– Area of expertise: Hardware engineering, software development for AI infrastructure, vendor-agnostic AI frameworks


– Role/Title: Hardware engineer turned software specialist at AMD [S1]


Gilles Garcia


– Area of expertise: Physical AI, edge AI for communications, robotics, and industrial applications


– Role/Title: AMD speaker on physical AI


Paneerselvam M


– Area of expertise: Startup ecosystem, innovation management, government-industry partnerships in India


– Role/Title: CEO, MeitY Startup Hub, Ministry of Electronics and IT, Government of India [S4][S5]


Moderator


– Area of expertise:


– Role/Title: Conference moderator


Additional speakers:


(none identified beyond the listed speakers)


Full session report

Comprehensive analysis and detailed insights

The session opened with Thomas Zacharia congratulating the audience on behalf of AMD’s 30,000 employees worldwide, including the 10,000 based in India, and outlining the panel’s purpose: “building AI readiness from compute to capability” [1-6]. He warned against equating AI solely with GPUs, emphasizing that AI spans a full stack from AI-enabled PCs through core data-centre infrastructure to edge deployments [7-10]. Zacharia announced that he would address the sovereign side of AI while his colleague Timothy Robson would cover the enterprise perspective [12-14].


Zacharia introduced the U.S. Department of Energy’s Genesis Initiative, a public-private programme launched under the Trump administration to accelerate scientific discovery, energy research and national security through AI [15-23]. He noted that the United States spends roughly a trillion dollars a year on R&D, 20-30% of which is government-funded, and that the return on investment is diminishing unless AI bridges the gap between hypothesis and outcome [18-22]. The DOE’s three pillars (discovery science, energy, and national security) are supported by a network of 17 national labs, such as Oak Ridge, whose historic role in the Manhattan Project underscores the agency’s dual focus on energy and broader scientific outcomes [23-30][31-33]. Zacharia argued that modern research must shift from hypothesis-driven experiments to rapid AI-augmented analysis, thereby reducing cost and time while enhancing global collaboration [34-40].


He also stressed that any federated compute and data platform must be built secure by design, incorporating confidential-computing capabilities to protect sensitive research and national-security workloads [251], and called for composable standards alongside security-by-design to enable trustworthy public-private partnerships [252].


In line with this vision, AMD is contributing the American Science Cloud, a cloud-enabled research platform built on an AMD MI355 GPU accelerator cluster; separately, AMD showcased the Helios rack delivering 2.9 exaflops of FP4 AI compute at 220 kW [48-49][92-94]. Zacharia recalled his three-decade career in supercomputing, including deployments of world-leading systems such as 30,000 NVIDIA GPUs at a time when CUDA was still a novelty [50-56]. He stressed that ambitious, energy-efficient projects are possible because governments fund risky, large-scale hardware development [53-56][86-88].
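The compute and power figures quoted above lend themselves to a quick sanity check. The sketch below is illustrative only: the FP4 figure for the Helios rack and the FP64 figure behind the original exascale goal measure different numerical precisions and are not directly comparable; it simply works through the implied efficiencies and the “300 racks, then another 3x” zetta-scale arithmetic mentioned later in the session.

```python
# Back-of-envelope check of the compute/power figures quoted in the session.
# Illustrative only: FP4 AI throughput and FP64 HPC throughput are different
# precisions and cannot be compared one-to-one.

helios_flops = 2.9e18   # 2.9 exaflops (FP4) per Helios rack, per the talk
helios_watts = 220e3    # 220 kW per rack, per the talk

# Implied efficiency of the rack
rack_eff = helios_flops / helios_watts  # FLOP/s per watt
print(f"Helios rack: {rack_eff / 1e12:.1f} TFLOPS/W (FP4)")

# The 2007 exascale goal: 1 exaflop (FP64) in under 20 megawatts
exa_eff = 1e18 / 20e6
print(f"Exascale goal: {exa_eff / 1e9:.0f} GFLOPS/W (FP64)")

# Zetta-scale sketch from the talk: ~300 racks, then "another 3x"
zetta = helios_flops * 300 * 3
print(f"300 racks x 3: {zetta / 1e21:.2f} zettaflops (FP4)")
```

Run as-is, this prints roughly 13.2 TFLOPS/W for the rack, 50 GFLOPS/W for the original exascale goal, and about 2.6 zettaflops for the 300-rack-times-3 scenario, consistent with the “zetta scale” remark in the transcript.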


Beyond hardware, Zacharia underscored the necessity of an open ecosystem and robust governance. He advocated for open-source standards and composable infrastructure that prevent vendor lock-in, noting AMD’s commitment to open hardware and software standards that enable innovators to build on any part of the stack [70-73][76-81]. Governance, he clarified, does not equate to regulation; it requires a human-in-the-loop to validate autonomous AI outputs before they are acted upon, safeguarding scientific integrity and national-security concerns [62-68]. He described an autonomous loop comprising roughly 100,000 GPUs powering 100,000 agents, illustrating the scale of the envisioned compute fabric [253].
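The human-in-the-loop pattern Zacharia describes, an inner autonomous loop that proposes results at machine speed and an outer loop in which a human validates them before anything is committed, can be sketched in a few lines. This is a hypothetical illustration, not AMD’s implementation; the `ReviewGate` class and its method names are invented for the example.

```python
# Minimal sketch (hypothetical) of the human-in-the-loop governance pattern
# described in the session: agents may propose results autonomously, but
# nothing is committed until a reviewer approves it.

from dataclasses import dataclass, field

@dataclass
class ReviewGate:
    pending: list = field(default_factory=list)    # agent proposals awaiting review
    committed: list = field(default_factory=list)  # validated, actionable outcomes

    def propose(self, agent_id: str, result: str) -> None:
        # Inner "autonomous loop": agents run at machine speed and queue results.
        self.pending.append((agent_id, result))

    def review(self, approve) -> None:
        # Outer loop: a human validates each result before it is acted upon.
        still_pending = []
        for agent_id, result in self.pending:
            if approve(agent_id, result):
                self.committed.append((agent_id, result))
            else:
                still_pending.append((agent_id, result))
        self.pending = still_pending

gate = ReviewGate()
gate.propose("agent-1", "candidate alloy composition A")
gate.propose("agent-2", "candidate alloy composition B")
# The "professor" in Zacharia's analogy: only composition A passes review.
gate.review(lambda agent_id, result: result.endswith("A"))
print(len(gate.committed), len(gate.pending))  # 1 committed, 1 still pending
```

The design point is that the autonomous loop never writes to the committed record directly; every outcome passes through the review step, mirroring Zacharia’s professor-and-students analogy.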


Paneerselvam M, CEO of the MeitY Startup Hub, presented India’s sovereign AI strategy as a layered, five-tier architecture driven by clear intent, curiosity and concrete implementation [106-108]. He highlighted the summit’s massive response, 267,000 registrations in five days, as evidence of nationwide curiosity and the desire to embed AI across health, education, skilling and other government functions [110-112]. Paneerselvam positioned start-ups as “AI-natives” that can raise the AI-readiness quotient of SMEs, arguing that the government’s role is to provide an enabling environment so AI benefits are not confined to large corporates [113-114].


Timothy Robson shifted focus to software, observing that the launch of ChatGPT on 30 November 2022 dramatically accelerated AI adoption and made an open ecosystem indispensable [115-119][123-124]. He recounted AMD’s early involvement in the Finnish LUMI supercomputer, which used 12,000 GPUs to train multilingual large-language models before ChatGPT existed, demonstrating that large-scale, multilingual AI can be built with public-sector foresight [135-144][138-143]. Robson stressed that open-source frameworks such as PyTorch, JAX and the Triton compiler allow developers to run models on any hardware [156-159][220-221], and defined day-zero support as “code runs on AMD out of the box, guaranteed and optimized” [254]. Examples of day-zero support include Qwen3 Coder, DeepSeek, and Baidu’s Paddle models [255][256]. This support is delivered through AMD’s SG9 runtime, which provides full compatibility with models such as DeepSeek [257]. To lower barriers for start-ups, Robson promoted the AMD Developer Cloud, offering 50-100 free GPU hours, pre-built Docker containers and a seamless path from proof-of-concept to production while maintaining a low total cost of ownership [187-196][197-203]. He distinguished “Neo clouds” (smaller, nimble providers) from hyperscalers, noting their relevance for Indian start-ups seeking flexible, low-cost compute [258].


Gilles Garcia broadened the discussion to “physical AI” at the far edge, arguing that AI workloads are moving from data-centre GPUs to specialised, low-power accelerators embedded in robots, autonomous vehicles and industrial plants [230-238]. He cited the Gene01 humanoid, the first robot built on AMD technology, as proof that edge AI can deliver real-time perception, touch and actuation without reliance on cloud connectivity [239-241]. Garcia suggested that India’s burgeoning start-up ecosystem is well-placed to adopt these edge solutions, leveraging AMD’s portfolio of compact accelerators that combine high performance with minimal power consumption [242-245].


Across the panel, several points of agreement emerged. All speakers championed an open ecosystem and open standards as essential to avoid vendor lock-in and to foster rapid innovation [70-73][124-128][156-158][235-236][239-241]. They also concurred that start-ups, being AI-native, are pivotal for scaling AI adoption and improving the AI-readiness quotient of SMEs [69-71][106-108][178-186][187-196]. Finally, both Zacharia and Paneerselvam affirmed the need for sovereign, public-private AI infrastructures that integrate government, academia and industry to serve national priorities [16-48][106-113].


Nevertheless, nuanced disagreements surfaced. Zacharia’s vision of a centrally federated national AI cloud (the American Science Cloud) contrasts with Robson’s promotion of lightweight, developer-focused cloud resources for start-ups, and with Garcia’s emphasis on decentralised edge accelerators [16-48][187-196][230-235]. A second tension arose between Zacharia’s insistence on human-in-the-loop governance for autonomous agents [62-65] and Robson’s focus on rapid, open-source deployment that does not foreground mandatory oversight [210-218]. Finally, while Zacharia warned against over-indexing AI on GPUs [7-10], Garcia advocated for specialised, non-GPU edge accelerators, highlighting differing hardware investment priorities [7-10][230-235].


Key take-aways: (i) sovereign AI requires public-private partnerships such as the DOE Genesis Initiative and India’s five-layer model to federate compute, data and secure cloud-enabled labs; (ii) start-ups are essential “AI-natives” that can raise the AI-readiness quotient of SMEs; (iii) AMD is delivering low-cost, ready-to-use compute (Helios rack, Developer Cloud, free GPU hours) and day-zero model support to accelerate adoption; (iv) an open, vendor-agnostic software stack (PyTorch, JAX, Triton, Primus) is critical to avoid lock-in [92-94][262]; (v) governance must retain a human-in-the-loop to ensure safe, responsible AI; and (vi) physical AI at the edge demands specialised, low-power accelerators such as those showcased in the Gene01 humanoid.


Action items: AMD will continue to supply high-performance and edge-optimised hardware, maintain open-source toolchains (including the Primus ecosystem) and day-zero support for emerging Indian-language models, and expand the Developer Cloud for start-ups; the MeitY Startup Hub will deepen its partnership with AMD to accelerate AI uptake among Indian SMEs; both parties will advocate for policies that blend large-scale national compute investments with inclusive, low-cost resources for innovators. Unresolved issues remain around concrete mechanisms for federating compute across labs, detailed governance frameworks for autonomous agents, timelines for India’s sovereign AI architecture, and funding models for large-scale public-private initiatives.


In closing, the speakers collectively emphasized that building a balanced AI ecosystem, spanning compute, software and edge and underpinned by open standards, security-by-design, and inclusive access, is essential to realise the transformative potential of AI for society and industry [247-250].


Session transcript

Complete transcript of the session
Thomas Zacharia

So congratulations to all of you. You should be proud. And I just want to say that on behalf of the 30,000 AMDers worldwide, and particularly 10,000 in India, I just want to congratulate you and thank you for this opportunity to have this discussion. Since we are a small group, I think we’ll keep it informal. And I want to make sure that somebody please keep track of time so that I do justice to my colleagues here and the dais. The topic that I’ve been asked to talk about is sort of building AI readiness from compute to capability. In AI, in the field of AI these days, there seems to be an over-indexing of AI and GPUs. When in reality, AI is much broader.

GPU is obviously a significant part. It’s a part of the core infrastructure. But what we do at AMD is to really provide a full suite of AI capability from AI on AI PCs to core infrastructure to all the way out to the edge. And I have my colleague Tim from AMD, so we decided that we’re going to tag team. And so I’m going to focus perhaps a little bit on the sovereign side, and then Tim can focus on the enterprise side. That’s okay with you. So let’s just talk about sovereign AI in practice and exploring the motivators. So this particular slide is something that is created by the Department of Energy in the United States as part of a new initiative that was kicked off by the Trump administration called the Genesis Initiative.

and I had a role to play in terms of trying to support and address in crafting this initiative, and the framing is very simple. If you look at the top line, I don’t know whether this has a pointer, it’s okay. Okay, so the top line, the white line is funding in the United States for R&D. Today, the United States spends about a trillion dollars a year in R&D. That’s just my involvement. Not all of that is government spending. It’s roughly about, say, 20 to 30% U.S. government and the rest industry. The bottom line is what we consider research output efficiencies. So the problems are getting harder. It is getting more challenging, and even though we are trying to tackle really important problems, the sense is that throwing money is not having the same rate of return, and this slide basically asks the question: how do we reduce the gap? At least the thesis for the Genesis mission is that you can use AI as a way to accelerate scientific discovery. The Genesis mission has three areas of importance. For people who don’t know about the US Department of Energy, the US Department of Energy is the nation’s largest physical science agency. It operates through 17 national labs, and some of the earliest ones, like Oak Ridge National Laboratory, which I used to lead before joining AMD, came to being during the Manhattan Project.

And Manhattan Project, about 65% of the entire funding of Manhattan Project was at Oak Ridge National Laboratory. And in addition to, you know, in fact, I think the Prime Minister mentioned that, about nuclear energy, both the destructive aspect as well as the significant outcomes that came out of that, from nuclear medicine to nuclear navy to nuclear energy. These all came, you can trace back to Manhattan Project. So U.S. Department of Energy is not only responsible for energy, but it’s really a science organization. It’s got three priorities. One is just discovery science. The second is energy. And the third is national security. America has a really interesting thing, a way of keeping the nuclear arsenal away from the military, in the sense that it is the U.S.

Department of Energy and not U.S. Department of Defense or Department of War that is responsible for the nuclear arsenal. And the three lab directors, Los Alamos, Livermore, and Sandia, have to certify each year that the arsenal is ready for the President of the United States. So this is a piece of the hypothesis. If you think about research, you can look at the left side. It starts with hypothesis, then you conduct experiments, get the data. And today, you take the data, use AI, machine learning, et cetera, you get analysis. What you’re trying to do is to make this much faster so that you can have science outcomes coming out. That is, do it at a reduced cost, because you cannot throw more and more money at this problem, and enhance global collaboration.

I think there is a genuine interest on the part of the U.S. that this whole premise is not just a U.S. issue. And so I think that it’s likely announcements that suggest that countries like Japan and Europe and UK and others may be part of this overall approach to drive sovereign AI for those aspects of AI deployment and scaling that is uniquely a government or state function. So as I mentioned broadly, scientific discovery, energy and national security. But if you look at the scientific discovery further to the next step, then you will see healthcare, education, skilling, all these things. Fundamentally, a government function. And this is not an easy task, because if you think about how these institutions’ research is done, I mentioned a large fraction of it is in the private sector, a lot of it is done in academia funded by government, and then of course in national labs in the United States; India has its own set of national labs, academia, etc.

So what you need to do is take a look to see how do you integrate all this data? At least the U.S. Department of Energy operates these large, multi-billion dollar light sources, neutron sources, specialized scientific experiments. You need to be able to incorporate all these things, so you have to federate the compute and data, you have to have cloud-enabled lab operations, which is not how things are done today. Security and governance by design, especially when you’re thinking about public-private partnerships: even in the enterprise commercial scale, you want to make sure that you have secure computing, you have confidential computing, you can maintain integrity. But also, if you think about national security, you have an additional layer, and then you want composable standards versus infrastructure.

So this particular program was kicked off by Secretary Wright, well, the President of the United States and then Secretary Wright, in the fourth quarter of last year, and the first announcement was done with Lisa Su, our CEO, because one of the things that they wanted to do was a unique public-private partnership. And so the core infrastructure, which is currently called American Science Cloud, this program is just being stood up, is going to be run on an MI355 cluster, which is what this entire program that is aimed at driving innovation is going to be run on. And so we are really excited to be a part of this, initially US and soon an international effort to drive innovation in those areas that are uniquely government function.

I’ve had a ringside seat in computing for the last 30 years and been responsible for a lot of supercomputing deployment, a dozen or so. The last four or five of them were number one systems in the top 500, each first of a kind. This is another important thing. Innovation, if you think about AI, AI didn’t happen magically with NVIDIA or AMD. It happened because US government took the risk to invest in first of a kind systems. So we were the first to deploy 30,000 NVIDIA GPUs when people thought that CUDA was a four-letter word. Now everybody thinks that this is this amazing software, but change comes hard to people. And so I just want… I want you to know that…

particularly all of you who are youngsters, things are going to evolve. If you think that AI, just like the Prime Minister said, it’s just the early stages. So you have to be open and you have to be part of this drive for effective, scalable and impactful AI. Then deep learning came, with this mixed-precision computation, then generative AI, and last year was really agentic AI, and some of us think that this year we’re going to focus increasingly on governance. Governance does not mean regulation; there is a role for regulation by governments, but governance is that, if you want to have agentic systems driving, accelerating innovation, you want to make sure that the output has a person in the loop.

The one way to simply think about it is that if you are researchers here, if you have a professor who’s got a dozen students who are doing research, you don’t let the students just go publish things. There is the professor’s responsibility, there is the peer review committee, etc. So you want that human in the loop before you can update and let this thing drive innovation, while it also allows it to do things that AI does best. So this is how we think about compute to capability, a model of national AI readiness. We want it to rest on talent, talent and readiness of talent, giving people access to compute and models. Research enablement is key, because you want people to operate AI in an environment where you’re questioning things and innovating all the time, as opposed to assuming that what we in the industry are providing you is the only solution.

So I think… If you look at countries that are leading in AI, there is a very strong R&D and innovation foundation that is allowing you to lead, because there are people who are questioning every time somebody says something, to make sure that it is validated, it’s continuing to innovate. Start-ups and innovation labs, because you want to take these ideas and start new companies, because many of these new innovations and new technologies may be led by people with new ideas and opportunities, and of course ultimately enterprise and public sector adoption. We strongly believe, and again I heard Prime Minister Modi say, open ecosystem and open source, open platforms. These things, if you think about iOS and Android, India I find has a lot of penetration of Android systems, because inherently open systems allow you to innovate without getting locked into vendors.

And so we at AMD have a commitment to make both our hardware infrastructure and our software infrastructure based on open standards so that you can innovate around any part of this infrastructure, and can be part of a new startup or new company adding to that. That is also an important way for India to become part of the supply chain and the semiconductor ecosystem, because you don’t have to start with an attempt to go in for two or three nanometers. You can actually do amazing work and be part of leading-edge technology at different form factors. So I mentioned a little bit about how we think about agentic flows and how AI can work. This is simply the way you think about it.

The inner loop is an autonomous loop where AI and agentic AI does things, what it can do fast, it can operate. If you have 100,000 GPUs, you have 100,000 agents tackling this problem, and it can actually go through the hypothesis-driven experiments and systems. So you can do simulation, campaign scale coordination, machine speed execution, etc. But we do not allow it to update the outcome until a human in the loop has had the opportunity to validate, to make sure that we don’t have unintended consequences. Now, how do you build this thing? So this is, if you haven’t gone to the AMD booth, I would encourage you to do so. This is my only plug in this presentation. We spent a ton of money to bring this Helios rack here just so that you can have a sense of, not what this particular rack can do, but a glimpse of what is possible the next year and the year after.

So we, in 2007, myself and two of my colleagues started what is called the Exascale program. And the challenge was to deliver an Exascale system for under 20 megawatts. Because if you had just scaled the capability in 2007, it would have taken three to four gigawatts. And we knew that the government was not going to sign $4 billion for power, just electricity alone, to run the computer. So we were motivated to drive that. And we delivered that first exascale system, Frontier at Oak Ridge, for less than 20 megawatts. Everybody thought it was crazy, it cannot be done, but there are some things, when you put audacious goals, people rally around and then deliver. This particular rack, in one rack, there are 72 GPUs that will deliver 2.9 exaflops of AI compute, which is FP4, not FP64, just to be very clear.

But for AI capability, you get 2.9 exaflops of compute capability for 220 kilowatts. Right? That, even for somebody who’s been in this field for a long time, it’s just mind-blowing. This is where we are headed. AI is the fastest adoption of any technology that humanity has introduced. We’ve gone from 1 million active users to 1 billion in a matter of just a couple of years, and we are headed to 5 billion users. So there is a lot of opportunity to innovate in this field, and all of us are going to continue to create these opportunities. As Lisa said, we are entering the Yara scale, so already people are thinking about the next 1000. So let me just say, you can get to Zeta scale by just taking 300 of those racks and putting them together, and then it’s another 3x. So I would say in the next 10 years maybe we would be at this 10,000 factor. So the kind of problems that you are thinking about should not be constrained by what you can do today; by the time you figure out the solution for an important problem, compute will be there.

That is what we in the industry like to promise you. And I think, on advancing national economies, you would be forgiven if you thought, does AMD do these things, and how prevalent are our compute capabilities? I think Tim is going to tell you that our GPUs and our systems are in every hyperscaler globally, and when it comes to HPC and national priority missions, AMD is the leader. If you listen to President Macron, he referenced Alice Recoque, which is the first AI factory that the French government announced, the CEA announced, which is based on AMD MI430X, which is a variant of the MI450 on the right that you see outside.

I will close by saying that a shared path forward is really what we are looking for. I know India is in the early stages and we are really delighted to actually have this conversation. Thank you very much.

Moderator

I’d like to invite our next speaker, Paneerselvam M, CEO of the MeitY Startup Hub at the Ministry of Electronics and IT, Government of India. Dr. Paneerselvam M is a distinguished leader with over two decades of expertise in innovation, management, strategic growth and market development. He’s been instrumental in advancing India’s startup ecosystem and fostering impactful partnerships between the government, industry and entrepreneurs. In his

Paneerselvam M

drawing insights out of this data, and then comes the interface layer where most of it is going to be really driven by agents, by agentic AI, and of course, as Thomas mentioned, there is always going to be a human-in-the-loop perspective, but as we progress this is going to change as well. So you know the two fundamental things that I want to share: one is the entire transformation in the readiness space for AI is an opportunity for, you know, certain change, and intent needs to be very, very clear, and then comes the curiosity to learn about this a little bit more for each business owner, and then comes the implementation part of it. And start-ups have a very, very critical role to facilitate this, because you are coming in almost as AI natives, working with an understanding of this, and you can really go out and demonstrate value.

and help implement the entire readiness, improving the readiness quotient for small and medium enterprises, and ensure that this is a broad-based growth opportunity for businesses across the country, not limited to just a few of them. So there is huge potential, and I think enough has been spoken. The summit itself is proof of that curiosity: we have had 267,000 registrations in the last five days. The response was unexpected and overwhelming, to the point that we could hardly handle it. At the same time, it gives us immense pride and excitement to see the curiosity among the youngsters in India, across India.

They travel here from the length and breadth of the country to understand what AI is going to be, how it is going to impact them, and what the opportunities are, and that in itself is a fantastic starting point. As I said, a lot is happening: Indian sovereign models are coming, and all five layers, from the infrastructure to the design, are being worked on in the Indian context. We are ready as a nation, and ready as a government, to facilitate this truly disruptive, transformative technology. But it is also important that it permeates all layers of society, not limiting itself to large corporates but reaching small and medium enterprises too. It has already reached the large enterprises, and it has penetrated well into D2C and individual users; it is much, much beyond the ChatGPTs of the world.

So with that, I once again take the opportunity to thank the entire team from AMD. We have had some interesting conversations, and I look forward to continued partnership between AMD and the MeitY Startup Hub because, in our view, corporates have a huge role to play in the success of startups. Thank you.

Timothy Robson

Thank you. There are a couple of things I want you to think about as I go through my talk. 30th of November, 2022: the world changed. ChatGPT was launched. And I’m willing to bet that for everyone in this room, myself included, what we thought we knew about AI changed. Two years ago, what we thought, even after ChatGPT, changed. A year ago, what we thought changed. And what I’m hoping is that as you leave here, other things will have changed in the last 45 minutes of listening to these talks. I’m going to skip through the reasons why we need compute, but one thing that is very, very important is that things are moving so fast.

And things are moving in a way that we cannot predict, so the only way anybody is going to be successful is with an open ecosystem. The gentlemen before me have alluded to this as well, and I’m going to take you through it specifically around software. Everything to do with AI today is really software. I’m a hardware guy, I used to design chips, but everything today is software. I was talking to one of my colleagues and said, okay, I’m going to India, I’m going to go through all of this. And I asked, are they really going to understand it?

And he said, Tim, India is software. This is what we do. He said, you’re going to be in front of the best people in the world, people who will understand what you want to talk about. So I’m really going to focus on the software side. And one of the things I wanted to do, knowing that we had our esteemed colleague from MeitY here, is highlight that we have a lot of experience in this space, in particular some work we did with LUMI in Finland. Why is this important? Within Europe, almost all the languages are Indo-European: if you know a little bit of Greek, a little bit of Latin, a little bit of one of the languages, you can manage. There are 27 countries in Europe.

So let’s call it 27 languages. And then you have Finland. Finnish is a Uralic language with nothing to do with any other language in Europe: an absolutely different construct, a different base, different absolutely everything. What we found working with the people in Finland, who came to us after putting in the LUMI supercomputer, was this problem: here is a small country in Europe with 5 million native speakers, and all of the training work out there has been done on big corpora in English, Spanish, Hindi. Suddenly you have a language of 5 million people; how do you get that language into your LLM so that it becomes useful? Now, I’m probably going to get the pronunciation really wrong here, but I did use ChatGPT to look at the 22 Indian languages. If we look at Bodo, Konkani, Dogri, Sindhi, Nepali, there are fewer than 5 million people speaking each of those languages. So how do you get an Indian LLM that caters for everybody, AI for all, as we’ve heard from President Modi? This is the kind of area where, with MeitY, we would like to work with you and bring the benefit of the work we’ve been able to do. And remember that first date, the 30th of November 2022. This machine was inaugurated, all of the systems put together and brought up, with chips made years before, on the 13th of June 2022, my birthday, six months before ChatGPT came out. So this machine, with 12,000 GPUs, built with the foresight of the Finnish government, was using AMD technology to run AI before ChatGPT came out.

So a lot of people think that a lot of AI has come from one specific place. Again, change your way of thinking: we were there, and we have the capability. We actually did the BLOOM 176-billion-parameter model, an open model made for European languages. So again, we would love to bring this knowledge to the Indian ecosystem and make this successful for everybody. I’m not going to spend a lot of time on hyperscalers. They’re obviously an important part of the market; it’s where a lot of the capability goes. We’re there, with tens of thousands of GPUs. And as Thomas mentioned, we have the Helios system coming here.

Please go and take a look at it. If you like hardware, it’s an interesting piece of kit. But really, the idea here is that whether you’re in a hyperscaler or in any other area, there is the ability to have a wider ecosystem. And again, on inference: there was an idea in the market that AMD was inference-only. That dates from Q1 2024; it’s two years old. We have to change that thinking. We now have, again completely open source, Primus, an open-source training framework that enables you to do all of the training you need for your Indic languages or your use cases.

Enterprise AI. This one I think is an interesting one. I know when I started going out to customers and going out to enterprise customers, the difference in customer knowledge on what AI was, was amazing. You go into one customer and they say, okay, so this is our use case and we’re seeing these kinds of sizes of matrices, so we’re doing these optimizations. And then you go into another customer and you say, what are you doing around AI? And the guy goes, oh yeah, we’re doing Gen AI. Okay, great, yeah, what are you doing with Gen AI? We’re using LLMs. Okay, great, so using LLMs, what do you think? LLMs. And they had no idea, right?

It’s just, we have to do something with AI. That has changed over the last 18 months. A chatbot was something most people understood: okay, that makes sense, we can fine-tune the model, we can do an internal AI system within the company. And now, with agentic workflows, we’re starting to see this entire plethora of different use cases coming through. So how do you take it from a research institute, from the people who actually get onto your accelerator, whether that’s a GPU or a TPU or an FPGA or whatever else, to a stage where people within a corporation can actually use it? This is something that has been understood.

And again: no lock-in, open, everything here can be used without tying you into one particular vendor. I’ll come back to this a little later. It’s also something that has really impressed me about the infrastructure MeitY has put into place: through the public-private partnership, you have GPUs, TPUs, Inferentia, all of the different types of accelerators available to you within the Indian ecosystem. The idea is that whatever the ecosystem, whatever the compute you’re using, whether in the cloud or on-prem, you are able to give the employees within your enterprise the ability to use that AI assistant or tool.

Neoclouds are what we call the smaller clouds. They’re not the hyperscalers; they’re a little more nimble, a little more open to doing things differently. A lot of them offer bare metal and managed Kubernetes services, but they are also moving toward being API providers, token factories. They can provide you with compute quickly, easily, and at reasonable pricing, for whatever it is you’re trying to do. We find these are the first movers in the market, and in the same way that we’re integrated with the hyperscalers, we have these relationships with the neoclouds; we’re working with quite a few of them here in India as well, to make that available for you. So the compute is out there: please go and understand the benefits and trade-offs between the different services and get the right solution for you.

Now, I’m assuming that most people here are going to be startups. And a startup is an interesting case, right? You know what you want to do; you are laser-focused on getting your MVP out there, getting in front of customers, generating some value, some revenue. Although that seems less and less important these days, as people sometimes get funding even before a product. But one thing you have to be sure of is that the compute and the capabilities you have are adequate for the products you then have to put into production.

And so this is an area where we understand that the proof of concept is very important. I was chatting with the CEO of the MeitY Startup Hub before this, and as he put it: POC to PO, proof of concept to purchase order. You have to be sure you understand the technology and how you can take it to market before you go and invest. We have a couple of different ways we can help here. You could go on right now to the AMD Developer Cloud and get, I think it’s 50 or 100 hours of free compute, and see how AMD works for you. It’s always going to depend on your use case and what you’re trying to do.

But there is a huge TCO advantage, which of course is important for startups. Get onto the Dev Cloud and get it working. We provide Docker containers with everything put into a single image, so you don’t have to spend your time and energy installing all of the software and putting everything together; we’ve done all of that for you. Pull the Docker image, get your model and your weights off Hugging Face, or use your own model and do something else. Whatever is there in the open-source ecosystem is there, and it’s going to work. Give it a go.
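The container-first workflow described here (pull a prebuilt image, point it at open weights, run) can be sketched as a command assembled in Python. Every name below is hypothetical: the actual image tags and model IDs would come from AMD’s container catalogue and Hugging Face; this only shows the shape of the flow.

```python
import shlex

# Hypothetical names, for illustration only: substitute the real
# prebuilt ROCm inference image and the Hugging Face model you want.
image = "example.org/rocm-inference:latest"
model = "some-org/some-open-model"

# Assemble the "run the container, fetch the weights" step as a single
# docker invocation; the weights cache lives in a named volume so
# repeated runs skip the download.
cmd = [
    "docker", "run", "--rm",
    "-e", f"HF_MODEL={model}",      # tells the container which weights to fetch
    "-v", "hf-cache:/root/.cache",  # reuse downloaded weights across runs
    image,
]
print(shlex.join(cmd))
```

On an actual GPU node you would additionally pass whatever device flags the image documents (for ROCm containers, typically the `/dev/kfd` and `/dev/dri` devices).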

Give it a play. And then from there we can take you into our accelerator cloud, which is a little more hands-on: making sure we understand what you’re doing, helping, guiding, and assisting you in moving forward. And then, of course, we have relationships with the industry, try-and-buys, getting you access to the compute, getting you the right solution at the right kind of price. There is also something I really want to highlight: day-zero support of models. Qwen3 Coder came out last week, with day-zero support on AMD. Baidu came out with one of their Paddle models this week, day-zero support on AMD.

What does day-zero support mean? It means it’s not the first time we’ve seen this code. It runs on AMD. It’s guaranteed. It’s optimized. A lot of people think that to run something in AI you need one specific vendor’s GPU; the whole point of day-zero support is that this is absolutely false. Again, with LUMI, pre-ChatGPT in 2022, we were building LLMs for, effectively, Indic-type languages. So the ability is there: if a new model comes out and you want to run it, test it, and see how it works for you, it is there and runs out of the box. And look at PyTorch in the middle of this stack. If you look at the history of PyTorch, there were lots of signatories to make sure it was available for everybody, and AMD was one of them. It mainly came out of Microsoft and Meta, who did not want to be locked in to a single supplier. With PyTorch you’re writing Python code, not vendor-specific code. It’s an open ecosystem; that’s the whole point. You don’t want to be tied in; that stifles innovation. So PyTorch came out, and it is the basis for 99% of all the customers I talk to.

They’re all writing Python on PyTorch. JAX is coming forward as well. Then there is Triton, a Python-like language specifically for GEMM optimization. If you’re getting to the point where you’re looking at the GEMM sizes coming through from your operations and want to do GEMM-level optimization, Triton enables you to do that at the compiler level, so you can be completely agnostic about the underlying hardware. The ecosystem and the underlying compute get abstracted away, because Triton lets you run on anyone’s hardware; it’s just a compiler backend for each new architecture. And if we look at these models at the bottom here: President Modi this week has announced the first 12 Indian languages.
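The vendor-neutrality Robson describes is visible in ordinary PyTorch code. The sketch below is a minimal illustration, assuming a standard PyTorch install: the same lines run on an NVIDIA CUDA build, an AMD ROCm build (which PyTorch also exposes through the `torch.cuda` API), or plain CPU, with nothing vendor-specific in the source.

```python
import torch

# Pick whichever backend the installed PyTorch build provides.
# On AMD ROCm builds, torch.cuda.is_available() also returns True,
# so the identical code path covers both GPU vendors.
device = "cuda" if torch.cuda.is_available() else "cpu"

# A tiny model and batch; nothing below names a vendor.
model = torch.nn.Linear(16, 4).to(device)
x = torch.randn(8, 16, device=device)

with torch.no_grad():
    y = model(x)

print(y.shape)  # torch.Size([8, 4]) on any backend
```

Triton sits one level down: PyTorch’s compiler stack (`torch.compile`) lowers many operations to Triton kernels on GPU, which is how the same Python source can reach GEMM-level optimization on different architectures.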

I can’t wait to get you there: fully supported, with day-zero support. Just to give you an example: DeepSeek. When DeepSeek came out, they did some things that were a little bit special; multi-head latent attention was new. We had day-zero support for DeepSeek. Why? Because we’re one of the main contributors to SGLang. There’s no tie-in to a particular inference engine here; it’s an open ecosystem. So we were able to come out of the door with better TCO, better performance, better cost, and full support through SGLang for the DeepSeek model, the leader of its time, because of our complete commitment to the open ecosystem. Just to give you an idea: you’re walking out of here in 45 minutes with changed ideas; that’s what we’re going for. I did have two minutes and I now have five; I don’t know who bought me extra time, but I owe you a beer. Okay, so really that’s the end of the pitch.

One thing I would say is we do have a booth here at 5.10. I’m sorry, I’m going to do a little bit of an AMD plug at the end. Do come by and see us. We have some of the neoclouds there, some model creators, vendors, and ecosystem partners. Come see, come change your mind, come see what’s available within an ecosystem with the compute that’s available for you. Okay, thank you.

Gilles Garcia

So first of all, I’m Gilles Garcia. I’m French, so we can talk about LLMs for the French language if you want. I’m based in France, but I cover worldwide, and I focus on physical AI for communications, robotics, and industrial. We have been talking a lot about AI, and most people think AI means GPUs and big cloud. What we are seeing is a big shift, another change, and change management comes first. The change is that AI is moving into the edge, and into the far edge: industrial, robotics, vehicles, as well as the networks. For that you need a different type of beast. GPUs are one aspect of it, but you need profoundly different technology, which AMD has as part of its broad portfolio: devices that can act and react so quickly that there is no time to go back to the cloud.

And so these technologies, which will of course run inference, need to be able to take decisions and act very safely and reliably, without having to rely on the cloud. That is the new change we’re seeing at AMD on physical AI, which will become very important for us: how do we take what we have learned in the cloud and make it available in physical AI? Software is a big thing. Full stack, meaning hardware and software together delivering solutions to the customer, is what AMD is aiming for. As our CEO, Lisa Su, says: it’s AI anywhere, and one size does not fit all.

Meaning that if you want to power a robot, you can put a GPU into it, but it will burn to hell. You need a very dedicated accelerator with a high-quality, open-source software stack that lets the robot perceive, visualize, touch, and act according to its purpose. At CES in early January, Lisa Su brought on stage Gene01, the first humanoid built on AMD technology. That’s just impressive. Everything was done by a startup in Italy to make this humanoid able to sense, visualize, feel when somebody touches it and when it touches something, and act and react very rapidly without having to rely on a centralized source.

So I will not go on longer than that. Physical AI is probably something India will have a lot of opportunity to act on, because GPUs are already there, whereas physical AI is something you will have to create: a lot of things related to medical, autonomous networks, autonomous cars, autonomous plants, industrial. That’s where I think India will start, with all the startups and the capability to use accelerators that are much smaller than GPUs, and this is all available today in the AMD portfolio. So I will stop here, encourage you to come to the AMD booth, and we can continue the discussion. Thank you.

Thomas Zacharia

Well, we gave you a lot of information on AI, in four different accents; I think the French guy probably carries the day. But my one message is: stay curious. As all of us have said, things are going to change, and continue to change, at a rapid pace. People talk about so many thousands of GPUs, but that will not be the main thing. You will find there is a whole lot of interest in providing you with ever more powerful GPUs for infrastructure while at the same time providing very lightweight, low-power compute at the edge. So stay curious. From a startup point of view, from a research point of view, from an academic point of view, look for really interesting problems and challenges, and look for the infrastructure you need to deliver on them, because ultimately it is the applications that are going to change society and life. That’s all. Thank you very much.


Related Resources — Knowledge base sources related to the discussion topics (12)
Factual Notes — Claims verified against the Diplo knowledge base (3)

Additional Context (high)

“Zacharia introduced the U.S. Department of Energy’s Genesis Initiative, a public‑private programme launched under the Trump administration to accelerate scientific discovery, energy research and national security through AI.”

The knowledge base confirms the existence of a DOE‑led Genesis Mission that mobilises all 17 DOE national laboratories and partners such as Google DeepMind, but it does not specify that the programme was launched under the Trump administration; the launch timing is not detailed in the sources.

Confirmed (high)

“The DOE’s three pillars—discovery science, energy, and national security—are supported by a network of 17 national labs, such as Oak Ridge, whose historic role in the Manhattan Project underscores the agency’s dual focus on energy and broader scientific outcomes.”

The knowledge base notes that the Genesis Mission involves 17 DOE national laboratories [S11] and that Oak Ridge National Laboratory played a major role in the Manhattan Project, receiving about 65 % of its funding [S5], confirming both the lab count and Oak Ridge’s historic significance.

Additional Context (medium)

“Any federated compute and data platform must be built security‑by‑design, incorporating confidential‑computing capabilities to protect sensitive research and national‑security workloads.”

Sources highlight the importance of secure-by-design ICT procurement and note existing gaps in security-standard implementation [S87], and they also reference confidential-computing features in new hardware offerings such as Fujitsu’s servers [S89], providing additional context for the security-by-design claim.

External Sources (90)
S1
Building the AI-Ready Future From Infrastructure to Skills — Timothy Robson, a hardware engineer who transitioned to software, reinforced the importance of vendor-agnostic developme…
S2
Building the AI-Ready Future From Infrastructure to Skills — – Thomas Zacharia- Gilles Garcia
S3
Advocacy to Action: Engaging Policymakers on Digital Rights | IGF 2023 — By engaging policymakers and parliamentarians, Garcia provides them with evidence of rights violations to support her ca…
S4
Building the AI-Ready Future From Infrastructure to Skills — -Paneerselvam M- CEO of the METI Startup Hub at Ministry of Electronics and IT, Government of India; distinguished leade…
S5
https://dig.watch/event/india-ai-impact-summit-2026/building-the-ai-ready-future-from-infrastructure-to-skills — I’d like to invite our next speaker, Paneerselvam M, CEO of the METI Startup Hub at Ministry of Electronics and IT, Gove…
S6
Keynote-Olivier Blum — -Moderator: Role/Title: Conference Moderator; Area of Expertise: Not mentioned -Mr. Schneider: Role/Title: Not mentione…
S7
Keynote-Vinod Khosla — -Moderator: Role/Title: Moderator of the event; Area of Expertise: Not mentioned -Mr. Jeet Adani: Role/Title: Not menti…
S8
Day 0 Event #250 Building Trust and Combatting Fraud in the Internet Ecosystem — – **Frode Sørensen** – Role/Title: Online moderator, colleague of Johannes Vallesverd, Area of Expertise: Online session…
S9
Building the AI-Ready Future From Infrastructure to Skills — – Timothy Robson- Thomas Zacharia
S10
The Global Power Shift India’s Rise in AI & Semiconductors — – Thomas Zacharia- Rahul Garg – Vivek Kumar Singh- Thomas Zacharia
S11
Google DeepMind partners with DOE for AI-driven science — Google DeepMind ispartnering with the US Department of Energy(DOE) to support the White House’s Genesis Mission, a natio…
S12
The fading of human agency in automated systems — In practice, however, being “in the loop” frequently means supervising outputs under conditions that make meaningful jud…
S13
Democratizing AI Building Trustworthy Systems for Everyone — “of course see there would be a number of challenges but i think as i mentioned that one doesn’t need to really control …
S14
Keeping AI in check — Societies should not be forgetful of the fact that technology is a product of the human mind and that the most intellige…
S15
AI-Driven Enforcement_ Better Governance through Effective Compliance & Services — This comment is exceptionally thought-provoking because it addresses the critical tension between AI efficiency and publ…
S16
Driving Social Good with AI_ Evaluation and Open Source at Scale — Audience members repeatedly stress that humans are needed to evaluate prompts, identify system gaps, and craft test case…
S17
Open Forum #33 Building an International AI Cooperation Ecosystem — Participant: ≫ Distinguished guests, dear friends, it is a great honor to speak to you today on a topic that is reshapin…
S18
Leveraging the UN system to advance global AI Governance efforts — Daren Tang:Thank you, Reinhard, and thank you, Doreen, for leading us on this important conversation. Very happy to meet…
S19
U.S. AI Standards Shaping the Future of Trustworthy Artificial Intelligence — <strong>Sihao Huang:</strong> of these agents work with each other smoothly. And protocols are so important because that…
S20
Responsible AI for Shared Prosperity — “The research and development capability, which I was in the first instance, and that was an amazing initiative because …
S21
[Tentative Translation] — –  In order to promote the creation of needs-pull innovation by the government, the government will promote the new Jap…
S22
Contents — Entrepreneurs deliver fresh ideas and rethink commerce. The networking of their innovative skills with established compa…
S23
1 Introduction — Improving the functioning of national and regional innovation ecosystems is a prerequisite for increasing ex…
S24
From Human Potential to Global Impact_ Qualcomm’s AI for All Workshop — All right. How am I doing on time? Maybe I have 10 minutes. So let me talk a little bit on data center. I don’t see the …
S25
Panel Discussion Data Sovereignty India AI Impact Summit — By domestic, which is because in the age of AI, I strongly believe that the sovereign AI compute infrastructure has beco…
S26
Need and Impact of Full Stack Sovereign AI by CoRover BharatGPT — “What raw material is needed for AI?”[9]. “sovereign AI comes to India, we’ll have the control”[56]. “Indian government …
S27
Secure Finance Risk-Based AI Policy for the Banking Sector — Trust is built when systems are predictable, explainable, and accountable. Trust deepens when innovation aligns with pub…
S28
WS #294 AI Sandboxes Responsible Innovation in Developing Countries — Anastasiadou argued that sandboxes are “particularly beneficial for SMEs,” addressing a critical gap in the innovation e…
S29
Can National Security Keep Up with AI? / Davos 2025 — As the conversation concluded, it was clear that the intersection of AI and national security presents a complex landsca…
S30
Revisiting 10 AI and digital forecasts for 2025: Predictions and Reality — Digital networks and AI developments are critical assets for countries worldwide. Thus, they become central to national …
S31
Driving Indias AI Future Growth Innovation and Impact — I don’t think it’s necessarily about security. You know, it’s really about saying how many of those have real use cases?…
S32
Indias AI Leap Policy to Practice with AIP2 — “This is essentially what we provide for startups.”[16]. “And startups become a very, very important player in this game…
S33
Open Forum #70 the Future of DPI Unpacking the Open Source AI Model — Economic | Infrastructure | Development PayPal chose to use open source protocols because it attracts the best talent t…
S34
The Innovation Beneath AI: The US-India Partnership powering the AI Era — This historical analogy brilliantly challenges the current centralized AI paradigm by suggesting that today’s massive da…
S35
HETEROGENEOUS COMPUTE FOR DEMOCRATIZING ACCESS TO AI — Artificial intelligence | Information and communication technologies for development Arun advocates for moving inferenc…
S36
Building the AI-Ready Future From Infrastructure to Skills — The programme’s implementation through the American Science Cloud, powered by AMD’s MI355 cluster, demonstrates public-p…
S37
Sovereign AI for India – Building Indigenous Capabilities for National and Global Impact — The government’s response through the India AI Mission has established a shared compute framework providing access to 38…
S38
UK AI plan calls for AI sovereignty and bottom-up developments — The UK government has launched an ambitiousAI Opportunities Action Planto accelerate the adoption of AI to drive economi…
S39
AI Without the Cost Rethinking Intelligence for a Constrained World — Beyond 131,000 context window, CPU-based solutions with new algorithms can outperform GPU-based systems GPU-based infra…
S40
WS #208 Democratising Access to AI with Open Source LLMs — Developing countries face challenges in implementing open source AI due to limited infrastructure and technical expertis…
S41
Open Forum #82 Catalyzing Equitable AI Impact the Role of International Cooperation — Given the lack of GPUs and data centers in the Global South, new business models need to be developed that allow for sha…
S42
The fading of human agency in automated systems — Crucially, a human presence does not guarantee agency if the system is designed around compliance rather than contestati…
S43
Part 2.5: AI reinforcement learning vs human governance — In contrast, human governance involves learning through historical experience, cultural evolution, and institutional dev…
S44
Driving Social Good with AI_ Evaluation and Open Source at Scale — Moderate disagreement with significant implications. The disagreements reflect deeper tensions between technical efficie…
S45
Keynote-Julie Sweet — This distinction is philosophically profound and practically important. ‘Humans in the loop’ suggests a reactive, compli…
S46
Inclusive AI For A Better World, Through Cross-Cultural And Multi-Generational Dialogue — Diana Nyakundi:Yeah, thanks Fadi. So with regards to opportunities, there are a lot of AI pilot projects that are coming…
S47
AI Impact Summit 2026: Global Ministerial Discussions on Inclusive AI Development — Portugal offers something increasingly rare, agility with stability. A country large enough to scale, yet compact enough…
S48
Driving Indias AI Future Growth Innovation and Impact — So as we step back and look at what are the key elements of what a country and companies need to do, there really are th…
S49
AI That Empowers Safety Growth and Social Inclusion in Action — The discussion revealed tension between framework proliferation and the need for practical implementation guidance. Diff…
S50
Is the AI bubble about to burst? Five causes and five scenarios — Centralised, closed platforms vs. decentralised, open ecosystems. Historically,open systems often win in the long run– …
S51
New plan outlines how India will democratise AI infrastructure — Indiais moving to rebalance access to AI infrastructureas part of a new national push to close gaps in computing power a…
S52
Mind the AI Divide: Shaping a Global Perspective on the Future of Work — A limited number of countries are leading the way in developing compute capacity, while many others are beginning from a…
S53
HETEROGENEOUS COMPUTE FOR DEMOCRATIZING ACCESS TO AI — “as we go from one gig to nine to ten gig … land water and power …”[30]. “defining India’s access to compute, access…
S54
Catalyzing Cyber: Stimulating Cybersecurity Market through Ecosystem Development — However, there are concerns about standards becoming barriers for smaller businesses and new entrants in the digital mar…
S55
High Level Session 2: Digital Public Goods and Global Digital Cooperation — All speakers consistently emphasized that Digital Public Goods must be built on open source principles and collaborative…
S56
From summer disillusionment to autumn clarity: Ten lessons for AI — Evidence continues to mount that more computing power cannot overcome core LLM limits—fragility under adversarial prompt…
S57
Developing capacities for bottom-up AI in the Global South: What role for the international community? — Amandeep Singh Gill, UN Tech Envoy, provided the institutional perspective and outlined the Secretary-General’s upco…
S58
AI-Powered Chips and Skills Shaping India's Next-Gen Workforce — The discussion reveals strong consensus on key strategic directions: comprehensive ecosystem development beyond chip man…
S59
Multi-stakeholder Discussion on issues about Generative AI — It is crucial for individuals to understand how to utilize AI and other technological advancements effectively and respo…
S60
AI: Lifting All Boats / DAVOS 2025 — Brad Smith: In my opinion, just my opinion, but first of all, that your book is great. And there’s another book that …
S61
AI/Gen AI for the Global Goals — Priscilla Boa-Gue argues for the creation of supportive policy environments to foster AI startups. This includes develop…
S62
India's AI Leap Policy to Practice with AIP2 — Just prior to this I was having a conversation with a large corporate and how they can actually use startups as a cataly…
S63
Building the AI-Ready Future From Infrastructure to Skills — This discussion focused on building AI readiness and capabilities, featuring speakers from AMD and the Indian government…
S64
The Global Power Shift India’s Rise in AI & Semiconductors — Thomas Zacharia (Dr. Thomas Zakaria): Senior Vice President for Strategic Technical Partnerships and Public Policy at AM…
S65
Can National Security Keep Up with AI? / Davos 2025 — As the conversation concluded, it was clear that the intersection of AI and national security presents a complex landsca…
S66
AI-driven Cyber Defense: Empowering Developing Nations | IGF 2023 — Public-private partnerships play a key role in these collaborations. Public-private partnerships were considered crucia…
S67
Driving India's AI Future Growth Innovation and Impact — I don’t think it’s necessarily about security. You know, it’s really about saying how many of those have real use cases?…
S68
India's AI Leap Policy to Practice with AIP2 — “This is essentially what we provide for startups.”[16]. “And startups become a very, very important player in this game…
S69
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Vivek Raghavan Sarvam AI — And I want this. The most important thing that I want people to understand is… just because, and I think that the, you…
S70
WS #208 Democratising Access to AI with Open Source LLMs — Daniele Turra: Sure, actually, I was about to introduce some of the points that might help in that sense in this foll…
S71
Open Forum #70 the Future of DPI Unpacking the Open Source AI Model — Economic | Infrastructure | Development PayPal chose to use open source protocols because it attracts the best talent t…
S72
The Innovation Beneath AI: The US-India Partnership powering the AI Era — This historical analogy brilliantly challenges the current centralized AI paradigm by suggesting that today’s massive da…
S73
CES 2026 shows AMD betting on on-device AI at scale — AMD used CES 2026 to position AI as a default feature of personal and commercial computing. The company said AI is no long…
S74
Designing India's Digital Future AI at the Core 6G at the Edge — The convergence of AI and 6G will create a distributed computing fabric that extends far beyond traditional network boun…
S75
Opening of the session — Technology transfer is essential for capacity building in developing countries. The delegation commenced by expressing …
S76
Skilling and Education in AI — A technology company representative highlighted the critical importance of building comprehensive AI infrastructure with…
S77
WS #145 Revitalizing Trust: Harnessing AI for Responsible Governance — Brandon Soloski: Okay, that’s interesting. I hear a little bit of a delay. Good idea. All right. Good afternoon, early…
S78
Building the Next Wave of AI_ Responsible Frameworks & Standards — The panel addressed the fundamental tension between AI’s probabilistic nature and enterprise requirements for determinis…
S79
White House launches Genesis Mission for AI-driven science — Washington prepares for a significant shift in research as the White House launches the Genesis Mission, a national push …
S80
A Global AI in Financial Services Survey — Indeed, Figure 2.17 shows that there seems to be an almost constantly positive relationship between investing in AI and …
S81
The Government’s AI dilemma: how to maximize rewards while minimizing risks? — Emma Inamutila Theofelus: Thank you so much, Robert. And I’m very happy to be on this panel with Neeraj and Mercedes, esp…
S82
AI for equality: Bridging the innovation gap — This comment is strategically insightful because it reframes women’s inclusion from a moral imperative to a business opp…
S83
Government AI investment grows while public trust falters — Rising investment in AI is reshaping public services worldwide, yet citizen satisfaction remains uneven. Research across 1…
S84
Keynote: Jeet Adani — She rises to stabilize, she rises to anchor a world searching for balance and she rises to build systems that are inclus…
S85
Towards 2030 and Beyond: Accelerating the SDGs through Access to Evidence on What Works — The discussion highlighted the transformative potential of AI and other digital technologies in accelerating evidence sy…
S86
AI Algorithms and the Future of Global Diplomacy — This observation sparked a deeper conversation about technological sovereignty and geopolitical risks in AI adoption. It…
S87
Dynamic Coalition Collaborative Session — Wout de Natris highlighted a concerning gap between available security standards and their implementation, noting that m…
S88
Scaling AI for Billions_ Building Digital Public Infrastructure — “Because trust is starting to become measurable, right, through provenance, through authenticity, as well as verificatio…
S89
Keynote by Vivek Mahajan CTO Fujitsu India AI Impact Summit — In the computing domain, Mahajan detailed Fujitsu’s hardware roadmap, beginning with their 2-nanometer ARM-based servers…
S90
IGF 2017 – Best practice forum on cybersecurity — Mr Belisario Contreras, Cyber Security Program Manager at the OAS, commented that cybersecurity is part of the IGF agend…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Thomas Zacharia
8 arguments, 129 words per minute, 2769 words, 1283 seconds
Argument 1
Sovereign AI Infrastructure – Emphasizes the need for a government‑driven, public‑private partnership (U.S. DOE Genesis Initiative) that federates compute, data, and secure cloud‑enabled lab operations to accelerate scientific discovery, energy, and national security.
EXPLANATION
Thomas argues that AI readiness at the national level requires coordinated public‑private effort, integrating massive scientific infrastructure with secure, federated computing resources. This approach is positioned as essential to maintain the return on R&D investment and to drive breakthroughs in science, energy and security.
EVIDENCE
He describes the Genesis Initiative launched by the U.S. Department of Energy, noting its goal to use AI to accelerate scientific discovery and its structure as a public-private partnership that must federate compute, data and cloud-enabled lab operations, incorporate secure and confidential computing, and run on the American Science Cloud built on an MI355 cluster [16-48].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The Genesis Initiative and DOE partnership are described in S1 and S11, illustrating a public-private effort to federate compute, data and secure cloud for science, energy and security. [S1][S11]
MAJOR DISCUSSION POINT
National AI infrastructure through public‑private partnership
AGREED WITH
Paneerselvam M
Argument 2
Governance with Human‑in‑the‑Loop – Calls for AI governance that keeps a person in the loop for validation, ensuring safe, responsible deployment of autonomous AI systems.
EXPLANATION
Thomas stresses that AI systems should not operate autonomously without oversight; a human must validate outputs before they are acted upon. This safeguards against unintended consequences and maintains trust in AI‑driven outcomes.
EVIDENCE
He explains that governance means keeping a person in the loop, using the example of a professor supervising student research and peer-review before publication, to ensure safe and responsible innovation [62-65].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The necessity and challenges of human-in-the-loop oversight are examined in S12, and the need for human evaluation to ensure trustworthy AI is highlighted in S16. [S12][S16]
MAJOR DISCUSSION POINT
Human oversight in AI governance
Argument 3
Human‑in‑the‑Loop Validation – Highlights the necessity of keeping humans involved in the validation loop to prevent unintended consequences and to maintain trust in AI‑driven scientific and commercial outcomes.
EXPLANATION
Thomas reiterates that human validation is essential to avoid accidental harms and to preserve confidence in AI‑generated results. This principle applies across scientific research and enterprise deployments.
EVIDENCE
He repeats the need for a human in the loop, citing the professor-student oversight model as a concrete illustration of validation before AI outputs are released [62-65].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Human validation as a safeguard is discussed in S12, while S16 stresses human evaluation as essential for reliable AI systems. [S12][S16]
MAJOR DISCUSSION POINT
Ensuring AI outputs are human‑validated
Argument 4
AI is broader than GPUs – need a holistic AI ecosystem.
EXPLANATION
Thomas points out that the current discourse over‑emphasizes GPUs as the sole driver of AI, while AI encompasses many other components and layers. He argues that a broader view is required to build true AI readiness.
EVIDENCE
He observes an over-indexing of AI on GPUs, noting that AI is much broader and that GPUs are only a part of the core infrastructure, while AMD provides a full suite of AI capabilities from PCs to the edge. [7-10]
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Zacharia’s distinction between compute and capability and the broader AI stack is noted in S10, and S24 describes data-center AI capabilities that extend beyond GPUs. [S10][S24]
MAJOR DISCUSSION POINT
Broad AI ecosystem beyond GPUs
AGREED WITH
Gilles Garcia, Timothy Robson
Argument 5
Commitment to an open ecosystem and open standards to foster innovation.
EXPLANATION
Thomas stresses that AMD’s strategy is built on openness, both in hardware and software, to avoid vendor lock‑in and enable a vibrant ecosystem of developers and startups.
EVIDENCE
He states that AMD is committed to making both hardware and software infrastructure based on open standards, supporting open source and open platforms so innovators can build without being locked into a single vendor. [70-73]
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
AMD’s pledge to open standards is quoted in S1, and the importance of open, interoperable AI protocols is discussed in S19. [S1][S19]
MAJOR DISCUSSION POINT
Open ecosystem and standards
AGREED WITH
Timothy Robson, Gilles Garcia
Argument 6
Developing talent and research enablement as the foundation for AI readiness.
EXPLANATION
Thomas argues that national AI readiness rests on skilled talent and on providing researchers with environments where they can constantly question and innovate, rather than relying solely on industry solutions.
EVIDENCE
He says AI readiness rests on talent, giving people access to compute and models, and that research enablement is key for continuous questioning and innovation. [66-68]
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Building research capacity and talent as the first pillar of AI sovereignty is emphasized in S20, with S1 also linking skills development to infrastructure. [S20][S1]
MAJOR DISCUSSION POINT
Talent and research enablement
Argument 7
Supporting start‑up innovation labs to translate ideas into new companies.
EXPLANATION
Thomas highlights the importance of creating innovation labs where ideas can be nurtured into startups, which in turn drive enterprise and public‑sector adoption of AI technologies.
EVIDENCE
He recommends starting innovation labs so that new ideas can become companies, noting that many emerging technologies are led by startups and eventually adopted by both enterprise and the public sector. [69-71]
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The role of start-ups in innovation ecosystems and the need for supportive funding are described in S22. [S22]
MAJOR DISCUSSION POINT
Start‑up innovation labs
AGREED WITH
Paneerselvam M, Timothy Robson
Argument 8
Energy‑efficient exascale computing demonstrates sustainable high‑performance AI.
EXPLANATION
Thomas describes the 2007 Exascale program’s goal of delivering an exascale system under 20 MW, emphasizing that sustainable power consumption is essential for large‑scale AI infrastructure.
EVIDENCE
He explains that the Exascale program aimed to deliver a system under 20 MW (instead of the gigawatt levels that scaling would have required) and succeeded with a system using less than 20 MW, showing that audacious goals can be met sustainably. [86-88]
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The exascale program’s sub-20 MW power target and its relevance to sustainable AI compute are detailed in S1. [S1]
MAJOR DISCUSSION POINT
Energy‑efficient exascale systems
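The sub-20 MW target translates directly into an efficiency figure. The back-of-envelope calculation below is illustrative, not from the session, but it shows the scale of the goal Zacharia describes:

```python
# One exaflop = 1e18 floating-point operations per second.
exaflops = 1e18
power_budget_w = 20e6  # the Exascale program's sub-20 MW ceiling

# Efficiency needed to reach an exaflop inside the power budget.
gflops_per_watt = exaflops / power_budget_w / 1e9
print(f"{gflops_per_watt:.0f} GFLOPS per watt")  # 50 GFLOPS per watt

# Naively scaling contemporary hardware would have landed at roughly
# gigawatt levels, as Zacharia notes, so the target implied about:
naive_power_w = 1e9  # ~1 GW for an exaflop at unchanged efficiency
print(f"~{naive_power_w / power_budget_w:.0f}x better energy efficiency")
```

In other words, the program committed to roughly a fifty-fold efficiency improvement over straightforward scaling, which is why Zacharia frames it as an audacious but achieved goal.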
Paneerselvam M
4 arguments, 156 words per minute, 534 words, 205 seconds
Argument 1
India’s Sovereign AI Model – Highlights India’s five‑layer sovereign AI architecture and the government’s commitment to enable AI across all societal sectors, ensuring that the initiative is not limited to large corporates.
EXPLANATION
Paneerselvam outlines a comprehensive, five‑layer AI framework that the Indian government is building to serve every segment of society, from large enterprises to SMEs and individual users. He stresses that this model aims for inclusive, nation‑wide AI adoption.
EVIDENCE
He mentions that India is developing a five-layer sovereign AI architecture, that the government is ready to facilitate AI across all layers of society, and that the effort is not confined to large corporations but includes SMEs and individual users, citing the broad registration response to the summit and the ongoing work on all layers of the Indian context [106-113].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Discussion of India’s sovereign AI stack and national AI infrastructure appears in S25 and S26, outlining the five-layer model and its inclusive intent. [S25][S26]
MAJOR DISCUSSION POINT
Inclusive national AI architecture for India
AGREED WITH
Thomas Zacharia
Argument 2
Start‑ups as AI Natives – Argues that start‑ups, being native to AI, are crucial for improving the AI readiness quotient of SMEs and delivering broad‑based economic growth.
EXPLANATION
Paneerselvam claims that startups, having grown up with AI, can demonstrate value quickly and help raise the AI readiness of small and medium enterprises, thereby driving widespread economic benefits. Their role is positioned as essential for scaling AI across the country.
EVIDENCE
He states that startups have a very critical role to facilitate AI adoption, act as AI natives, demonstrate value, and improve the AI readiness quotient for small and medium enterprises, contributing to broad-based growth across the nation [106-108].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The critical role of start-ups for SME AI adoption is highlighted in S22, and S28 discusses AI sandboxes that specifically support SMEs and start-ups. [S22][S28]
MAJOR DISCUSSION POINT
Startups driving AI adoption for SMEs
AGREED WITH
Thomas Zacharia, Timothy Robson
Argument 3
Massive public interest, shown by 267,000 registrations, indicates strong demand for AI education and participation.
EXPLANATION
Paneerselvam points out the overwhelming response to the summit as evidence of widespread curiosity and eagerness among Indian citizens, especially youth, to engage with AI.
EVIDENCE
He reports that 267,000 people registered in the last five days, describing the response as unexpected, overwhelming, and a source of pride and excitement for youngsters across India. [110-112]
MAJOR DISCUSSION POINT
High public engagement
Argument 4
Strategic partnership with AMD and the MeitY Startup Hub will accelerate AI adoption across India.
EXPLANATION
Paneerselvam emphasizes the collaborative relationship with AMD as a key lever for delivering AI capabilities to startups and enterprises, positioning the partnership as central to the nation’s AI roadmap.
EVIDENCE
He thanks the AMD team, says he looks forward to continued partnership with AMD and the MeitY Startup Hub, and notes that corporates have a huge role to play in startup success. [113-114]
MAJOR DISCUSSION POINT
AMD‑MeitY partnership
Timothy Robson
6 arguments, 167 words per minute, 2753 words, 986 seconds
Argument 1
Compute Access for Start‑ups – Describes AMD’s Developer Cloud, free GPU hours, Docker containers, and “day‑zero” model support that give start‑ups low‑cost, ready‑to‑run compute resources to move from proof‑of‑concept to production.
EXPLANATION
Tim outlines practical resources AMD provides to startups, including a cloud platform with complimentary GPU time, pre‑packaged Docker images, and immediate support for new AI models. These services lower barriers and enable rapid progression from prototype to market.
EVIDENCE
He details the AMD Developer Cloud offering 50-100 free GPU hours, ready-to-use Docker containers that bundle all required software, and “day-zero” support for new models, allowing startups to test and run models out-of-the-box without extensive setup [187-196].
MAJOR DISCUSSION POINT
Low‑cost compute resources for startups
Argument 2
Open‑Source, Vendor‑Agnostic Tools – Stresses that AI success depends on an open ecosystem (PyTorch, JAX, Triton) and day‑zero support for new models, preventing lock‑in and enabling rapid innovation.
EXPLANATION
Tim argues that open, vendor‑agnostic software stacks are essential for AI development, allowing developers to write code once and run it on any hardware. Day‑zero support ensures new models work immediately on AMD platforms, fostering innovation without vendor lock‑in.
EVIDENCE
He highlights the use of open frameworks such as PyTorch, JAX and the Triton compiler, explaining that they let developers write Python code that runs on any hardware, and notes AMD’s contributions that enable day-zero support for emerging models, thereby avoiding lock-in [210-218].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The importance of open frameworks and standards to avoid vendor lock-in is discussed in S19, while S13 addresses democratizing AI through open resources. [S19][S13]
MAJOR DISCUSSION POINT
Open, vendor‑neutral AI software ecosystem
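The "write once, run anywhere" idiom Robson describes can be sketched with a minimal helper. The `pick_device` function below is hypothetical, written for illustration; real PyTorch code uses the one-liner shown in its docstring, which on a ROCm build transparently reports AMD GPUs, so no vendor-specific branch is needed:

```python
def pick_device(accelerator_available: bool) -> str:
    """Hypothetical helper mirroring the standard PyTorch idiom.

    In actual PyTorch code this is simply:
        device = "cuda" if torch.cuda.is_available() else "cpu"
    On a ROCm build of PyTorch the same call detects AMD GPUs,
    so identical Python runs on either vendor's hardware or on CPU.
    """
    return "cuda" if accelerator_available else "cpu"

# The rest of the model and training code is then written once
# against the framework's device abstraction, never a vendor API.
print(pick_device(True))   # an accelerator is present
print(pick_device(False))  # no accelerator: fall back to CPU
```

This is the mechanism behind the no-lock-in claim: the framework, not the application, owns the hardware mapping.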
Argument 3
Multilingual LLM development using the Lumi supercomputer to serve low‑resource languages.
EXPLANATION
Tim highlights work with Finland’s Lumi supercomputer to adapt large language models for languages with few speakers, including many Indian languages, demonstrating how AI can be inclusive of linguistic diversity.
EVIDENCE
He explains that Finland’s Lumi supercomputer was used to create LLMs for Finnish (a Uralic language) and that similar methods can be applied to Indian languages with fewer than five million speakers, aiming to build an Indian LLM for all languages. [135-144]
MAJOR DISCUSSION POINT
Multilingual LLMs for low‑resource languages
Argument 4
Promotion of Neo clouds and alternative compute providers to offer flexible, cost‑effective AI services.
EXPLANATION
Tim describes Neo clouds as smaller, nimble providers that deliver bare‑metal or managed Kubernetes services, giving enterprises rapid and affordable access to compute beyond the hyperscalers.
EVIDENCE
He notes that Neo clouds are not hyperscalers but provide quick, affordable compute via APIs and token factories, often using bare-metal or managed Kubernetes, and that they are first movers in the market. [176-178]
MAJOR DISCUSSION POINT
Neo clouds for flexible compute
Argument 5
Emphasis on moving from proof‑of‑concept to production, highlighting a clear pathway for startups.
EXPLANATION
Tim stresses that startups need structured support to transition from prototype to marketable product, and that AMD can provide guidance, resources, and validation to ensure technology readiness before large investments.
EVIDENCE
He states that proof-of-concept to product (POC-to-PO) is essential, that startups must understand technology before investing, and that AMD offers hands-on assistance, accelerator cloud access, and industry relationships to facilitate this transition. [184-186]
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Support mechanisms that help start-ups transition from prototype to market are described in S22, which emphasizes entrepreneurship and scaling pathways. [S22]
MAJOR DISCUSSION POINT
POC‑to‑production pathway
Argument 6
Day‑zero support for emerging models ensures immediate compatibility and reduces vendor lock‑in.
EXPLANATION
Tim outlines AMD’s practice of providing out‑of‑the‑box support for newly released AI models, guaranteeing they run on AMD hardware without additional engineering, thereby lowering total cost of ownership and avoiding lock‑in.
EVIDENCE
He lists day-zero support for Qwen3 Coder, Baidu Paddle, and DeepSeek models, explaining that AMD’s contributions to frameworks like PyTorch enable new models to run immediately on AMD GPUs, offering better TCO and performance without vendor lock-in. [206-221]
MAJOR DISCUSSION POINT
Day‑zero model support
Gilles Garcia
4 arguments, 177 words per minute, 624 words, 211 seconds
Argument 1
Edge‑Centric Accelerators – Argues that AI is moving to the far edge (robots, vehicles, industrial plants) and requires low‑power, dedicated accelerators and a full hardware‑software stack, not just traditional GPUs.
EXPLANATION
Gilles points out that many AI workloads now need to run locally on devices with strict latency and power constraints, demanding specialized accelerators and integrated software. This shift calls for a different approach than data‑center GPU‑centric AI.
EVIDENCE
He states that AI is moving into the far edge (robots, vehicles, industrial plants) and that this requires low-power dedicated accelerators and a full hardware-software stack, rather than relying solely on traditional GPUs [230-233].
MAJOR DISCUSSION POINT
Need for specialized edge AI hardware
AGREED WITH
Thomas Zacharia, Timothy Robson
Argument 2
AMD’s Edge AI Portfolio – Showcases AMD‑based physical AI solutions such as the Gene01 humanoid, demonstrating that AI can run locally with high reliability and low latency.
EXPLANATION
Gilles cites the Gene01 humanoid, built on AMD technology, as evidence that AMD’s edge AI portfolio can deliver perception, visualization and actuation directly on the device without cloud dependence. This exemplifies AMD’s capability in physical AI.
EVIDENCE
He references the Gene01 humanoid, the first robot built on AMD technology showcased at CES, which can sense, visualize, touch and act rapidly without relying on centralized cloud resources [239-241].
MAJOR DISCUSSION POINT
AMD’s demonstrable edge AI solutions
Argument 3
Full‑stack hardware‑software integration is essential for edge AI, ensuring reliable, low‑latency operation without cloud dependence.
EXPLANATION
Gilles argues that moving AI to the far edge requires dedicated accelerators combined with a complete software stack, so devices can act instantly and securely without round‑trips to the cloud.
EVIDENCE
He notes that edge AI must operate with low power, high reliability, and without cloud reliance, requiring a full stack of hardware and software; AMD’s portfolio provides such integrated solutions. [231-235]
MAJOR DISCUSSION POINT
Full‑stack edge AI
Argument 4
AMD’s ‘AI anywhere’ philosophy and diverse product portfolio address varied use‑cases from robots to industrial plants.
EXPLANATION
Gilles highlights AMD’s strategy of offering different AI solutions for different contexts, emphasizing that a one‑size‑fits‑all approach does not work and that AMD’s portfolio can support everything from humanoid robots to industrial automation.
EVIDENCE
He cites Lisa Su’s statement that AI is ‘anywhere’, the principle that one size does not fit all, and the Gene01 humanoid built on AMD technology that can sense, visualize, and act locally without cloud dependence. [236-239]
MAJOR DISCUSSION POINT
AI anywhere across use‑cases
AGREED WITH
Thomas Zacharia, Timothy Robson
Agreements
Agreement Points
Open ecosystem and open standards are essential to foster innovation and avoid vendor lock‑in.
Speakers: Thomas Zacharia, Timothy Robson, Gilles Garcia
Commitment to an open ecosystem and open standards to foster innovation. Open‑Source, Vendor‑Agnostic Tools – Stresses that AI success depends on an open ecosystem (PyTorch, JAX, Triton) and day‑zero support for new models, preventing lock‑in. AMD’s ‘AI anywhere’ philosophy and diverse product portfolio address varied use‑cases from robots to industrial plants.
All three speakers stress that openness, in both hardware and software, enables broader participation and rapid innovation while preventing dependence on a single vendor [70-73][124-128][156-158][235-236][239-241].
POLICY CONTEXT (KNOWLEDGE BASE)
This view aligns with the Digital Public Goods agenda that stresses open-source, interoperable standards to prevent vendor lock-in [S55] and echoes analyses that open ecosystems outperform closed platforms over time [S50].
Start‑ups are critical AI natives that accelerate adoption and drive economic growth.
Speakers: Thomas Zacharia, Paneerselvam M, Timothy Robson
Supporting start‑up innovation labs to translate ideas into new companies. Start‑ups as AI Natives – Argues that start‑ups, being native to AI, are crucial for improving the AI readiness quotient of SMEs and delivering broad‑based economic growth. Compute Access for Start‑ups – Describes AMD’s Developer Cloud, free GPU hours, Docker containers, and “day‑zero” model support that give start‑ups low‑cost, ready‑to‑run compute resources.
Thomas highlights innovation labs, Paneerselvam emphasizes startups as AI natives for SME uplift, and Timothy details concrete low-cost compute resources for startups, all underscoring the pivotal role of startups in AI diffusion [69-71][106-108][178-186][187-196].
POLICY CONTEXT (KNOWLEDGE BASE)
Policy briefs highlight the role of startups in AI diffusion, recommending supportive measures such as financing and regulatory sandboxes [S61] and noting their importance in multi-stakeholder innovation ecosystems [S59].
National‑level sovereign AI infrastructure requires public‑private partnership and coordinated investment.
Speakers: Thomas Zacharia, Paneerselvam M
Sovereign AI Infrastructure – Emphasizes the need for a government‑driven, public‑private partnership (U.S. DOE Genesis Initiative) that federates compute, data, and secure cloud‑enabled lab operations to accelerate scientific discovery, energy, and national security. India’s Sovereign AI Model – Highlights India’s five‑layer sovereign AI architecture and the government’s commitment to enable AI across all societal sectors, ensuring that the initiative is not limited to large corporates.
Both speakers describe large-scale, government-led AI programmes that combine public and private resources to build a sovereign AI stack for scientific, energy and societal goals [16-48][106-113].
POLICY CONTEXT (KNOWLEDGE BASE)
Examples include the US American Science Cloud partnership with AMD [S36] and India’s AI Mission public-private compute framework [S37], reflecting a broader policy trend toward shared sovereign AI infrastructure.
AI readiness requires a broader hardware ecosystem beyond GPUs, including low‑power edge accelerators.
Speakers: Thomas Zacharia, Gilles Garcia, Timothy Robson
AI is broader than GPUs – need a holistic AI ecosystem. Edge‑Centric Accelerators – Argues that AI is moving to the far edge (robots, vehicles, industrial plants) and requires low‑power, dedicated accelerators and a full hardware‑software stack, not just traditional GPUs. Open‑Source, Vendor‑Agnostic Tools – Stresses that AI success depends on an open ecosystem and that GPUs are only one part of the solution.
All three note that focusing solely on GPUs is insufficient; a diverse set of accelerators, especially for edge workloads, is needed for future AI deployments [7-10][230-233][156-158].
POLICY CONTEXT (KNOWLEDGE BASE)
Analyses criticize GPU-centric approaches and propose heterogeneous compute, including CPU-based and low-power accelerators, to democratise AI access [S39][S53].
Similar Viewpoints
Both emphasize that openness in software and standards is essential for AI progress and to avoid vendor lock‑in [70-73][124-128][156-158].
Speakers: Thomas Zacharia, Timothy Robson
Commitment to an open ecosystem and open standards to foster innovation. Open‑Source, Vendor‑Agnostic Tools – Stresses that AI success depends on an open ecosystem (PyTorch, JAX, Triton) and day‑zero support for new models, preventing lock‑in.
Both advocate for a coordinated national AI strategy that blends public and private resources to build sovereign capabilities [16-48][106-113].
Speakers: Thomas Zacharia, Paneerselvam M
Sovereign AI Infrastructure – Emphasizes the need for a government‑driven, public‑private partnership. India’s Sovereign AI Model – Highlights India’s five‑layer sovereign AI architecture and inclusive government commitment.
Both argue that AI development must go beyond data‑center GPUs to include diverse, low‑power hardware for edge applications [7-10][230-233].
Speakers: Thomas Zacharia, Gilles Garcia
AI is broader than GPUs – need a holistic AI ecosystem. Edge‑Centric Accelerators – Argues that AI is moving to the far edge and requires dedicated low‑power accelerators.
Unexpected Consensus
Need for AI capabilities at the edge, from national data‑center initiatives to low‑power devices.
Speakers: Thomas Zacharia, Gilles Garcia
And I have my colleague Tim from AMD, so we decided that we’re going to tag team. … I’ll focus perhaps a little bit on the sovereign side… (implies broader scope). Edge‑Centric Accelerators – Argues that AI is moving to the far edge (robots, vehicles, industrial plants) and requires low‑power, dedicated accelerators.
Thomas, while primarily discussing national-scale compute, also mentions AMD’s full suite of AI capability from PCs to the edge, aligning with Gilles’s focus on edge-centric accelerators, a convergence of high-level policy and low-level hardware that was not obvious from the outset [11][230-233].
POLICY CONTEXT (KNOWLEDGE BASE)
Policy discussions on heterogeneous compute emphasize edge deployments and energy-efficient hardware to extend AI services beyond data centers [S53][S39].
Overall Assessment

The speakers converge on four main themes: (1) an open, standards‑based ecosystem; (2) the pivotal role of startups as AI natives; (3) the necessity of sovereign, public‑private AI infrastructure; and (4) the requirement for a diversified hardware stack beyond GPUs, especially for edge deployments.

High consensus across technical, policy and economic dimensions, indicating a shared vision that AI readiness depends on openness, inclusive innovation ecosystems, coordinated national strategies, and hardware diversity. This broad alignment strengthens the case for collaborative initiatives that combine government policy, industry resources, and startup agility to accelerate AI adoption.

Differences
Different Viewpoints
Centralized national AI cloud vs decentralized low‑cost compute for startups
Speakers: Thomas Zacharia, Timothy Robson, Gilles Garcia
Sovereign AI Infrastructure – Emphasizes the need for a government-driven, public-private partnership (U.S. DOE Genesis Initiative) that federates compute, data, and secure cloud-enabled lab operations to accelerate scientific discovery, energy, and national security. [16-48] Compute Access for Start-ups – Describes AMD’s Developer Cloud, free GPU hours, Docker containers, and “day-zero” model support that give start-ups low-cost, ready-to-run compute resources to move from proof-of-concept to production. [187-196] Edge-Centric Accelerators – Argues that AI is moving to the far edge (robots, vehicles, industrial plants) and requires low-power, dedicated accelerators and a full hardware-software stack, not just traditional data-center GPUs. [230-235]
Thomas advocates a large, federally funded national AI cloud (American Science Cloud) built on an MI355 cluster to serve strategic scientific and security missions, while Tim promotes a lightweight, cloud-based developer platform offering free GPU hours for startups, and Gilles stresses the need for edge-focused, low-power accelerators rather than centralized data-center resources. The speakers therefore disagree on the optimal scale and deployment model for AI infrastructure. [16-48][187-196][230-235]
POLICY CONTEXT (KNOWLEDGE BASE)
Debates on centralised versus open, decentralized AI ecosystems are documented in analyses of AI platform models, highlighting the long-term advantage of open ecosystems for inclusivity [S50][S51].
Human‑in‑the‑loop governance vs rapid, open‑source deployment without explicit oversight
Speakers: Thomas Zacharia, Timothy Robson
Governance with Human-in-the-Loop – Calls for AI governance that keeps a person in the loop for validation, ensuring safe, responsible deployment of autonomous AI systems. [62-65] Open-Source, Vendor-Agnostic Tools – Stresses that AI success depends on an open ecosystem (PyTorch, JAX, Triton) and day-zero support for new models, preventing lock-in and enabling rapid innovation. [210-218]
Thomas insists that AI systems must always involve human validation before outcomes are acted upon, whereas Tim focuses on providing immediate, open-source toolchains and day-zero model support to accelerate deployment, without emphasizing a mandatory human-in-the-loop step. This reflects a tension between cautious governance and speed-driven openness. [62-65][210-218]
POLICY CONTEXT (KNOWLEDGE BASE)
Scholarly work contrasts formal human-in-the-loop oversight with deeper human agency, noting risks of compliance-only loops and advocating for more substantive governance [S42][S45][S44].
Emphasis on GPUs as core AI hardware vs broader AI ecosystem beyond GPUs
Speakers: Thomas Zacharia, Gilles Garcia
AI is broader than GPUs – Need a holistic AI ecosystem. [7-10] Edge-Centric Accelerators – Argues that AI is moving to the far edge and requires low-power dedicated accelerators, not just traditional GPUs. [230-235]
Thomas points out the over-indexing on GPUs and calls for a full AI stack, while Gilles highlights the need for specialized, non-GPU accelerators for edge applications, suggesting differing views on which hardware should be prioritized in AI strategy. [7-10][230-235]
POLICY CONTEXT (KNOWLEDGE BASE)
Critiques of GPU-centric hardware note supply constraints and propose CPU-centric or heterogeneous solutions as viable alternatives [S39][S40][S53].
Unexpected Differences
Scale of AI investment – massive national exascale projects vs inclusive, small‑scale SME focus
Speakers: Thomas Zacharia, Paneerselvam M
Energy-efficient exascale computing demonstrates sustainable high-performance AI. [86-88] Start-ups as AI Natives – Argues that start-ups, being native to AI, are crucial for improving the AI readiness quotient of SMEs and delivering broad-based economic growth. [106-108]
Thomas highlights ultra-large, energy-efficient exascale systems as the cornerstone of national AI readiness, whereas Paneerselvam stresses building AI capacity through SMEs and startups, suggesting a divergence between focusing on massive flagship projects and grassroots, inclusive development. This contrast was not anticipated given the shared sovereign AI narrative. [86-88][106-108]
POLICY CONTEXT (KNOWLEDGE BASE)
Policy literature contrasts large exascale national programs with calls for democratized, SME-friendly investment models to ensure broader participation [S50][S48][S57].
Overall Assessment

The discussion reveals several points of tension: the appropriate scale and deployment model for AI infrastructure (centralized national clouds vs decentralized startup‑focused compute and edge accelerators), the balance between strict human‑in‑the‑loop governance and rapid open‑source deployment, and differing emphases on hardware priorities (GPUs vs specialized edge accelerators). While participants converge on openness, the importance of startups, and the need for sovereign AI frameworks, they diverge on how best to achieve these goals.

Moderate – disagreements are strategic rather than ideological, focusing on implementation pathways. They suggest that policy makers must reconcile large‑scale national investments with mechanisms that empower startups and edge deployments, and must embed governance safeguards without stifling the speed of innovation.

Partial Agreements
All three speakers stress the importance of openness—whether in hardware standards, software frameworks, or edge solutions—to avoid vendor lock‑in and to enable broad innovation across the AI stack. [70-73][210-218][239-241]
Speakers: Thomas Zacharia, Timothy Robson, Gilles Garcia
Commitment to an open ecosystem and open standards to foster innovation. [70-73] Open-Source, Vendor-Agnostic Tools – Stresses that AI success depends on an open ecosystem (PyTorch, JAX, Triton) and day-zero support for new models. [210-218] AMD’s Edge AI Portfolio – Showcases AMD-based physical AI solutions such as the Gene01 humanoid, demonstrating that AI can run locally with high reliability and low latency. [239-241]
All agree that startups play a pivotal role in scaling AI adoption and that providing them with accessible compute resources and innovation environments is essential. [69-71][106-108][187-196]
Speakers: Thomas Zacharia, Paneerselvam M, Timothy Robson
Supporting start-up innovation labs to translate ideas into new companies. [69-71] Start-ups as AI Natives – Argues that start-ups, being native to AI, are crucial for improving the AI readiness quotient of SMEs and delivering broad-based economic growth. [106-108] Compute Access for Start-ups – Describes AMD’s Developer Cloud, free GPU hours, Docker containers, and “day-zero” model support that give start-ups low-cost, ready-to-run compute resources. [187-196]
Both advocate for a sovereign, government‑led AI framework that integrates public and private resources to serve national priorities. [16-48][106-113]
Speakers: Thomas Zacharia, Paneerselvam M
Sovereign AI Infrastructure – Emphasizes the need for a government-driven, public-private partnership to build national AI capability. [16-48] India’s Sovereign AI Model – Highlights India’s five-layer sovereign AI architecture and the government’s commitment to enable AI across all societal sectors. [106-113]
Takeaways
Key takeaways
Sovereign AI requires government‑driven public‑private partnerships (e.g., the U.S. DOE Genesis Initiative and the American Science Cloud) to federate compute, data, and secure cloud‑enabled lab operations for scientific discovery, energy, and national security.
India is developing a five‑layer sovereign AI architecture that aims to bring AI capabilities to all sectors, including SMEs, through coordinated government effort.
Start‑ups, being AI‑native, are critical for raising the AI readiness quotient of small and medium enterprises and for driving broad‑based economic growth.
AMD is providing low‑cost, ready‑to‑use compute resources (Helios rack, AMD Developer Cloud, free GPU hours, Docker containers) and “day‑zero” model support to help start‑ups move from proof‑of‑concept to production.
An open, vendor‑agnostic software ecosystem (PyTorch, JAX, Triton, open‑source tools) is essential to avoid lock‑in and enable rapid innovation.
AI governance must retain a human‑in‑the‑loop for validation to ensure safe, responsible deployment of autonomous AI systems.
Physical AI and edge computing are shifting AI workloads to the far edge (robots, vehicles, industrial plants), requiring low‑power dedicated accelerators and a full hardware‑software stack, exemplified by AMD’s Gene01 humanoid and edge AI portfolio.
AMD’s exascale achievements demonstrate that ambitious compute goals can be met with efficient power usage, paving the way for future scaling (e.g., zettascale).
Resolutions and action items
AMD will continue to supply compute infrastructure (Helios rack, Developer Cloud) and maintain open‑source, day‑zero support for emerging models, especially for Indian language models.
A partnership between AMD and the METI Startup Hub was reaffirmed to accelerate AI adoption among Indian start‑ups and SMEs.
Commitment to build AI solutions on open standards and open‑source software to enable ecosystem interoperability and avoid vendor lock‑in.
AMD will showcase and make available its edge AI portfolio (e.g., Gene01, low‑power accelerators) for developers targeting far‑edge applications.
Unresolved issues
A concrete framework for federating compute, data, and secure cloud operations across national labs, academia, and industry remains undefined.
Specific processes and tooling for implementing human‑in‑the‑loop governance at scale were not detailed.
An implementation roadmap for India’s five‑layer sovereign AI architecture, including timelines and responsible agencies, was not provided.
Funding mechanisms and cost‑sharing models for large‑scale public‑private AI initiatives were not clarified.
How to effectively integrate diverse accelerators (GPUs, TPUs, Inferentia, etc.) within the Indian ecosystem was raised but not resolved.
Strategies to ensure widespread AI adoption by SMEs, beyond the availability of compute resources, were not fully addressed.
Suggested compromises
Adopt an open ecosystem approach that balances the need for security and governance with the desire to avoid vendor lock‑in.
Combine high‑performance exascale compute for research with low‑power edge accelerators to meet both centralized and distributed AI workloads.
Leverage public funding for foundational infrastructure while encouraging private sector innovation and start‑up participation as a public‑private partnership model.
Thought Provoking Comments
In AI, there seems to be an over‑indexing of AI and GPUs. When in reality, AI is much broader. GPU is obviously a significant part, but we provide a full suite of AI capability from AI PCs to core infrastructure to the edge.
Challenges the common narrative that AI equals GPUs, expanding the conversation to include software, data, edge devices, and end‑to‑end ecosystems.
Set the thematic foundation for the whole panel, prompting later speakers to discuss not just hardware but software stacks, open ecosystems, and edge deployments. It reframed the discussion from a hardware‑centric view to a holistic AI‑readiness perspective.
Speaker: Thomas Zacharia
The Genesis Initiative – using AI to accelerate scientific discovery, reduce R&D costs, and create a federated, secure, cloud‑enabled lab environment that spans national labs, academia, and industry.
Introduces a concrete government‑driven program that ties AI to national priorities (science, energy, security) and highlights the need for public‑private partnership, data federation, and security‑by‑design.
Created a turning point where the conversation moved from abstract AI readiness to concrete policy and infrastructure models. It prompted Paneerselvam and Timothy to reference sovereign initiatives and public‑private collaborations.
Speaker: Thomas Zacharia
Innovation in AI didn’t happen magically with NVIDIA or AMD. It happened because the US government took the risk to invest in first‑of‑a‑kind systems.
Places government investment at the heart of breakthrough AI hardware, countering the narrative that private sector alone drives progress.
Reinforced the earlier point about sovereign AI and gave credibility to the idea that nations must fund ambitious compute projects. It resonated with later remarks about national labs and the need for sustained R&D funding.
Speaker: Thomas Zacharia
Governance does not mean regulation. It’s about keeping a human in the loop so that AI agents can accelerate innovation while ensuring outcomes are validated.
Distinguishes between regulatory constraints and practical governance mechanisms, introducing the concept of “human‑in‑the‑loop” as a safeguard for autonomous AI systems.
Shifted the tone from purely technical capability to ethical and operational responsibility, prompting Timothy to discuss day‑zero support and open‑source tooling that enable transparent, auditable pipelines.
Speaker: Thomas Zacharia
Start‑ups have a very critical role to facilitate AI readiness because they are AI‑natives; they can improve the readiness quotient for SMEs and ensure the technology spreads beyond large corporates.
Highlights the ecosystem role of startups as catalysts for diffusion, adding a socio‑economic dimension to the technical discussion.
Opened a new thread about how government programs can leverage startups, leading Timothy to describe concrete support mechanisms (developer cloud, Docker containers, accelerator programs) for early‑stage companies.
Speaker: Paneerselvam M
ChatGPT’s launch on 30 Nov 2022 changed everything. Things are moving so fast that the only way to succeed is an open ecosystem.
Marks a clear turning point by pinpointing a recent event that accelerated AI adoption and underscores the urgency of openness for adaptability.
Prompted the panel to focus on software openness, interoperability, and community‑driven standards. It set up Timothy’s later discussion of day‑zero support and open‑source stacks.
Speaker: Timothy Robson
Day‑zero support means a model runs on AMD out of the box, with optimized performance—no lock‑in, just open‑source tools like Primus, PyTorch, Triton that abstract the hardware.
Introduces a tangible benefit for developers and startups, bridging the gap between hardware capability and immediate usability.
Provided a practical illustration of the open‑ecosystem promise, encouraging participants to consider AMD’s developer resources. It also reinforced the earlier governance point by showing transparent, reproducible pipelines.
Speaker: Timothy Robson
Physical AI is moving to the edge—robots, vehicles, industrial networks need dedicated low‑power accelerators that can act without round‑trips to the cloud.
Expands the conversation to edge AI, emphasizing latency, reliability, and power constraints, and introduces a new class of hardware beyond traditional GPUs.
Shifted the discussion from data‑center centric compute to distributed, real‑time AI, prompting Thomas’s closing remark about lightweight edge solutions and reinforcing the need for a diversified hardware portfolio.
Speaker: Gilles Garcia
Stay curious. The future won’t be just thousands of GPUs; it will be a mix of powerful data‑center GPUs and lightweight, low‑power edge accelerators.
Synthesizes the multiple strands of the conversation into a forward‑looking call to action, emphasizing continuous learning and balanced investment.
Served as a concluding turning point that tied together hardware, software, governance, and ecosystem themes, leaving the audience with a clear, motivating takeaway.
Speaker: Thomas Zacharia
Overall Assessment

The discussion was driven by a series of pivotal comments that repeatedly broadened the scope from a narrow GPU‑centric view to a holistic AI‑readiness ecosystem. Thomas Zacharia’s opening remarks and the Genesis Initiative framing anchored the conversation in national‑level strategy, while his points on government‑driven innovation and governance introduced policy and ethical dimensions. Paneerselvam’s emphasis on startups added a socio‑economic layer, and Timothy’s focus on the rapid post‑ChatGPT shift and day‑zero support supplied concrete, actionable examples of an open, developer‑friendly ecosystem. Gilles Garcia’s edge‑AI insight further diversified the technical narrative, prompting a final call from Thomas to stay curious and balance data‑center power with edge efficiency. Collectively, these comments redirected the dialogue multiple times, deepened analysis, and aligned participants around the need for open standards, public‑private collaboration, and inclusive growth across hardware, software, and societal dimensions.

Follow-up Questions
How can compute and data be federated across national labs, academia, and private sector to support sovereign AI initiatives?
Integrating diverse data sources and compute resources is essential for accelerating scientific discovery and ensuring secure, collaborative research across government and industry.
Speaker: Thomas Zacharia
What governance and security mechanisms are needed to enable public‑private partnerships for AI while maintaining confidentiality and national security?
Ensuring secure, confidential computing and governance by design is critical for trust and compliance in sovereign AI deployments.
Speaker: Thomas Zacharia
How can low‑resource languages (e.g., Finnish) and Indian regional languages (e.g., Bodo, Konkani, Dogri, Sindhi, Nepali) be incorporated into large language models to create effective Indian LLMs?
Building LLMs that understand all Indian languages is vital for inclusive AI services and aligns with national AI‑for‑all initiatives.
Speaker: Timothy Robson
What are the best approaches to move AI research prototypes into enterprise‑ready tools that employees can use within corporations?
Bridging the gap between research and production ensures that AI innovations translate into real business value and adoption.
Speaker: Timothy Robson
How should organizations evaluate trade‑offs between different Kubernetes services (e.g., hyperscalers vs. neoclouds) for AI workloads?
Choosing the right cloud/Kubernetes platform impacts performance, cost, and agility for startups and enterprises deploying AI.
Speaker: Timothy Robson
How can AMD provide reliable ‘day‑zero’ support for newly released AI models to guarantee out‑of‑the‑box performance on its hardware?
Day‑zero support reduces integration friction for developers and accelerates adoption of new models on AMD GPUs.
Speaker: Timothy Robson
What hardware and software solutions are needed for physical AI at the edge (robots, autonomous vehicles, industrial systems) that are low‑power, reliable, and do not rely on cloud connectivity?
Edge AI requires specialized accelerators and a full software stack to enable real‑time decision‑making in safety‑critical applications.
Speaker: Gilles Garcia
How can an open ecosystem and open‑source tools be fostered to avoid vendor lock‑in and promote innovation across the AI community?
Open standards enable broader participation, interoperability, and faster advancement of AI technologies.
Speaker: Thomas Zacharia, Timothy Robson
What strategies can be employed to improve the AI readiness quotient for small and medium enterprises (SMEs) and startups in India?
Enhancing AI readiness among SMEs expands the economic impact of AI and ensures widespread adoption beyond large corporates.
Speaker: Paneerselvam M
How can talent development and access to compute resources be aligned to build national AI readiness?
A skilled workforce with adequate compute access is foundational for sustained AI innovation and competitiveness.
Speaker: Thomas Zacharia
What frameworks are needed to ensure human‑in‑the‑loop governance for agentic AI systems to prevent unintended consequences?
Human oversight is essential for safe deployment of autonomous AI agents, especially in critical scientific and security contexts.
Speaker: Thomas Zacharia

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.