Building the AI-Ready Future From Infrastructure to Skills
20 Feb 2026 11:00h - 12:00h
Session at a glance
Summary
This discussion focused on building AI readiness and capabilities, featuring speakers from AMD and the Indian government’s METI Startup Hub at what appears to be an AI summit in India. Thomas Zacharia from AMD opened by emphasizing that AI extends far beyond just GPUs, encompassing a full suite of capabilities from AI PCs to edge computing. He discussed the U.S. Department of Energy’s Genesis Initiative, which aims to use AI to accelerate scientific discovery and reduce the gap between R&D funding and research output efficiency. The initiative represents a public-private partnership where AMD’s MI355 cluster will power the American Science Cloud for government functions including scientific discovery, energy, and national security.
Zacharia highlighted AMD’s leadership in high-performance computing and stressed the importance of open ecosystems and open-source platforms for innovation, drawing parallels to Android’s success in India. He introduced the concept of “agentic AI” with humans in the loop, where AI systems can operate autonomously but require human validation before implementing changes. The discussion showcased AMD’s Helios rack, which delivers 2.9 exaflops of AI compute in a single rack, demonstrating the rapid advancement toward zettascale computing.
Paneerselvam M from METI emphasized the overwhelming interest in AI in India, citing 267,000 registrations for the summit, and stressed the need for AI adoption across all business layers, not just large corporations. Timothy Robson from AMD focused on software aspects, discussing work with Finland on language models for smaller language populations and drawing parallels to India’s 22 official languages. He emphasized AMD’s day-zero support for new AI models and commitment to open ecosystems through PyTorch and other frameworks.
Gilles Garcia concluded by discussing physical AI and edge computing applications, highlighting the shift from cloud-based AI to edge devices for robotics and industrial applications. The speakers collectively emphasized that AI readiness requires talent development, open platforms, research enablement, and startup innovation, positioning India as well-suited for AI advancement given its software expertise and diverse linguistic landscape.
Key points
Major Discussion Points:
– Sovereign AI and National Readiness: Thomas Zacharia discussed the US Department of Energy’s Genesis Initiative, emphasizing how AI can accelerate scientific discovery and the importance of sovereign AI capabilities for government functions like healthcare, education, and national security. He highlighted the need for countries to develop their own AI infrastructure and capabilities.
– Open Ecosystem and Software Infrastructure: Multiple speakers emphasized the critical importance of open-source platforms and avoiding vendor lock-in. Timothy Robson particularly focused on AMD’s commitment to open standards, day-zero model support, and the ability to work across different hardware platforms through tools like PyTorch and open-source frameworks.
– Enterprise AI Adoption and Startup Enablement: The discussion covered the journey from proof-of-concept to production for enterprises and startups, with Paneerselvam M highlighting the role of startups as “AI natives” in helping small and medium enterprises improve their AI readiness quotient across India.
– Language Diversity and Localization: A significant focus on addressing linguistic challenges, particularly for smaller language communities. The speakers discussed work with Finnish language models and the challenge of incorporating India’s 22 official languages into AI systems, especially those with fewer than 5 million speakers.
– Physical AI and Edge Computing: Gilles Garcia introduced the concept of physical AI, discussing the shift from cloud-based AI to edge computing for robotics, industrial applications, and autonomous systems that require real-time decision-making without relying on cloud connectivity.
Overall Purpose:
The discussion aimed to showcase AMD’s comprehensive AI capabilities beyond just GPUs, demonstrate their commitment to open ecosystems, and explore partnerships with India’s government and startup community to build AI readiness from infrastructure to practical applications across various sectors.
Overall Tone:
The tone was consistently optimistic and collaborative throughout, with speakers expressing excitement about AI’s potential and India’s opportunities in the space. The discussion maintained an educational and partnership-oriented approach, with speakers encouraging curiosity and innovation while emphasizing the rapid pace of change in AI technology. The tone became increasingly encouraging toward the end, with multiple invitations for audience engagement and continued collaboration.
Speakers
Speakers from the provided list:
– Thomas Zacharia – AMD executive with 30+ years in computing, former leader of Oak Ridge National Laboratory, involved in supercomputing deployment and the Genesis Initiative at the US Department of Energy
– Moderator – Event moderator (no additional details provided about expertise/role)
– Timothy Robson – AMD executive, hardware engineer with chip design background, focuses on software aspects of AI and enterprise solutions
– Paneerselvam M – CEO of the METI Startup Hub at Ministry of Electronics and IT, Government of India; distinguished leader with over two decades of expertise in innovation, management, strategic growth and market development
– Gilles Garcia – AMD executive, French-based, covers worldwide operations focusing on physical AI for communications, robotics and industrial applications
Additional speakers:
None identified – all speakers in the transcript match the provided speaker names list.
Full session report
This discussion on building AI readiness from compute to capability featured speakers from AMD and India’s Ministry of Electronics and IT (METI) at a major AI summit in India. The conversation addressed the challenges and opportunities in developing national AI capabilities, covering infrastructure, inclusive approaches, and AI applications from cloud to edge deployment.
Sovereign AI and Research Acceleration
Thomas Zacharia from AMD, formerly of Oak Ridge National Laboratory, opened by addressing declining R&D efficiency despite increased funding. He noted that while the US spends approximately $1 trillion annually on R&D (with government contributing 20-30%), research output returns are diminishing as problems become more complex. This challenge led to the Genesis Initiative, kicked off by the Trump administration, which positions AI as a solution to accelerate scientific discovery.
The Genesis Initiative focuses on three areas: discovery science, energy, and national security. Zacharia emphasized this framework could be adopted internationally, with countries like Japan, the UK, and European nations potentially joining sovereign AI development efforts. The initiative recognizes that certain AI applications require government-led coordination beyond private sector capabilities.
The programme’s implementation through the American Science Cloud, powered by AMD’s MI355 cluster, demonstrates public-private partnerships in advancing national AI capabilities. This infrastructure supports research across healthcare, education, and advanced scientific domains requiring large-scale, coordinated efforts.
Open Ecosystems and Computing Infrastructure
A central theme was the importance of open ecosystems in AI development. Zacharia drew parallels between Android’s success in India and the potential for open AI platforms to enable broader participation in semiconductor and AI ecosystems. This approach allows innovation without vendor lock-in, creating opportunities for participation at various technology stack levels.
AMD’s Helios rack delivers 2.9 exaflops of AI compute in a single rack consuming 220 kilowatts, a dramatic improvement in computational efficiency. It represents a significant advance over the exascale computing programme Zacharia helped initiate in 2007, which aimed to deliver exascale performance under 20 megawatts at a time when conventional scaling would have required 3-4 gigawatts.
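Those headline numbers can be sanity-checked with simple arithmetic. The rack figures (2.9 exaflops of FP4 AI compute, 220 kW) are from the session; the zettascale extrapolation below is our own back-of-envelope illustration, not a vendor specification, and FP4 AI flops are not directly comparable to the exascale programme's FP64 target:

```python
# Back-of-envelope check of the figures quoted in the session: one Helios rack
# delivering 2.9 exaflops of FP4 AI compute at 220 kW. Extrapolations are
# illustrative only, not vendor specifications.
RACK_FLOPS = 2.9e18       # 2.9 exaflops (FP4) per rack
RACK_POWER_W = 220e3      # 220 kilowatts per rack

# Racks needed to reach one zettaflop (1e21 flops) at this density:
racks_for_zetta = 1e21 / RACK_FLOPS
print(round(racks_for_zetta))            # roughly 345 racks

# Power per exaflop today, versus the exascale era's 20 MW/exaflop target
# (noting again that FP4 AI compute is not a like-for-like FP64 comparison):
watts_per_exaflop = RACK_POWER_W / (RACK_FLOPS / 1e18)
print(round(watts_per_exaflop / 1e3))    # roughly 76 kW per exaflop
```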
Timothy Robson, a hardware engineer who transitioned to software, reinforced the importance of vendor-agnostic development environments. He highlighted how frameworks like PyTorch, JAX, and Triton enable developers to write code running across different hardware platforms without vendor lock-in, noting PyTorch emerged from Microsoft and Meta’s efforts to avoid single-supplier dependence.
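As a concrete illustration of that vendor-agnostic point (a minimal sketch of our own, not code shown in the session): identical PyTorch code runs on NVIDIA GPUs via CUDA, on AMD GPUs via ROCm (whose PyTorch builds expose the same `cuda` device string), or on a plain CPU, with no vendor-specific branches.

```python
import torch

# Minimal sketch of hardware-agnostic PyTorch code: the same script runs on
# NVIDIA GPUs (CUDA), AMD GPUs (ROCm builds of PyTorch report these devices
# through the same "cuda" API), or a plain CPU, without vendor branches.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = torch.nn.Linear(16, 4).to(device)   # toy stand-in for any network
x = torch.randn(8, 16, device=device)       # a batch of 8 input vectors
y = model(x)

print(y.shape)  # torch.Size([8, 4]) on any backend
```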
Linguistic Diversity and Multilingual AI
Robson detailed AMD’s work with Finland’s Lumi supercomputer, providing insights into serving smaller language communities. Finnish, as a Uralic language unrelated to other European languages, presented challenges similar to those faced by several Indian languages including Bodo, Konkani, Dogri, Sindhi, and Nepali—each with fewer than five million speakers.
The work with Bloom, a 176-billion parameter model for European languages, demonstrated multilingual AI feasibility. This project ran on AMD infrastructure before ChatGPT’s widespread adoption, illustrating how government investments can enable breakthrough capabilities. AMD announced day-zero support for the first twelve Indian languages, representing progress toward inclusive AI systems, though serving all 22 official Indian languages while maintaining quality and cultural authenticity remains challenging.
Enterprise Adoption and Startup Ecosystem
Paneerselvam M from METI highlighted overwhelming AI interest across India, evidenced by 267,000 summit registrations that exceeded organizational capacity. He positioned startups as “AI natives” serving as crucial intermediaries helping small and medium enterprises improve their AI readiness.
Enterprise AI adoption shows significant variation in organizational readiness. Robson observed a spectrum from sophisticated customers discussing matrix optimization to those only articulating “we’re doing GenAI” without deeper understanding. This highlights the need for tailored AI implementation approaches across different organizational contexts.
The progression from proof-of-concept to production represents a critical challenge. Resources like AMD’s Developer Cloud, offering 50-100 hours of free compute time, address immediate technical barriers, but scaling from successful pilots to full production requires sustained support and partnership between government, industry, and startups.
Physical AI and Edge Computing
Gilles Garcia presented physical AI as a paradigm shift from cloud-centric to edge computing applications. His focus on robotics, industrial automation, and autonomous systems highlighted applications where real-time requirements make cloud connectivity impractical. The CES demonstration of Gene01, a humanoid robot built on AMD technology, illustrated practical applications creating systems that can perceive, visualize, touch, and react autonomously.
Physical AI represents opportunities for countries like India to participate through applications requiring different technological approaches than traditional cloud-based GPU systems. Industrial automation, medical devices, autonomous vehicles, and smart infrastructure enable substantial innovation and economic development through edge computing capabilities.
Technical requirements for physical AI—including safety, reliability, and real-time response—demand different hardware and software design approaches. These systems must operate autonomously while maintaining safety and reliability standards often more stringent than cloud-based applications.
Governance and Human Oversight
Zacharia distinguished between regulation and governance, introducing “agentic AI” with human-in-the-loop validation. This approach manages autonomous systems operating at machine speed while maintaining human oversight for critical decisions.
The inner-loop/outer-loop architecture allows autonomous AI operation for routine tasks, potentially coordinating thousands of agents across large-scale infrastructure, while requiring human validation before implementing broader changes. This acknowledges AI's potential to operate at scales and speeds beyond human capability while recognizing the continued importance of human judgment and responsibility.
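One way to picture that architecture is a sketch like the following (the class and function names are our own illustration, not anything described verbatim in the session): agents freely propose changes in the inner loop, but nothing is committed until a human approves it in the outer loop.

```python
from dataclasses import dataclass

# Illustrative sketch only: an "inner loop" where agents generate candidate
# changes at machine speed, and an "outer loop" gate that withholds every
# change until a human reviewer approves it.

@dataclass
class Proposal:
    description: str
    approved: bool = False

class AgenticRun:
    def __init__(self):
        self.pending = []   # proposals awaiting human validation
        self.applied = []   # changes actually committed

    def inner_loop(self, hypotheses):
        # Agents may explore many hypotheses autonomously...
        for h in hypotheses:
            self.pending.append(Proposal(f"update based on: {h}"))

    def outer_loop(self, human_review):
        # ...but nothing is committed without a human in the loop.
        still_pending = []
        for p in self.pending:
            if human_review(p):
                p.approved = True
                self.applied.append(p)
            else:
                still_pending.append(p)
        self.pending = still_pending

run = AgenticRun()
run.inner_loop(["simulation A", "simulation B", "simulation C"])
run.outer_loop(lambda p: "B" in p.description)  # the human approves only one
print(len(run.applied), len(run.pending))       # 1 applied, 2 still pending
```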
National AI Strategy and Implementation
The discussion outlined comprehensive national AI readiness encompassing talent development, compute access, research enablement, startup innovation, and enterprise adoption. METI’s multi-vendor infrastructure approach incorporates GPUs, TPUs, Inferentia, and other accelerators, demonstrating practical approaches to building national capabilities without vendor lock-in.
Research enablement emerged as critical, with speakers emphasizing environments where AI applications are continuously questioned and improved rather than simply accepting industry solutions. This supports indigenous AI capability development and ensures national strategies aren’t entirely dependent on external providers.
Future Directions
The speakers noted AI’s rapid evolution from deep learning through mixed precision computation to generative AI, with governance becoming increasingly important as systems become more capable and autonomous. The trajectory toward zettascale computing suggests computational limitations will continue diminishing, enabling increasingly complex problem-solving.
The discussion emphasized remaining curious and adaptable, noting AI represents the fastest technology adoption in human history, progressing from one million to one billion active users in just years. This underscores both opportunities and challenges in this technological transformation.
Conclusion
This comprehensive discussion revealed AI development as requiring coordinated efforts across technical, social, economic, and governance domains. The speakers moved beyond typical technology presentations to address how AI can serve national development goals while remaining inclusive and responsible.
The emphasis on open ecosystems, linguistic diversity, human oversight, and broad adoption provides a framework balancing innovation with social responsibility. The focus on India’s opportunities—leveraging software expertise, linguistic diversity, and startup ecosystem—suggests practical pathways building on existing strengths.
The integration of sovereign AI initiatives, enterprise adoption challenges, startup ecosystem development, and physical AI applications provides a comprehensive view extending beyond traditional cloud computing. The speakers’ commitment to continued collaboration, evidenced by ongoing AMD-METI relationships and concrete resources like developer cloud access, demonstrates pathways for translating strategic discussions into practical action.
Session transcript
So congratulations to all of you. You should be proud. And I just want to say that on behalf of the 30,000 AMDers worldwide, and particularly 10,000 in India, I just want to congratulate you and thank you for this opportunity to have this discussion. Since we are a small group, I think we’ll keep it informal. And I want to make sure that somebody please keep track of time so that I do justice to my colleagues here on the dais. The topic that I’ve been asked to talk about is building AI readiness from compute to capability. In the field of AI these days, there seems to be an over-indexing of AI on GPUs.
When in reality, AI is much broader. GPU is obviously a significant part; it’s a part of the core infrastructure. But what we do at AMD is really provide a full suite of AI capability, from AI PCs to core infrastructure all the way out to the edge. And I have my colleague Tim from AMD, so we decided that we’re going to tag team. So I’m going to focus perhaps a little bit on the sovereign side, and then Tim can focus on the enterprise side. Is that okay with you? So let’s just talk about sovereign AI in practice and explore the motivators. This particular slide was created by the Department of Energy in the United States as part of a new initiative that was kicked off by the Trump administration called the Genesis Initiative.
And I had a role to play in trying to support and craft this initiative, and the framing is very simple. Look at the top line. (I don’t know whether this has a pointer; it’s okay.) The top line, the white line, is funding in the United States for R&D. Today, the United States spends about a trillion dollars a year on R&D. Not all of that is government spending; it’s roughly 20 to 30% U.S. government and the rest industry. The bottom line is what we consider research output efficiency. So the problems are getting harder; it is getting more challenging. Even though we are trying to tackle really important problems, the sense is that throwing money at them is not having the same rate of return, and this slide basically asks the question: how do we reduce the gap? At least the thesis for the Genesis mission is that you can use AI as a way to accelerate scientific discovery. The Genesis mission has three areas of importance. For people who don’t know about the US Department of Energy: it is the nation’s largest physical science agency. It operates through 17 national labs, and some of the earliest ones, like Oak Ridge National Laboratory, which I used to lead before joining AMD, came into being during the Manhattan Project.
And about 65% of the entire funding of the Manhattan Project was at Oak Ridge National Laboratory. In fact, I think the Prime Minister mentioned this about nuclear energy: both the destructive aspect as well as the significant outcomes that came out of it, from nuclear medicine to the nuclear navy to nuclear energy. These can all be traced back to the Manhattan Project. So the U.S. Department of Energy is not only responsible for energy; it’s really a science organization. It’s got three priorities. One is discovery science. The second is energy. And the third is national security. America has a really interesting way of keeping the nuclear arsenal away from the military, in the sense that it is the U.S. Department of Energy, and not the U.S. Department of Defense or Department of War, that is responsible for the nuclear arsenal.
And the three lab directors, at Los Alamos, Livermore, and Sandia, have to certify each year to the President of the United States that the arsenal is ready. So this is a piece of the hypothesis. If you think about research, you can look at the left side. It starts with a hypothesis, then you conduct experiments and get the data. And today, you take the data, use AI, machine learning, et cetera, and you get analysis. What you’re trying to do is make this much faster so that you can have science outcomes coming out, do it at reduced cost, because you cannot keep throwing more and more money at this problem, and enhance global collaboration. I think there is a genuine interest on the part of the U.S. that this whole premise is not just a U.S. issue.
And so I think there are likely to be announcements suggesting that countries like Japan, Europe, the UK and others may be part of this overall approach to drive sovereign AI for those aspects of AI deployment and scaling that are uniquely a government or state function. So, as I mentioned broadly: scientific discovery, energy and national security. But if you take scientific discovery one step further, then you will see healthcare, education, skilling, all these things. Fundamentally a government function. And this is not an easy task, because if you think about how research is done at these institutions: as I mentioned, a large fraction of it is in the private sector, a lot of it is done in academia funded by government, and then of course in national labs in the United States. India has its own set of national labs, academia, et cetera.
So what you need to do is take a look at how you integrate all this data. The U.S. Department of Energy operates these large, multi-billion dollar light sources, neutron sources, specialized scientific experiments. You need to be able to incorporate all these things, so you have to federate the compute and data. You have to have cloud-enabled lab operations, which is not how things are done today. Security and governance by design, especially when you’re thinking about public-private partnerships: even at enterprise commercial scale, you want to make sure that you have secure computing, you have confidential computing, you can maintain integrity. And if you think about national security, you have an additional layer. And then you want composable, standards-based infrastructure.
So this particular program was kicked off by Secretary Wright, well, by the President of the United States and then Secretary Wright, in the fourth quarter of last year, and the first announcement was done with Lisa Su, our CEO, because one of the things that they wanted to do was a unique public-private partnership. The core infrastructure, currently called the American Science Cloud (the program is just being stood up), is going to run on an MI355 cluster, which is what this entire program aimed at driving innovation is going to run on. And so we are really excited to be a part of this: initially a US effort, and soon an international effort, to drive innovation in those areas that are uniquely a government function.
I’ve had a ringside seat in computing for the last 30 years and been responsible for a lot of supercomputing deployments, a dozen or so. The last four or five of them were number one systems in the Top500, each a first of a kind. This is another important thing. Innovation: if you think about AI, AI didn’t happen magically with NVIDIA or AMD. It happened because the US government took the risk to invest in first-of-a-kind systems. So we were the first to deploy 30,000 NVIDIA GPUs when people thought that CUDA was a four-letter word. Now everybody thinks that this is this amazing software, but change comes hard to people. And so I just want…
I want you to know, particularly all of you who are youngsters, that things are going to evolve. AI is, just like the Prime Minister said, in its early stages. So you have to be open and you have to be part of this drive for effective, scalable and impactful AI. Then deep learning came, then this mixed-precision computation, then generative AI, and last year was really agentic AI, and some of us think that this year we’re going to focus increasingly on governance. Governance does not mean regulation. There is a role for regulation by governments, but governance is this: if you want to have agentic systems driving and accelerating innovation, you want to make sure that the output has a person in the loop.
One simple way to think about it: if you are researchers here, if you have a professor who’s got a dozen students doing research, you don’t let the students just go publish things. There is the professor’s responsibility, there is the peer review committee, et cetera. So you want that human in the loop before you update, while also letting this thing do the things that AI does best. So this is how we think about compute to capability, a model of national AI readiness. We want it to rest on talent and the readiness of talent, giving people access to compute and models. Research enablement is key, because you want people to operate AI in an environment where you’re questioning things and innovating all the time, as opposed to assuming that what we in the industry are providing you is the only solution.
So I think, if you look at countries that are leading in AI, there is a very strong R&D and innovation foundation that allows them to lead, because there are people who are questioning, every time somebody says something, to make sure that it is validated and continues to innovate. Startups and innovation labs matter because you want to take these ideas and start new companies; many of these new innovations and new technologies may be led by people with new ideas and opportunities. And of course, ultimately, enterprise and public sector adoption. We strongly believe, and again I heard Prime Minister Modi say this, in open ecosystems, open source, open platforms. If you think about iOS and Android, I find India has a lot of penetration of Android systems, because inherently open systems allow you to innovate without getting locked into vendors.
And so we at AMD have a commitment to make both our hardware infrastructure and our software infrastructure based on open standards so that you can innovate around this; any part of this infrastructure can be part of a new startup or new company adding to it. That is also an important way for India to become part of the supply chain and the semiconductor ecosystem, because you don’t have to start with an attempt to go in for two or three nanometers. You can actually do amazing work and be part of leading-edge technology at different form factors. So I mentioned a little bit about how we think about agentic flows and how AI can work. This is simply the way you think about it.
The inner loop is an autonomous loop where AI and agentic AI do what they can do fast. If you have 100,000 GPUs, you have 100,000 agents tackling this problem, and it can actually go through the hypothesis-driven experiments and systems. So you can do simulation, campaign-scale coordination, machine-speed execution, et cetera. But we do not allow it to update the outcome until a human in the loop has had the opportunity to validate it, to make sure that we don’t have unintended consequences. Now, how do you build this thing? If you haven’t gone to the AMD booth, I would encourage you to do so. This is my only plug in this presentation.
We spent a ton of money to bring this Helios rack here just so that you can have a sense, not of what this particular rack can do, but a glimpse of what is possible next year and the year after. In 2007, myself and two of my colleagues started what is called the Exascale program. And the challenge was to deliver an exascale system for under 20 megawatts, because if you had just scaled the capability available in 2007, it would have taken three to four gigawatts. And we knew that the government was not going to sign off on $4 billion for power, just electricity alone, to run the computer. So we were motivated to drive that.
And we delivered that first exascale system, Frontier at Oak Ridge, for less than 20 megawatts. Everybody thought it was crazy, that it could not be done, but there are some things where, when you set audacious goals, people rally around and then deliver. In this particular rack, in one rack, there are 72 GPUs that will deliver 2.9 exaflops of AI compute, which is FP4, not FP64, just to be very clear. But for AI capability, you get 2.9 exaflops of compute capability for 220 kilowatts. Right? That, even for somebody who’s been in this field for a long time, is just mind-blowing. This is where we are headed. AI is the fastest adoption of any technology that humanity has introduced. We’ve gone from 1 million active users to 1 billion in a matter of just a couple of years, and we are headed to 5 billion users. So there is a lot of opportunity to innovate in this field, and all of us are going to continue to create these opportunities. As Lisa said, we are entering the zetta scale, so already people are thinking about the next 1,000x. Let me just say, you can get to zettascale by just taking 300 of those racks and putting them together, and then it’s another 3x. So I would say in the next 10 years maybe we would be at this 10,000x factor. So the kind of problems that you are thinking about should not be constrained by what you can do today; by the time you figure out the solution for an important problem, the compute will be there.
That is what we in the industry like to promise you. And advancing national economies: you would be forgiven if you wondered whether AMD does these things and how prevalent our compute capabilities are. I think Tim is going to tell you that our GPUs and our systems are in every hyperscaler globally, and when it comes to HPC and national priority missions, AMD is the leader. If you listened to President Macron, he referenced Alice Recoque, the first AI factory that the French government, the CEA, announced, which is based on the AMD MI430X, a variant of the MI450 on the right that you see outside.
I will close by saying that a shared path forward is really what we are looking for. I know India is in the early stages and we are really delighted to actually have this conversation. Thank you very much.
I’d like to invite our next speaker, Paneerselvam M, CEO of the METI Startup Hub at Ministry of Electronics and IT, Government of India. Dr. Paneerselvam M is a distinguished leader with over two decades of expertise in innovation, management, strategic growth and market development. He’s been instrumental in advancing India’s startup ecosystem and fostering impactful partnerships between the government, industry and entrepreneurs. In his
drawing insights out of this data, and then comes the interface layer, where most of it is going to be driven by agents, by agentic AI. And of course, as Thomas mentioned, there is always going to be a human-in-the-loop perspective, but as we progress this is going to change as well. So there are two fundamental things that I want to share. One is that the entire transformation in the readiness space for AI is an opportunity: the intent needs to be very, very clear, then comes the curiosity of each business owner to learn about this a little bit more, and then comes the implementation part of it. And startups have a very, very critical role in facilitating this, because you are coming in almost as AI natives, working with an understanding of this, and you can really go out and demonstrate value.
And help implement the entire readiness, improve the readiness quotient for small and medium enterprises, and ensure that this is a broad-based growth opportunity for businesses across the country, not limited to just a few of them, right? So there is huge potential, and I think enough has been spoken. The summit itself is proof of that kind of curiosity. We have had 267,000 registrations, people who have registered in the last five days. An unexpected, overwhelming response, to the extent that we couldn’t really handle it, right? At the same time, it gives us immense pride and excitement to see the amount of curiosity among the youngsters in India, across India.
They travel here from the length and breadth of the country to understand what AI is going to be, how this is going to have an impact, and what the opportunities are, and that in itself is a fantastic starting point. And as I said, there is a lot happening: Indian sovereign models are coming, and across the tech stack’s five layers, the infrastructure, the design, all the layers are being worked upon in the Indian context. We are ready as a nation, we are ready as a government, to facilitate this really disruptive, transformative technology. But the truth is, it is also important for it to populate all the layers of society, be it not just large corporates but also small and medium enterprises. Of course, it has already populated well into the D2C space, to individual users, and it’s much, much beyond the ChatGPTs of the world.
So with that, I once again take the opportunity to thank the entire team from AMD. We have had some interesting conversations, and I look forward to the continued partnership between AMD and the METI Startup Hub, because in our perspective, corporates have a huge role to play in the success of startups. Thank you.
Thank you. There's a couple of things that I want you to think about as I go through my talk. 30th of November, 2022. The world changed. ChatGPT was launched. And I'm willing to bet that for everyone in this room, myself included, what we thought we knew about AI changed. Two years ago, what we thought, even after ChatGPT, changed. A year ago, what we thought changed. And what I'm hoping is that as you leave here, other things will have changed in the last 45 minutes of listening to these talks. Okay, I'm going to skip through the reasons why we need compute. But one thing that is very, very important is that things are moving incredibly fast.
And things are moving in a way that we cannot predict, so the only way that anybody is going to be successful is with an open ecosystem. The speakers before me have alluded to this theme as well. I'm going to take you through it specifically around software. Everything to do with AI really; I'm a hardware guy, I used to design chips, but everything today is software, right? And I was talking to one of my colleagues and I said, okay, so I'm going to India, I'm going to do all this, we're going to go through it. And I asked, are they really going to understand what I want to talk about?
And he said, Tim, India is software. This is what we do. He said, you're going to be in front of the best people in the world, people who are going to understand exactly what you want to talk about. So I'm really going to focus on the software side. And one of the things I wanted to do, knowing that we had our esteemed colleague from METI here, is highlight that we have lots and lots of experience in this space, in particular some work that we did with Lumi in Finland. Now, why is this important? Within Europe, almost all the languages are Indo-European, right? If you know a little bit of Greek, a little bit of Latin, a little bit of one of the languages, you can get by; there are 27 countries in Europe,
so let's call it 27 languages. And then you have Finland. Finnish is a Uralic language, with nothing to do with any other language in Europe: an absolutely different construct, a different base, different absolutely everything. What we found working with the team in Finland is that they came to us after putting in the Lumi supercomputer and said: okay, we are a small country in Europe with 5 million native speakers. All of this training work has been done on big corpora of English, Spanish, Hindi. Suddenly you have a language of 5 million people; how do you get that language into your LLM so that it becomes useful? Now, I'm probably going to get the pronunciation really wrong here, but I did actually use ChatGPT to look at the 22 official Indian languages. If we look at Bodo, Konkani, Dogri, Sindhi, or Nepali, fewer than 5 million people speak each of those languages. So how do you get an Indian LLM that caters for everybody, the "AI for all" we've heard from Prime Minister Modi? This is the kind of area where, together with METI, we would like to work with you and bring some of the benefit of the work we've been able to do. Now remember the first date, the 30th of November 2022. This machine was inaugurated, all of the systems put together and brought up, the chips made years before, on my birthday, the 13th of June 2022, six months before ChatGPT came out. So this machine, with 12,000 GPUs and the foresight of the Finnish government, was using AMD technology to run AI before ChatGPT came out.
So a lot of people think that a lot of AI has come from one specific place. Again, change that way of thinking: we were there, and we have the ability. We actually did the Bloom 176-billion-parameter model, an open model made for European languages. So again, we would love to bring this knowledge into the Indian ecosystem to make it successful for everybody. I'm not going to spend a lot of time on hyperscalers. They're obviously an important part of the market; it's where a lot of the capability goes. We're there; we have tens of thousands of GPUs. And, as Thomas mentioned, we have the Helios system coming here.
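The small-language problem described above is visible even before training, at the tokenizer level: a byte-pair-encoding (BPE) vocabulary learned from one language's text fragments words from an unrelated language into many tiny pieces, inflating sequence lengths and degrading model quality. Below is a toy, standard-library-only BPE sketch to illustrate the effect; it is a generic textbook version of the algorithm, not the actual Lumi or Bloom pipeline:

```python
from collections import Counter

def train_bpe(corpus, num_merges):
    """Learn BPE merge rules from a list of words (toy version)."""
    vocab = Counter(tuple(word) for word in corpus)
    merges = []
    for _ in range(num_merges):
        # Count adjacent symbol pairs, weighted by word frequency.
        pairs = Counter()
        for word, freq in vocab.items():
            for a, b in zip(word, word[1:]):
                pairs[(a, b)] += freq
        if not pairs:
            break
        best = max(pairs, key=pairs.get)
        merges.append(best)
        # Rewrite the vocabulary with the best pair merged everywhere.
        new_vocab = Counter()
        for word, freq in vocab.items():
            merged, i = [], 0
            while i < len(word):
                if i + 1 < len(word) and (word[i], word[i + 1]) == best:
                    merged.append(word[i] + word[i + 1])
                    i += 2
                else:
                    merged.append(word[i])
                    i += 1
            new_vocab[tuple(merged)] += freq
        vocab = new_vocab
    return merges

def tokenize(word, merges):
    """Apply learned merges to a new word, in training order."""
    tokens = list(word)
    for a, b in merges:
        out, i = [], 0
        while i < len(tokens):
            if i + 1 < len(tokens) and tokens[i] == a and tokens[i + 1] == b:
                out.append(a + b)
                i += 2
            else:
                out.append(tokens[i])
                i += 1
        tokens = out
    return tokens

# A vocabulary learned only from English-like words...
merges = train_bpe(["the", "then", "there", "these", "thin"], num_merges=8)
print(tokenize("there", merges))       # compact: one learned token
print(tokenize("tervetuloa", merges))  # Finnish "welcome": falls back to single characters
```

Scaled up to real corpora, this asymmetry is why a Finnish-focused or Bodo-focused model needs its own tokenizer and training data rather than reusing an English-centric vocabulary.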
Please go and take a look at it. If you like hardware, it's an interesting piece of kit. But really, the idea here is that whether you're in a hyperscaler or in any other area, there is the ability to have a wider ecosystem. And again, on inference: there was an idea in the market that AMD was inference-only. That dates from Q1 2024; it's two years old. So we have to change that thinking, right? That's older thinking. We are now completely open source. There's the Primus ecosystem, an open-source tool that enables you to do all of the training you need for your Indic languages or for your use cases, and which again is completely open.
Enterprise AI. This one I think is an interesting one. I know when I started going out to customers and going out to enterprise customers, the difference in customer knowledge on what AI was, was amazing. You go into one customer and they say, okay, so this is our use case and we’re seeing these kinds of sizes of matrices, so we’re doing these optimizations. And then you go into another customer and you say, what are you doing around AI? And the guy goes, oh yeah, we’re doing Gen AI. Okay, great, yeah, what are you doing with Gen AI? We’re using LLMs. Okay, great, so using LLMs, what do you think? LLMs. And they had no idea, right?
It's just: we have to do something with AI. That has changed over the last 18 months. A chatbot was something most people could make sense of: okay, I understand a chatbot, we can fine-tune the model, we can do an internal AI system within the company. And now, with agentic workflows, we're starting to see this entire plethora of different use cases coming through. So how do you take it from a research institute, or from the people who actually get onto your accelerator, whether that's a GPU or a TPU or an FPGA or whatever else, to a stage where people within a corporation can actually use it? This is something that has been understood.
And again, no lock-in, open: everything here can be used without tying you into one particular area. I'll come back to it a little later as well. It's also something that has impressed me about the infrastructure that METI has put into place. In this case, with the public-private partnership, you have GPUs, you have TPUs, you have Inferentia, you have all of the different types of accelerators available to you within the Indian ecosystem that METI has made available. I'll come back to that later. But the idea here is that whatever the ecosystem, whatever the compute you're using, whether it's in the cloud or on-prem, you have the ability to give the employees within your enterprise an AI assistant or tool.
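One way to picture that hardware-agnostic layer is a backend registry: application code calls one high-level operation, and a dispatch table routes it to whichever implementation the platform provides. The toy Python sketch below shows the pattern only; it is not PyTorch's or Triton's actual dispatch mechanism:

```python
# Toy backend dispatch: the same high-level matmul call runs on whichever
# "backend" is registered, mimicking how frameworks abstract the
# underlying accelerator away from user code.
BACKENDS = {}

def register(name):
    """Decorator that records an implementation under a backend name."""
    def deco(fn):
        BACKENDS[name] = fn
        return fn
    return deco

@register("reference")
def matmul_reference(a, b):
    """Plain-Python GEMM: C[i][j] = sum_k A[i][k] * B[k][j]."""
    rows, inner, cols = len(a), len(b), len(b[0])
    return [[sum(a[i][k] * b[k][j] for k in range(inner))
             for j in range(cols)] for i in range(rows)]

def matmul(a, b, backend="reference"):
    # User code never names the hardware; the dispatch layer does.
    return BACKENDS[backend](a, b)

print(matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]
```

In a real framework the registry holds vendor kernels (CUDA, ROCm, XLA, and so on), which is why the same Python model code can move between accelerators.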
Neo clouds. These are what we call the smaller clouds; they're not the hyperscalers. They're a little more nimble, a little more open to doing things differently. A lot of them offer bare metal and managed Kubernetes services, but they're also evolving into API providers, token factories. They can provide you with compute quickly, easily, and at reasonable pricing, to enable whatever it is you're trying to do. We find these are the first movers in the market. And in the same way that we're integrated and working with the hyperscalers, we have these relationships with the neo clouds, and we're working with quite a few of them here in India as well, to make that available to you. So the whole idea here is that the compute is available: please go out, understand the benefits and trade-offs between the different services out there, and get the right solution for you.
Now, I'm assuming that most people here are going to be startups. And a startup is an interesting position, right? You know what you want to do; you are laser-focused on getting your MVP out there, getting in front of customers, generating some value, generating some revenue. Although that seems less and less important these days, as people sometimes get funding even before a product. But one of the things you have to be sure of is that the compute and the capabilities you have can actually support the products you then have to go and put into position.
And so this is an area where we understand that proof of concept is very important. I was chatting with the CEO of the METI Startup Hub before this, and it's something he was saying: POC to PO. You have to be able to make sure you understand the technology and how you can take it to market before you can actually go and invest. So we have a couple of different ways we can help here within the ecosystem. You could actually go on there right now: there's the AMD Developer Cloud. You can get, I think it's 50 or 100 hours of free compute. You want to see how AMD works for you?
It's always going to depend on your use case and what you're trying to do. But there is a huge TCO advantage, which of course is important for startups. Get onto the Dev Cloud and get it working. We provide Docker containers, with everything put into a single image, so you can download a container and run it without spending your time and energy installing all of the software and putting everything together; we've done all of that for you. Pull the container down, get your model and your weights off Hugging Face, or use your own model and do something else. Whatever is there in the open-source ecosystem is there, and it's going to work.
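As a rough sketch, the workflow above might look like the following. The `rocm/pytorch` image and the ROCm device flags are the publicly documented ones, but the image tag, flags, and the example model name should be checked against current AMD and Hugging Face documentation:

```shell
# Pull AMD's prebuilt ROCm + PyTorch container (no manual stack install)
docker pull rocm/pytorch:latest

# Run it with access to the ROCm GPU device nodes
docker run -it \
  --device=/dev/kfd --device=/dev/dri \
  --group-add video --ipc=host \
  rocm/pytorch:latest bash

# Inside the container: fetch a model and its weights from Hugging Face
# (the model name here is just an illustrative example)
python -c "
from transformers import pipeline
pipe = pipeline('text-generation', model='Qwen/Qwen2.5-0.5B-Instruct')
print(pipe('Hello from ROCm')[0]['generated_text'])
"
```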
Give it a go. Give it a play. And then, from that, we can take you into our accelerator cloud: a little more hands-on, making sure we understand what you're doing, helping, guiding, and assisting you in moving forward. And then, of course, we have relationships within the industry: try-and-buys, getting you access to the compute, getting you the right solution at the right kind of price. So this is something else I really want to highlight: day-zero support of models. We announced this. Qwen3 Coder came out last week: day-zero support on AMD.
Baidu came out with one of their Paddle models this week: day-zero support on AMD. What does day-zero support mean? It means it's not the first time we've seen this code. It runs on AMD. It's guaranteed. It's optimized. A lot of people think that to run something in AI you need one specific GPU; the whole point of day-zero support is that this is absolutely false. Again, with Lumi, pre-ChatGPT in 2022, we were building LLMs for, effectively, Indic-type languages. So the ability is there: if there's a new model coming out and you want to run it, test it, see how it works for you, it is there and runs out of the box. And if we look at this line in the middle: PyTorch. If you look at the history of PyTorch, there were lots of signatories to make sure it was available for everybody, and AMD was one of them. It mainly comes out of Microsoft and Meta, who did not want to be locked in to a single supplier. So what you're doing with PyTorch is writing Python code, not vendor-specific code. It's an open ecosystem; that's the whole point. You don't want to be tied in; that stifles innovation and drives up cost. So PyTorch came out, and that is the basis for 99% of all of the customers I talk to, right?
They're all writing Python on PyTorch. JAX is then coming forward. Triton is a Python-like language specifically for GEMM optimization. If you're getting to the point where you're actually looking at the GEMM sizes coming through from your operations and want to do GEMM-level optimization, Triton lets you do that at the compiler level. Then you can be completely agnostic of the underlying hardware: the ecosystem and the underlying compute become abstracted away, because Triton lets you run on anything; it just needs a compiler backend for each new architecture. If we look at these models at the bottom here: Prime Minister Modi this week announced the first 12 Indian languages.
All fully supported, with day-zero support. Just to give you an example here: DeepSeek. When DeepSeek came out, they did some things that were a little special; multi-head latent attention was new. We had day-zero support for DeepSeek. Why? Because we're one of the main contributors to SGLang. There's no tie-in to an inference engine here; it's an open ecosystem. So we were able to come out of the door with better TCO, better performance, better cost, and full support through SGLang for that DeepSeek model, which was the leader of its time, because of our complete commitment to the open ecosystem. Just to give you an idea: again, you're walking out of here in 45 minutes with changed ideas; that's what we're going for. I did have two minutes, I now have five; I don't know who bought me extra time, but I owe you a beer. Okay, so that's really the end of the pitch.
One thing I would say is that we do have a booth here, at 5.10. I'm sorry, I'm going to do a little bit of an AMD plug at the end here, but do come by and see us. We actually have some of the neo clouds there, some model creators, vendors, some ecosystem partners. Come see, come change your mind. Come see what's available within an ecosystem, with the compute that's available for you. Okay, thank you.
So first of all, I'm Gilles Garcia. I'm French, so we can talk about LLMs for the French language if you want. I'm based in France, but I cover worldwide, and I focus on physical AI for communications, robotics, and industrial. We have been talking a lot about AI, and most people think AI means GPUs and big cloud. What we are seeing is a big shift, another change: AI is moving to the edge, and to the far edge, which is industrial, robotics, vehicles, as well as the networks. For that you need a different type of beast. GPUs are one aspect of it, but you need profoundly different technology, which AMD has as part of the broad portfolio that we have. These devices need to sense data, act, and react so quickly that there is no time to go back to the cloud.
And so these technologies, and of course that will be inference, need to be able to take decisions and act very safely and reliably, without having to rely on the cloud. That's a new change we're seeing at AMD around physical AI, which will become very, very important for us: how do we take what we have learned in the cloud and make it available in physical AI? Software is a big thing. Full stack, meaning hardware and software together, able to deliver solutions to the customer, is what AMD is aiming for. As our CEO, Lisa Su, was saying: it's AI anywhere, and one size does not fit all.
Meaning that if you want to address a robot, you can't just put a GPU into it; it would burn far too much power. You need a very dedicated accelerator with a high-quality, open-source software stack that lets the robot perceive, visualize, act, touch, and respond according to its purpose. At CES in early January, Lisa Su brought on stage Gene01, the first humanoid built on AMD technology. That's just impressive. Everything was done by a startup in Italy to make this humanoid able to sense, visualize, feel when somebody is touching it and when it's touching something, and act and react very rapidly without having to rely on a centralized resource.
So I will not go on longer than that. Physical AI is probably something India will have a lot to act on. GPUs are already there, whereas physical AI is something you will have to create: a lot of things related to medical, autonomous networks, autonomous cars, autonomous plants, industrial. That's where I think India will start, with all the startups and the capability to use accelerators that are much smaller than GPUs, and this is all available today in the AMD portfolio. So I will stop here, encourage you to come to the AMD booth, and we can continue the discussion. Thank you.
Well, we gave you a lot of information on AI, in four different accents; I think the French guy probably carries the day. But my one message is: stay curious. As all of us have said, things are going to change, and continue to change, at a rapid pace. People talk about so many thousands of GPUs, but that will not be the main thing. You will find there is a whole lot of interest in providing ever more powerful GPUs for the infrastructure, while at the same time providing very lightweight, low-power compute at the edge. So stay curious. From a startup point of view, from a research point of view, from an academic point of view, look for really interesting problems and challenges, and build toward the infrastructure that you need, because ultimately it is the applications that are going to change society and life. That's all, thank you very much.
Thank you. Thank you.
Thomas Zacharia
Speech speed
129 words per minute
Speech length
2769 words
Speech time
1283 seconds
Broad AI capability beyond GPUs
Explanation
Thomas Zacharia stresses that AI extends far beyond just GPU hardware, warning against an over‑reliance on GPUs for AI workloads. He highlights that AI’s scope is much broader and that focusing solely on GPU count will miss essential capabilities.
Evidence
“When in reality, AI is much broader.” [2]. “In AI, in the field of AI these days, there seems to be an over-indexing of AI and GPUs.” [3].
Major discussion point
Major discussion point 1: Sovereign AI and national initiatives
Topics
Artificial intelligence | The enabling environment for digital development
Genesis Initiative to accelerate scientific discovery, energy, and national security
Explanation
Zacharia describes the U.S. Department of Energy’s Genesis Initiative, a national program that leverages AI to speed up scientific discovery, improve energy outcomes, and strengthen national security. He links this effort to broader sovereign AI goals and collaborative research across labs.
Evidence
“this slide is something that is created by the Department of Energy in the United States as part of a new initiative that was kicked off by the Trump administration called the Genesis Initiative.” [18]. “So as I mentioned broadly, scientific discovery, energy and national security.” [16].
Major discussion point
Major discussion point 1: Sovereign AI and national initiatives
Topics
Artificial intelligence | Environmental impacts
Public‑private partnership with secure, federated compute and governance by design
Explanation
He argues that sovereign AI requires secure, confidential computing and governance built into the architecture, especially when public and private entities collaborate. Federated compute and data, together with cloud‑enabled labs, are essential for trustworthy AI deployment.
Evidence
“Security and governance by design, especially when you’re thinking about public-private partnerships, even in the enterprise commercial scale, you want to make sure that you have secure computing, you have confidential computing, you can maintain integrity, but also if you think about national security, you have additional layer, and then you want composable standards versus infrastructure.” [31]. “You need to be able to incorporate all these things, so you have to federate the compute and data, you have to have cloud-enabled lab operations, which is not how things are done today.” [35].
Major discussion point
Major discussion point 1: Sovereign AI and national initiatives
Topics
Data governance | Building confidence and security | The enabling environment for digital development
Commitment to open standards and open ecosystem
Explanation
Zacharia emphasizes AMD’s pledge to base both hardware and software on open standards, enabling innovation and avoiding vendor lock‑in. He aligns this with broader calls for open ecosystems and open source platforms at the national level.
Evidence
“And so we at AMD have a commitment to make both our hardware infrastructure and our software infrastructure to be based on open standards so that you can innovate.” [56]. “We strongly believe, and again I heard Prime Minister Modi say, open ecosystem and open source, open platforms.” [68].
Major discussion point
Major discussion point 3: Open ecosystem and software stack
Topics
Artificial intelligence | Internet governance | The enabling environment for digital development
Paneerselvam M
Speech speed
156 words per minute
Speech length
534 words
Speech time
205 seconds
Start‑ups as AI‑native drivers for SME readiness
Explanation
Paneerselvam highlights that startups, being AI‑native, are crucial for raising the AI readiness quotient of small and medium enterprises. Their agility and deep AI expertise enable them to demonstrate value quickly across the Indian business landscape.
Evidence
“and help implement the entire readiness, improve the readiness quotient for small and medium enterprises and ensure that, you know, this is a broad-based growth opportunity for businesses across the country and not limited to just a few of them, right?” [41]. “Start-ups have a very, very critical role to facilitate this because you are coming in almost as AI natives, working with an understanding of this and you can really go out and demonstrate value.” [42].
Major discussion point
Major discussion point 2: Startup and SME role in AI readiness
Topics
The enabling environment for digital development | Capacity development | Artificial intelligence
Government facilitation for inclusive, nation‑wide AI adoption
Explanation
He stresses that the Indian government is prepared to support AI deployment across all societal layers, not just large corporations. This inclusive approach aims to bring AI benefits to SMEs, consumers, and remote communities alike.
Evidence
“we are ready as a nation, we are ready as a government to facilitate this real disruptive transformative technology, but it is also important for it to populate into all the layers of the society, not just limit itself to large corporates but also to small and medium enterprises…” [47].
Major discussion point
Major discussion point 2: Startup and SME role in AI readiness
Topics
The enabling environment for digital development | Closing all digital divides | Artificial intelligence
Timothy Robson
Speech speed
167 words per minute
Speech length
2753 words
Speech time
986 seconds
AMD Developer Cloud with free compute credits
Explanation
Robson points out that developers can instantly access AMD’s cloud platform and receive 50‑100 hours of free compute, lowering barriers for proof‑of‑concept work and accelerating AI experimentation.
Evidence
“You could actually go on there right now, there’s the AMD Developer Cloud.” [55]. “You can get, I think it’s 50 or 100 hours of free compute.” [63].
Major discussion point
Major discussion point 2: Startup and SME role in AI readiness
Topics
Artificial intelligence | Financial mechanisms | The enabling environment for digital development
Open ecosystem as a prerequisite for AI success
Explanation
Robson argues that success in AI now depends on an open ecosystem that prevents lock‑in and fosters rapid innovation. He notes that the industry is moving toward openness as the only viable path forward.
Evidence
“And things are moving in a way that we cannot predict that the only way that anybody is going to be successful is an open ecosystem.” [70]. “there’s not a tie-in to an inference engine here, it’s an open ecosystem so we were able to come out of the door with better TCO, better performance, better cost and full support through SGLang on that DeepSeek model…” [71].
Major discussion point
Major discussion point 3: Open ecosystem and software stack
Topics
Artificial intelligence | Internet governance | The enabling environment for digital development
Open‑source tools (Primus, day‑zero support) for multilingual LLM development
Explanation
He highlights AMD’s Primus ecosystem and day‑zero model support that enable developers to train LLMs for Indic languages and other regional tongues without vendor constraints, promoting linguistic diversity in AI.
Evidence
“There’s a Primus ecosystem or a Primus tool for the open source, which enables you to be able to do all of the training that you need to do for all of your Indic languages or for your use cases, which again is completely open.” [76]. “Baidu came out with one of their paddle models this week, day zero support on AMD.” [59].
Major discussion point
Major discussion point 3: Open ecosystem and software stack
Topics
Artificial intelligence | Closing all digital divides | Capacity development
Hardware‑agnostic software frameworks (PyTorch, Triton) enable AI on any accelerator
Explanation
Robson explains that frameworks like Triton abstract away hardware specifics, letting models run on GPUs, TPUs, FPGAs, or other accelerators. This hardware‑agnostic approach broadens AI accessibility across diverse compute environments.
Evidence
“The ecosystem and the underlying compute becomes kind of abstracted away because Triton enables you to run on anybody.” [84]. “so then you can be completely agnostic of what the underlying hardware is.” [83].
Major discussion point
Major discussion point 3: Open ecosystem and software stack
Topics
Artificial intelligence | Internet governance | The enabling environment for digital development
Gilles Garcia
Speech speed
177 words per minute
Speech length
624 words
Speech time
211 seconds
Physical AI shifting to the edge requires low‑power accelerators
Explanation
Garcia notes that AI workloads are moving from data‑center GPUs to edge devices such as robots and autonomous vehicles, demanding specialized, low‑power accelerators that AMD’s portfolio can provide.
Evidence
“we are seeing the AI moving into the edge and moving into the far edge which is industrial, robotics vehicles … GPUs is one aspect of it but you need to have very profound different technology” [6]. “accelerators that are much smaller than what GPUs are, and this is all available today in the AMD portfolio.” [7].
Major discussion point
Major discussion point 4: Edge/Physical AI
Topics
Artificial intelligence | Environmental impacts | The enabling environment for digital development
Full‑stack hardware‑software solution needed for robotics and autonomous systems
Explanation
He stresses that delivering AI at the edge requires an integrated stack of hardware and software, with open‑source tools that enable perception, actuation, and decision‑making in real time.
Evidence
“Full stack, meaning hardware and software being able to deliver to the customer solutions, is what AMD is aiming for.” [58]. “you need to have a very dedicated accelerator with a high software stack, open source, that will be able to have this robot perceiving, visualize, act, touch and be able to act accordingly to what his purpose will be.” [87].
Major discussion point
Major discussion point 4: Edge/Physical AI
Topics
Artificial intelligence | Internet governance | The enabling environment for digital development
Gene01 humanoid showcases successful edge AI deployment on AMD technology
Explanation
Garcia cites the Gene01 humanoid, unveiled at CES, as a concrete example of AMD‑powered edge AI, demonstrating the feasibility of sophisticated AI agents operating outside traditional data centers.
Evidence
“At CES early January, Lisa Su brings on stage Gene01, which was the first humanoid built on AMD technology.” [102].
Major discussion point
Major discussion point 4: Edge/Physical AI
Topics
Artificial intelligence | Social and economic development | The enabling environment for digital development
Moderator
Speech speed
11 words per minute
Speech length
65 words
Speech time
337 seconds
Facilitating India’s startup ecosystem and partnerships
Explanation
The moderator acknowledges the pivotal role of the AMD‑MITEI partnership in advancing India’s startup ecosystem, emphasizing collaboration among government, industry, and entrepreneurs to drive AI adoption.
Evidence
“He’s been instrumental in advancing India’s startup ecosystem and fostering impactful partnerships between the government, industry and entrepreneurs.” [95].
Major discussion point
Major discussion point 2: Startup and SME role in AI readiness
Topics
The enabling environment for digital development | Financial mechanisms | Capacity development
Agreements
Agreement points
Open ecosystems and standards are essential for innovation and preventing vendor lock-in
Speakers
– Thomas Zacharia
– Timothy Robson
– Paneerselvam M
Arguments
Open systems like Android allow innovation without vendor lock-in, which is crucial for India’s participation in the semiconductor ecosystem
PyTorch, JAX, and Triton provide vendor-agnostic development environments that prevent technological lock-in
Open source approach enables startups to demonstrate value and help implement AI readiness across small and medium enterprises
Summary
All speakers strongly advocate for open-source approaches and standards that prevent vendor lock-in, enable innovation, and allow broader participation in the AI ecosystem
Topics
The enabling environment for digital development | Artificial intelligence | The digital economy
AI requires comprehensive infrastructure beyond just GPUs
Speakers
– Thomas Zacharia
– Gilles Garcia
Arguments
AI requires full suite capabilities from PCs to edge computing, not just GPU-centric solutions
Physical AI is moving to edge computing requiring different technologies than cloud-based GPUs for robotics and industrial applications
Summary
Both speakers emphasize that AI deployment requires diverse technological approaches spanning from cloud infrastructure to edge computing, not just GPU-focused solutions
Topics
Artificial intelligence | Information and communication technologies for development
Startups play a crucial role in AI democratization and need accessible resources
Speakers
– Timothy Robson
– Paneerselvam M
Arguments
Startups need accessible compute resources and proof-of-concept capabilities to move from POC to purchase orders
Open source approach enables startups to demonstrate value and help implement AI readiness across small and medium enterprises
Summary
Both speakers recognize startups as key drivers of AI adoption and emphasize the need to provide them with accessible resources and support to validate their concepts
Topics
The enabling environment for digital development | Financial mechanisms | Artificial intelligence
Language diversity presents significant challenges in AI development
Speakers
– Timothy Robson
Arguments
Finland’s experience with Uralic language processing demonstrates challenges of incorporating smaller languages into LLM models
Indian languages like Bodo, Konkani, Dogri, Sindhi, and Nepali, each with fewer than 5 million speakers, need specialized attention in AI development
Summary
Timothy highlights the specific challenge of incorporating languages with smaller speaker populations into AI systems, using both Finnish and Indian languages as examples
Topics
Artificial intelligence | Closing all digital divides | Social and economic development
Similar viewpoints
Both speakers emphasize the importance of comprehensive AI infrastructure and immediate technical support to enable broad AI adoption and development
Speakers
– Thomas Zacharia
– Timothy Robson
Arguments
National AI readiness requires talent development, compute access, research enablement, startup innovation labs, and enterprise adoption
Day zero support ensures new AI models run optimally on AMD hardware immediately upon release
Topics
Artificial intelligence | Capacity development | The enabling environment for digital development
Both speakers recognize the need for human oversight and the varying levels of AI literacy that require different approaches to AI deployment and governance
Speakers
– Thomas Zacharia
– Timothy Robson
Arguments
Agentic AI systems require human-in-the-loop validation before updating outcomes to prevent unintended consequences
Enterprise customers show varying levels of AI understanding, from sophisticated matrix optimization to basic chatbot concepts
Topics
Artificial intelligence | Human rights and the ethical dimensions of the information society | Capacity development
Both speakers emphasize the critical role of public-private partnerships and corporate support in enabling startup success in the AI ecosystem
Speakers
– Paneerselvam M
– Timothy Robson
Arguments
Corporates play a crucial role in startup success, requiring continued partnerships between government and industry
Startups need accessible compute resources and proof-of-concept capabilities to move from POC to purchase orders
Topics
The enabling environment for digital development | Financial mechanisms
Unexpected consensus
Human oversight remains essential even in advanced AI systems
Speakers
– Thomas Zacharia
– Timothy Robson
Arguments
Agentic AI systems require human-in-the-loop validation before updating outcomes to prevent unintended consequences
Enterprise customers show varying levels of AI understanding, from sophisticated matrix optimization to basic chatbot concepts
Explanation
Despite representing a technology company that promotes AI capabilities, both speakers unexpectedly emphasize the continued importance of human oversight and validation in AI systems, showing a balanced approach to AI governance
Topics
Artificial intelligence | Human rights and the ethical dimensions of the information society
Small language communities require specialized attention in AI development
Speakers
– Timothy Robson
Arguments
Finland’s experience with Uralic language processing demonstrates challenges of incorporating smaller languages into LLM models
Indian languages like Bodo, Konkani, Dogri, Sindhi, and Nepali, each with fewer than 5 million speakers, need specialized attention in AI development
Explanation
It’s unexpected for a technology company representative to specifically highlight the challenges faced by smaller language communities, showing genuine concern for linguistic inclusion rather than just focusing on major market languages
Topics
Artificial intelligence | Closing all digital divides
Overall assessment
Summary
The speakers demonstrate strong consensus on the importance of open ecosystems, comprehensive AI infrastructure, startup support, and inclusive development approaches. They share a vision of democratized AI access while maintaining human oversight and addressing linguistic diversity challenges.
Consensus level
High level of consensus with significant alignment on key principles of open development, inclusive AI deployment, and the need for comprehensive ecosystem support. This consensus suggests a mature understanding of AI development challenges and a commitment to sustainable, inclusive growth in the AI sector.
Differences
Different viewpoints
Focus on cloud-based AI versus edge computing applications
Speakers
– Timothy Robson
– Gilles Garcia
Arguments
PyTorch, JAX, and Triton provide vendor-agnostic development environments that prevent technological lock-in
Physical AI is moving to edge computing requiring different technologies than cloud-based GPUs for robotics and industrial applications
Summary
Timothy focuses on cloud-based AI solutions and vendor-agnostic frameworks for traditional AI applications, while Gilles emphasizes the shift toward edge computing and physical AI applications that require different technological approaches than cloud-based systems
Topics
Artificial intelligence | Information and communication technologies for development
Emphasis on governance versus technical implementation
Speakers
– Thomas Zacharia
– Timothy Robson
Arguments
AI governance focuses on ensuring responsible deployment rather than just regulation, similar to academic peer review processes
Day zero support ensures new AI models run optimally on AMD hardware immediately upon release
Summary
Thomas emphasizes the importance of AI governance, human oversight, and responsible deployment frameworks, while Timothy focuses more on technical implementation, immediate compatibility, and removing barriers to AI adoption
Topics
Artificial intelligence | Human rights and the ethical dimensions of the information society | The enabling environment for digital development
Unexpected differences
Priority between linguistic inclusion and technical optimization
Speakers
– Timothy Robson
– Thomas Zacharia
Arguments
Indian languages like Bodo, Konkani, Dogri, Sindhi, and Nepali, each with fewer than 5 million speakers, need specialized attention in AI development
Genesis Initiative aims to use AI to accelerate scientific discovery and reduce the gap between R&D funding and research output efficiency
Explanation
While both speakers support AI development, Timothy emphasizes linguistic diversity and inclusion for smaller language communities, while Thomas focuses on scientific discovery and research efficiency. This represents different priorities in AI development – social inclusion versus scientific advancement
Topics
Artificial intelligence | Closing all digital divides | Social and economic development
Overall assessment
Summary
The speakers showed remarkable alignment on core principles like open ecosystems, comprehensive AI development, and the need for broad accessibility, with disagreements mainly on emphasis and approach rather than fundamental goals
Disagreement level
Low to moderate disagreement level. The differences were primarily about focus areas (governance vs. technical implementation, cloud vs. edge computing, linguistic inclusion vs. scientific advancement) rather than opposing viewpoints. This suggests a healthy diversity of perspectives within a shared vision for AI development, which could actually strengthen overall AI ecosystem development by addressing multiple critical aspects simultaneously
Partial agreements
Partial agreements
All speakers agree on the importance of open ecosystems and avoiding vendor lock-in, but they emphasize different aspects: Thomas focuses on national participation in semiconductor ecosystems, Timothy on technical frameworks and development environments, and Paneerselvam on startup enablement and SME adoption
Speakers
– Thomas Zacharia
– Timothy Robson
– Paneerselvam M
Arguments
Open systems like Android allow innovation without vendor lock-in, which is crucial for India’s participation in the semiconductor ecosystem
PyTorch, JAX, and Triton provide vendor-agnostic development environments that prevent technological lock-in
Open source approach enables startups to demonstrate value and help implement AI readiness across small and medium enterprises
Topics
The enabling environment for digital development | Artificial intelligence | The digital economy
Both agree on the need for comprehensive AI ecosystem development involving multiple stakeholders, but Thomas presents a broader national framework while Paneerselvam focuses specifically on corporate-startup partnerships and government facilitation
Speakers
– Thomas Zacharia
– Paneerselvam M
Arguments
National AI readiness requires talent development, compute access, research enablement, startup innovation labs, and enterprise adoption
Corporates play a crucial role in startup success, requiring continued partnerships between government and industry
Topics
Artificial intelligence | The enabling environment for digital development | Capacity development
Takeaways
Key takeaways
AI infrastructure is evolving beyond GPU-centric solutions to encompass full-stack capabilities from cloud to edge computing, with dramatic efficiency improvements (2.9 exaflops in one rack for 220 kilowatts)
Open ecosystem approach is critical for innovation and preventing vendor lock-in, enabling countries like India to participate in the semiconductor ecosystem without starting at cutting-edge manufacturing
Sovereign AI initiatives like the Genesis Initiative demonstrate how governments can leverage AI to accelerate scientific discovery and address the declining efficiency of R&D investments
Language diversity presents significant challenges for AI adoption, particularly for smaller languages with fewer than 5 million speakers, requiring specialized attention in LLM development
Physical AI represents a major shift toward edge computing for robotics and industrial applications, requiring different technologies than traditional cloud-based solutions
Human-in-the-loop governance is essential for agentic AI systems to prevent unintended consequences while maintaining innovation speed
India shows exceptional readiness for AI transformation with 267,000 summit registrations, indicating strong curiosity and engagement across the country
Enterprise AI adoption varies widely in sophistication, from advanced matrix optimization to basic chatbot understanding, requiring tailored approaches
Resolutions and action items
AMD committed to continued partnership with METI Startup Hub to facilitate AI readiness across small and medium enterprises
Participants encouraged to visit AMD booth (5.10) to see Helios rack demonstration and engage with ecosystem partners
AMD Developer Cloud access promoted with 50-100 hours of free compute for startups to test proof-of-concepts
Day zero support announced for new AI models, including recent releases such as Qwen3 Coder and Baidu PaddlePaddle models
The first 12 Indian languages now have full day zero support for AI applications
Public-private partnership model established for American Science Cloud running on MI355 cluster infrastructure
Unresolved issues
Specific implementation details for incorporating smaller Indian languages (Bodo, Konkani, Dogri, Sindhi, Nepali) into LLM models not fully addressed
Timeline and specific mechanisms for international expansion of Genesis Initiative beyond initial US focus unclear
Detailed technical specifications and deployment strategies for physical AI applications in Indian industrial contexts not elaborated
Specific funding mechanisms and resource allocation for startups transitioning from proof-of-concept to commercial deployment not detailed
Integration challenges between different accelerator types (GPUs, TPUs, FPGAs, Inferentia) in METI’s infrastructure not fully explored
Scalability concerns for moving from current AI adoption levels to projected 5 billion users not addressed
Suggested compromises
Balanced approach between autonomous AI operations and human oversight through inner-loop/outer-loop architecture for agentic systems
Multi-vendor approach in METI infrastructure providing access to various accelerator types (GPUs, TPUs, Inferentia) rather than single-vendor lock-in
Graduated compute access model from free developer cloud hours to accelerator programs to full commercial partnerships for startups
Open standards adoption (PyTorch, JAX, Triton) to enable vendor-agnostic development while maintaining performance optimization
Federated compute and data approach for scientific research to integrate private sector, academia, and national lab resources
Phased approach to AI readiness focusing on talent development, compute access, research enablement, and enterprise adoption rather than attempting simultaneous transformation
Thought provoking comments
The problems are getting harder. It is getting more challenging and even though we are trying to tackle really important problems the sense is that throwing money is not having the same rate of return… how do we reduce the gap and at least the thesis for the Genesis mission is that you can use AI as a way to accelerate scientific discovery
Speaker
Thomas Zacharia
Reason
This comment reframes AI not just as a technological advancement but as a solution to a fundamental crisis in research productivity. It challenges the assumption that more funding equals more innovation and positions AI as a paradigm shift in how scientific discovery operates.
Impact
This set the foundational framework for the entire discussion, shifting focus from AI as a commercial technology to AI as a tool for solving humanity’s most complex challenges. It established the context for discussing sovereign AI and national readiness beyond just economic competitiveness.
Innovation, if you think about AI, AI didn’t happen magically with NVIDIA or AMD. It happened because US government took the risk to invest in first of a kind systems. So we were the first to deploy 30,000 NVIDIA GPUs when people thought that CUDA was a four-letter word.
Speaker
Thomas Zacharia
Reason
This comment challenges the popular narrative about AI’s origins and highlights the critical role of government investment in breakthrough technologies. It reveals how current AI success stories were built on earlier public sector risk-taking that private markets wouldn’t support.
Impact
This insight shifted the conversation toward the importance of public-private partnerships and government leadership in emerging technologies. It provided historical context that supported arguments for sovereign AI initiatives and validated government investment in cutting-edge research infrastructure.
30th of November, 2022. The world changed. ChatGPT was launched… what we thought we knew about AI changed. Two years ago, what we thought even after ChatGPT changed. A year ago, what we thought changed. And what I’m hoping is as you leave here, other things would have changed in the last 45 minutes
Speaker
Timothy Robson
Reason
This comment captures the unprecedented pace of change in AI and challenges the audience to remain intellectually flexible. It emphasizes that expertise in AI requires constant learning and adaptation, not fixed knowledge.
Impact
This observation created a sense of urgency and openness in the discussion. It prepared the audience to question their assumptions and established the theme that success in AI requires embracing continuous change rather than relying on current understanding.
Finland is a Uralic language nothing to do with any other language in Europe… so if we look at Bodo, Konkani, Dogri, Sindhi, Nepali there’s less than 5 million people that speak those languages so how do you get an Indian LLM that caters for everybody
Speaker
Timothy Robson
Reason
This comment reveals a critical challenge in AI democratization – how to serve linguistic minorities in a world dominated by large-language models trained on major languages. It connects technical AI development to issues of cultural preservation and inclusive technology.
Impact
This shifted the discussion from general AI capabilities to specific challenges of linguistic diversity and cultural inclusion. It highlighted how technical solutions must address social equity and demonstrated the complexity of building truly inclusive AI systems for diverse populations.
We are seeing the AI moving into the edge and moving into the far edge which is industrial, robotics vehicles as well as the networks… these technologies need to be able to act, react in a so quick manner that there is no time to go back to the cloud for that
Speaker
Gilles Garcia
Reason
This comment challenges the cloud-centric view of AI and introduces the concept of autonomous, real-time AI systems that must operate independently. It expands the AI discussion beyond data processing to physical world interaction and safety-critical applications.
Impact
This broadened the conversation from computational AI to physical AI, introducing new considerations around safety, reliability, and real-time decision-making. It opened up discussion of AI applications in robotics, autonomous systems, and industrial automation that require fundamentally different approaches than cloud-based AI.
Governance does not mean regulation there is a role for regulation by governments but governance is that how do you want if you want to have agentic systems driving, accelerating innovation you want to make sure that the output has a person in the loop
Speaker
Thomas Zacharia
Reason
This comment distinguishes between regulatory compliance and responsible AI governance, introducing the concept of human-in-the-loop systems for agentic AI. It addresses the critical balance between AI autonomy and human oversight in high-stakes applications.
Impact
This comment introduced a nuanced approach to AI safety and control that went beyond simple regulation. It influenced the discussion toward practical frameworks for managing autonomous AI systems while maintaining human agency and responsibility.
Overall assessment
These key comments fundamentally shaped the discussion by expanding it beyond typical AI technology presentations to address deeper questions about AI’s role in society, governance, and human progress. The speakers successfully challenged conventional thinking about AI development, moving from a hardware-centric view to a more holistic understanding that encompasses linguistic diversity, physical world applications, and responsible governance. The comments created a progression from foundational challenges (research productivity crisis) through technical solutions (open ecosystems, edge computing) to societal implications (linguistic inclusion, human oversight). This elevated the conversation from a product showcase to a strategic dialogue about building inclusive, responsible, and nationally relevant AI capabilities.
Follow-up questions
How do we reduce the gap between R&D funding and research output efficiencies?
Speaker
Thomas Zacharia
Explanation
This addresses a fundamental challenge in scientific research where increased funding is not yielding proportional returns, suggesting need for new approaches like AI-accelerated discovery
How do you integrate data from private sector, academia, and national labs for federated compute and data systems?
Speaker
Thomas Zacharia
Explanation
This is crucial for implementing sovereign AI initiatives that require collaboration across different institutional boundaries while maintaining security and governance
How do you get languages with fewer than 5 million speakers into LLM models effectively?
Speaker
Timothy Robson
Explanation
This addresses the challenge of creating inclusive AI systems for smaller language communities, particularly relevant for Indian languages like Bodo, Konkani, Dogri, Sindhi, and Nepali
How do you create an Indian LLM that caters for everybody across all 22 Indian languages?
Speaker
Timothy Robson
Explanation
This relates to the national goal of ‘AI for all’ and ensuring linguistic diversity is preserved and supported in AI systems
How do you take AI capabilities from research institutes and accelerators to a stage where corporations can actually use them?
Speaker
Timothy Robson
Explanation
This addresses the critical gap between AI research and practical enterprise implementation, which is essential for widespread adoption
How do we make AI populate into all layers of society, not just large corporates but also small and medium enterprises?
Speaker
Paneerselvam M
Explanation
This is important for ensuring broad-based economic growth and preventing AI benefits from being concentrated only among large organizations
How do we address autonomous networks, autonomous cars, autonomous plants, and industrial applications in the Indian context?
Speaker
Gilles Garcia
Explanation
This explores opportunities for India in physical AI applications that require edge computing solutions rather than cloud-based GPU systems
How do we implement cloud-enabled lab operations when that’s not how things are done today?
Speaker
Thomas Zacharia
Explanation
This addresses the operational transformation needed in research institutions to support AI-driven scientific discovery
How do we ensure security and governance by design, especially in public-private partnerships?
Speaker
Thomas Zacharia
Explanation
This is critical for maintaining trust and safety in AI systems, particularly when dealing with sensitive national security and research data
Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.