Waves of Infrastructure: Open Systems, Open Source, Open Cloud

20 Feb 2026 18:00h - 19:00h


Session at a glance: summary, keypoints, and speakers overview

Summary

The session, led by Renu Raman, introduced Proximal Cloud’s vision of building enterprise private-cloud infrastructure that brings compute close to data for India’s large-scale AI needs [1-5]. Raman outlined historical technology cycles (semiconductor advances in the 1980s-90s, the cloud era of the last two decades, and the current AI wave driven by large language models) as the backdrop for a new “infant-scale” compute layer aimed at population-scale workloads [28-34][38-41]. She emphasized a growing demand-supply gap in AI-ready infrastructure, noting that global compute spending is projected to rise from $50 billion to $300 billion and eventually near $2 trillion, creating pressure for more affordable, distributed systems [47-49][57-60][46-47].


To address this, Proximal is partnering with UC San Diego’s new data-science institute to develop hardware kernels and inference engines for health-science and agriculture use cases [9-14][108-110]. A hardware-software co-design strategy with AMD is being pursued to combine x86 CPUs and high-memory GPUs capable of running 128-billion-parameter models, enabling single-node solutions for many customers [105-106]. The company defines “Proximal” as bringing compute nearer to data, memory, and the business domain, thereby supporting sovereign, low-cost AI deployments especially in India’s health and agriculture sectors [130-133].


A quoted clip from Jensen Huang reminded the audience that most data-processing workloads still run on CPUs, reinforcing the need for a balanced CPU-GPU stack in private-cloud offerings [97-104]. Divium, presented by Bharat Jain, offers an inference layer that automatically evaluates model quality, selects the best-performing model per dollar, and reduces cost and latency for enterprise pilots, as demonstrated by a 60 % cost cut for an Indian travel aggregator [156-182]. Instant System’s venture-builder Sandeep Kumar highlighted common AI challenges such as hallucinations, disambiguation, and data privacy, and claimed its platform achieves 99 % reliability while keeping costs low [196-205][207-226].


Infosys representative Arya Bhattacharjee argued that India’s AI future depends on software and AI capabilities rather than waiting for domestic chip fabrication, and cited on-premise data (over 90 %) as a driver for agentic AI solutions [244-252]. Raman quantified the infrastructure needed for a 10-gigawatt AI-ready capacity in India as roughly $250 billion in hardware, suggesting this scale could spawn a domestic ecosystem of semiconductor, OEM, and application companies akin to global SAP or Palantir players [315-322]. She also noted that emerging Indian manufacturers such as VVDN and public-market financing could supply chassis and board design, but substantial early-stage investment is required to bridge the demand-supply and skill gaps [378-395].


The discussion concluded that coordinated public, private, and venture funding, together with open-source model innovation and localized hardware, is essential for India to achieve sovereign, low-cost AI compute at population scale [274-288][298-303].


Keypoints


Major discussion points


Proximal Cloud’s vision and market focus – The presenters framed the company as a provider of “infant-scale” sovereign compute that brings processing close to data, especially for India’s massive population and for verticals such as health, education and agriculture [5-7][13-15][94-95][112-115][130-133].


Technology trends and infrastructure gaps – A recurring theme was the shift from CPU-only workloads to heterogeneous systems (CPU + GPU, new memory hierarchies, terabit-scale Ethernet) and the resulting demand-supply gap in compute, power and capital expenditure (≈$300 B now, projected to reach $2 T in the next decade) [28-34][57-61][78-80][84-88][115-118][41-47].


Strategic partnerships and concrete use-cases – The talk highlighted collaborations with UC San Diego, AMD (CPU/GPU blend), PharmEx (precision agriculture), Divium (LLM inference routing), ZetaVault and other ecosystem players to demonstrate real-world applications in education, health-sciences and industry [9-14][105-108][135-152][155-162][170-179][190-194].


Barriers to scaling Gen-AI pilots – The panel stressed three core obstacles: undefined quality metrics, unpredictable inference costs, and rapid model churn, which Divium aims to solve through automated model evaluation and routing [156-164][170-179].


India’s strategic opportunity and investment needs – Speakers argued that India’s push for 10 GW of AI-ready power translates into $250 B of hardware spend, creating space for new semiconductor, hardware, and software firms (potential “SAP-like” or “Palantir-like” companies). They called for coordinated public-private funding and a domestic manufacturing ecosystem to capture this value [113-115][250-258][315-322][378-386][390-398].


Overall purpose / goal


The session was designed to introduce Proximal Cloud, outline its technical roadmap and business model, showcase partner ecosystems and early use-cases, and position India as a fertile ground for a sovereign, low-cost AI compute stack. By mapping technology trends, market gaps, and investment requirements, the presenters aimed to attract collaborators, customers, and capital to accelerate the rollout of infant-scale AI infrastructure in India and beyond.


Overall tone and its evolution


Opening (0-15 min): Highly enthusiastic and visionary, emphasizing excitement about new offerings and the long-term “technology shifts” that will reshape computing [1-6][28-34].


Technical deep-dive (15-40 min): Shifts to a detailed, data-driven tone, citing historical cycles, infrastructure statistics, and hardware specifications [41-47][57-61][78-80].


Partner & use-case showcase (40-55 min): Becomes collaborative and demonstrative, highlighting concrete projects (UC SD, PharmEx, Divium) and their impact [135-152][170-179].


Strategic & policy discussion (55-70 min): Moves to a broader, forward-looking and somewhat persuasive tone, arguing for India’s sovereign AI agenda, the need for massive investment, and the potential emergence of new “national champions” [113-115][315-322][378-386].


Closing (70-71 min): Returns to an inclusive, call-to-action tone, inviting questions, emphasizing ecosystem building, and thanking participants [374-376][380-386].


Overall, the conversation progressed from excitement to technical depth, then to partnership validation, and finally to a strategic, policy-oriented appeal, maintaining an optimistic and collaborative spirit throughout.


Speakers

Full session report: comprehensive analysis and detailed insights

The session opened with Renu Raman welcoming the audience, announcing a flurry of activities and the recent launch of Proximal Cloud’s offering, and positioning the company as a provider of enterprise private-cloud infrastructure that brings compute close to data for the Indian market [1-5]. She outlined the agenda – setting the industry context, presenting partner ecosystems, and a planned Q&A that would have featured presentations from Bharat Jain and Zeta Bolt [6-8]. The recorded session, however, moved directly to partner talks without those presentations.


Raman highlighted the sponsorship of UC San Diego’s public-private AI initiative, noting the university’s new School of Computing, Information Sciences and Digital Sciences as a collaborative hub for health-science use cases [9-14]. She also referenced an AI-for-Education component that will provide Jupyter-style notebooks, a research-paper archive, and a commercial AI chat service [200-202]. An intended MRI-image demo, meant to illustrate the health-science applications she hopes to enable, could not be shown [203-204].


Shifting to a historical perspective, Raman reminded listeners that humanity tends to underestimate ten-year horizons while overestimating two-year gains [18-20]. She traced three major technology waves: the semiconductor boom of the 1980s-90s driven by Moore’s Law [29-30], the cloud era of the past two decades [31-33], and the current AI surge powered by large language models [34-38]. Emphasising a hardware-software co-design philosophy, she stated that “serious software teams should eventually design their own hardware, and serious hardware teams should design their own software” [34-36].


Raman then quantified the growing demand-supply gap in AI-ready infrastructure. Global compute spending has risen from roughly $50 B in 2000 to $300 B today, and is projected to approach $2 T within the next 5-10 years [41-47]. This surge translates into massive capital outlays for power, memory, networking and storage: roughly $3-5 of compute-related capital for every $1 spent on power [44-46]. She noted that AI will affect 95 % of work, far exceeding the productivity gains of the SaaS era, and therefore demands far greater compute capacity [38-40][57-61].


To address the gap, Proximal is forging strategic partnerships. The collaboration with UC San Diego’s data-science institute enables joint work on hardware kernels, inference engines and health-science applications [108-110]. A hardware-software co-design deal with AMD supplies a balanced CPU + GPU stack with high-capacity memory (256 GB of HBM, scaling to 512 GB) capable of hosting 128-billion-parameter models on a single node [105-108].
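The 128-billion-parameter figure can be sanity-checked with simple arithmetic (a sketch; the 2-bytes-per-parameter BF16 assumption is ours, not stated in the session):

```python
def model_memory_gb(params_billion: float, bytes_per_param: int = 2) -> float:
    """Approximate weight-only memory for a dense model.
    BF16/FP16 weights take 2 bytes per parameter."""
    return params_billion * 1e9 * bytes_per_param / 1e9  # decimal GB

# 128B parameters at 2 bytes each is exactly the 256 GB HBM cited:
print(model_memory_gb(128))     # 256.0
# 8-bit quantization would halve that, leaving room for KV cache:
print(model_memory_gb(128, 1))  # 128.0
```

In practice, KV caches and activations need additional headroom beyond the weights, which is presumably why the roadmap toward 512 GB parts matters for single-node serving.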


Raman also described the network and memory hierarchy evolution – moving from 10 GbE to 800 GbE and toward terabit links, and the debate between single-type versus multi-type memory architectures [205-207].
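The memory pressure behind this debate is easy to quantify for the KV caches Raman mentions later. A minimal sketch (the model shape below is a hypothetical 128B-class configuration with grouped-query attention, not a figure from the talk):

```python
def kv_cache_gb(n_layers: int, n_kv_heads: int, head_dim: int,
                seq_len: int, batch: int = 1, bytes_per_elt: int = 2) -> float:
    """Per-batch KV-cache footprint: one K and one V tensor per layer,
    stored in 16-bit precision by default."""
    return 2 * n_layers * n_kv_heads * head_dim * seq_len * batch * bytes_per_elt / 1e9

# Hypothetical 128B-class model: 96 layers, 8 KV heads (GQA), head_dim 128.
# A single 32k-token context already consumes ~13 GB of HBM:
print(round(kv_cache_gb(96, 8, 128, 32_768), 1))  # 12.9
```

Scaling that across concurrent sessions is what makes the memory hierarchy, rather than raw FLOPS, the dominant design constraint for inference systems.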


A quoted clip from Jensen Huang reinforced the CPU-centric nature of current data-processing workloads, noting that platforms such as Databricks, Snowflake and Oracle’s SQL engines still run almost entirely on CPUs [97-104]. He also announced an upcoming initiative to accelerate data processing, echoing Raman’s call for a heterogeneous “happy blend” of CPUs and GPUs [105-107].


Partner use-cases followed:


* PharmEx (Lalit Bhatt) showcased a precision-farming platform that integrates soil sensors, drone imaging and autonomous tractors. By placing inference locally, the solution reduces cost for cost-sensitive farmers (≈ ₹45,000 per unit) and supports applications such as irrigation scheduling, anomaly detection and yield prediction [135-152].


* Divium (presented by Bharat, Director at Divium) tackled the three killers of Gen-AI pilots: undefined quality, unpredictable costs and rapid model churn. The platform automatically evaluates model quality, routes queries to the most cost-effective model, and continuously upgrades without breaking production. Pilot results showed a 60 % cost reduction for a travel aggregator, and a 30 % latency improvement with 95 % case resolution for an e-pharmacy [156-186].


* Instant System (Sandeep Kumar) described a venture-builder framework that mitigates hallucinations, disambiguation errors, data-privacy breaches and reliability issues, achieving 99 % reliability while keeping costs low for enterprise AI agents [196-226].
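Divium’s actual routing logic is not public, but the quality-per-dollar pattern described above can be sketched in a few lines (all model names, scores, and prices below are illustrative assumptions):

```python
from dataclasses import dataclass

@dataclass
class ModelStats:
    name: str
    quality: float           # task-specific eval score in [0, 1]
    usd_per_1k_tokens: float # serving price

def route(candidates: list[ModelStats], min_quality: float = 0.85) -> ModelStats:
    """Pick the cheapest model that clears the quality bar;
    fall back to the highest-quality model if none do."""
    acceptable = [m for m in candidates if m.quality >= min_quality]
    if acceptable:
        return min(acceptable, key=lambda m: m.usd_per_1k_tokens)
    return max(candidates, key=lambda m: m.quality)

models = [
    ModelStats("frontier-large", 0.95, 0.030),
    ModelStats("mid-tier",       0.88, 0.004),
    ModelStats("small-open",     0.79, 0.0005),
]
print(route(models).name)  # mid-tier: cheapest model above the bar
```

Because evaluation scores are refreshed continuously, a new or upgraded model simply enters the candidate list and wins routing once it dominates on quality per dollar, which is one way to absorb model churn without breaking production.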


In the India-focused segment, an Infosys representative (Arya Bhattacharjee) argued that India’s advantage lies in software and on-premise AI rather than immediate chip fabrication. She cited that >90 % of enterprise data remains on-prem and that on-prem AI factories can cut fab-level costs by ≈ 25 % (≈ $10 M per day) [244-252]. Raman quantified the infrastructure required for a 10 GW AI-ready power capacity in India as roughly $250 B in hardware, a scale that could nurture a domestic ecosystem of semiconductor, OEM and application companies comparable to global SAP or Palantir players [113-115][315-322][378-398]. She pointed to emerging Indian manufacturers such as VVDN and Sanmina, and to public-market financing avenues, as the nascent supply chain needed to materialise this vision [378-386][390-398].
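Pairing the two headline numbers (10 GW of capacity, $250 B of hardware) implies a hardware intensity worth noting. This derivation is ours, not a figure stated in the session:

```python
power_gw = 10          # India's targeted AI-ready capacity
hardware_usd_b = 250   # Raman's hardware spend estimate

# Dollars of hardware per watt of AI-ready capacity
usd_per_watt = (hardware_usd_b * 1e9) / (power_gw * 1e9)
print(f"~${usd_per_watt:.0f} of hardware per watt")  # ~$25 of hardware per watt
```

That ~$25/W figure is roughly consistent with Raman’s earlier rule of thumb of $3-5 of compute capital per $1 of power spend, if one assumes typical electricity prices and a multi-year depreciation window.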


Latency emerged as a critical engineering target. Citing Google’s historic 20 ms query-response benchmark, Raman proposed a more realistic ≈ 120 ms target for population-scale services in India, arguing that additional compute resources and algorithmic improvements can achieve this goal [300-304]. Audience member Abhishek Singh asked whether sub-second or sub-millisecond responses could be delivered to 1.5 billion users at a cost of ~200 rupees per month; the panel discussed the feasibility of such ultra-low latency at massive scale [292-297].
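The proposed ≈120 ms target is best read as an end-to-end budget to be divided among network, queueing, and inference. One plausible decomposition (all component values below are illustrative assumptions, not figures from the talk):

```python
budget_ms = 120  # Raman's proposed end-to-end target for India

# Hypothetical breakdown of where the time goes for one request:
components = {
    "last-mile + backbone RTT": 40,
    "load balancing / queueing": 15,
    "tokenization + scheduling": 5,
    "model inference (first token)": 55,
}

used = sum(components.values())
print(f"used {used} ms, headroom {budget_ms - used} ms")  # used 115 ms, headroom 5 ms
```

Framed this way, the sub-millisecond figure raised from the audience would leave essentially no budget for wide-area networking, which is one argument for the geolocal, compute-near-data placement the session advocates.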


Raman framed open-source models as the next abstraction layer in distributed computing, likening them to the role hypervisors and Linux played in earlier eras. She explained that while closed models will continue to evolve within organisations like OpenAI and Google, the broader ecosystem will innovate around open models, potentially leading to a Distributed Computing 3.0 paradigm [274-288]. In response to a question from Abhishek, she emphasized that models now serve as a new “graph layer” underlying email, documents, Teams, etc., which she described as the most important enterprise database [308-310].


Funding mismatches were highlighted: government startup grants of ₹20 crore contrast sharply with the $100 M required for deep-tech talent, raising doubts about the availability of capital to build Nvidia-scale ventures in India [343-352]. Raman noted a “demand-supply-skill” gap that must be bridged through coordinated public-private investment and early-stage venture support [361-368][378-395].


In closing, Raman reiterated the need for a sovereign, low-cost, infant-scale compute layer that serves health, education and agriculture, invited participants to engage further with Proximal Cloud and its partners, and emphasized that realising India’s AI ambition will require long-term investment, ecosystem collaboration and a balanced hardware-software strategy [374-386].


Overall, the discussion displayed strong consensus on the necessity of heterogeneous compute, on-premise AI for data sovereignty, and the importance of latency and cost optimisation. Divergence remained on the relative emphasis of hardware versus software development and on the adequacy of current funding mechanisms. The participants collectively outlined a roadmap that blends visionary goals with concrete partnerships, use-case pilots and policy-level considerations to drive India’s transition to population-scale AI infrastructure.


Session transcript: complete transcript of the session
Renu Raman

Announcements and a lot of activities going on here this week. Excited about it. We are excited about introducing what we do and what we do more in the context of India. We just launched our offering and we’ll be talking more of what we do with our partners in the coming weeks and months. But today, I’d like to introduce ourselves. But before we introduce, we want to set the context of where we fit in, both in the industry trends and the ecosystem and what category we go after from an enterprise private cloud infrastructure. And then we’ll get into sharing some of our partners that we work with and then a Q&A at the end of it with a presentation from Bharat Jain and from Zeta Bolt.

We’ll have an interactive Q&A on some key top three questions or end questions that we think need to be answered. With that, let me start with the first. I want to thank our sponsors and our collaborators and partnerships at UC San Diego, where they have an initiative for public-private partnership at UC San Diego for AI for education, AI for research, and AI for industry. And we are one of the early industry partners. There’s a newly constituted data science and data center institute called School of Computing, Information Sciences, and Digital Sciences. And we’ll talk a little bit more about it downstream. But this collaboration enables us to not only work on technologies, but also look at key use cases, particularly on health sciences, because San Diego has got one of the largest health science, both hospital system as well as clinical research and variety of health and biotech research.

With the thesis that fundamentally computing is going to be driven by biology and health, it’s a very key partnership that we hope to work with going forward. With that, let me step back. This is my standard slide I use in any presentations in terms of long-term reminders, what happens in technology. So where we fit in, we’ll just walk through for the next 20, 30 minutes about what we are doing from a systems innovation, but the systems innovation is going to be punctuated or represented in the context of where the technology shifts that have occurred and will occur as we go forward. So simple reminders are: we, as humanity, underestimate. We overestimate what can be done in two years, but we underestimate what can be done in 10 years.

You can go back in history, look at self-driving cars, look at Neuralink. I remember a slide I had put at UC Berkeley, a conference about programming languages and productivity languages, and kind of a very tongue-in-cheek thought: you just have to think and write and get confused. And I thought, well, I’m going to call that out. That was in 2014. I’ll put the slides out later. I thought it would be science fiction, never happen for hundreds of years. But guess what? You can think, you can put a Neuralink, and probably have Cursor generate code for you today. That I never thought about in 2014. So never underestimate what will happen. The big technology shifts occur every 30 years, 15 years, 7 years.

But the key thing is semiconductors drove the technology innovation in the 80s and 90s, thanks to Moore’s Law. And the cloud phenomenon happened in the last two decades. I do see the pattern now as you are innovating, as you can see, where NVIDIA is innovating tremendously from the silicon side up. And of course, there are innovations going from the top-down, from the use case, from the language models, and higher order functions in AI. And both are coming at the same time, together. A third bullet I would say is, people who are serious about software should make their own hardware. The corollary is, people who are serious about hardware should also make their own software.

So I’m a hardware guy who’s done software, and in this venture I should be doing software first, going to the hardware later, a kind of reverse model. This is the last thing I’ll say about myself: my professional life has been shaped, luckily, by being between the 1980s and 2000, when there was the peak of Moore’s law, the exponential part. I happened to be lucky to have been part of the semiconductor innovation cycle, having developed and delivered a number of world-class microprocessors. So today we talk about models. There were only 4 or 5 guys who could do microprocessors; it’s difficult, with very small teams of 150 people. If you look at foundation models today, it’s the same characteristic. There are hard problems. Of course, it’s a lot more money: you need a billion dollars and lots of GPUs. But you still need the same 150 people to do the models; it’s not like everybody can do the models. So there is a similarity between what happened in the 90s with microprocessors and what I see today in model building. It’s the same level of complexity, where you need the best and brightest; roughly, it’s not me, it’s Sam Altman quoting, that it’s 120 people. I think that’s the difference, and you need to have them with the right resources and computing.

We also need to have a lot of computing resources to go build the models. So with that, let me start on what I think the next wave is; we hope to drive the innovation and disrupt in terms of systems building going forward. But the context for why it’s economically interesting and valuable: I think everybody knows, if you look at GDP, we’ve gone in the last 20 years from 33 trillion dollars to almost 100 trillion dollars, and by all accounts the GDP could improve by 2x or 4x in the next 20 years. But the SaaS era was really a productivity improvement, so it really scratched the surface of productivity, whereas AI is going to impact 95% of work. So the TAM is much bigger, the impact is much bigger, the blast radius is much bigger than the last 20 years.

That’s why the computing also is needed much more. We have gone from 50 billion dollars of infrastructure, I believe in 2000, to now about $250 to $300 billion of capital expenditure spent for infrastructure. So power in, capital spent: every dollar of power you spend ends up being $3 to $5 of capital for compute, memory, network, storage. And from there, you do the upper layers of software and then applications. So that $50 billion became $300 billion, but if you look at all the spending, we’re already at $400, $500 billion, and by all accounts in the next 5 to 10 years it will be almost $2 trillion of spend.

That creates, obviously, a big demand-supply gap. The great thing about programming is every time there is a layer of abstraction, the programming gets simpler, which means it brings more people to the party to be able to compute. I think what LLMs and natural-language and transformer models have done is enable everybody to program. We are all logical, we can think algorithmically, but not everybody could program. Finally we have a tool to be able to program in natural language; your mother, your grandmother can also talk to the computer and tell it what steps to take, and it will do the steps for you or tell you what steps to do. So that’s the fundamental shift, which means at population scale you’re going to have computing for everybody. That creates a huge gap; it’s not even 1000x, as Jensen would say, it’s a billion x, absolutely true. But it creates a big technology gap, a supply gap, and increasingly, because of models and languages and data, a sovereignty gap has also appeared; that’s the theme of the conference. That continues to drive a tremendous amount of demand. Now, we have seen a little bit of this before. I have been through the first two cycles of innovation in semiconductors, in my first job at Sun Microsystems, and then the dot-com era, and then now. There was always a demand-supply gap in each of these transitions, but we solved it in one way.

It doesn’t mean you can solve it the same way, but we are at the crux of solving it also in a similar way, but with a different set of boundary conditions, if you will. So what we solved between 1990 and 2000, if you look at, we went from clock rate, single CPU, to fundamentally shifting to multicore threaded and distributed systems, and that was the cloud phenomenon. I have a slide later to show what the transition was. I’ll probably skip this slide. I think everybody knows we need lots of power, and one interesting point is I think India is going from almost nothing, less than a gigawatt, to about a 10 gigawatt buildup, while the U.S. is going from 25 to 125 gigawatt, in other regions, and China.

EMEA is going to be, on a comparative basis, on a relative basis, a lot less. But the need for AI-ready geolocal data centers we already see; everybody is building out. And what is the infrastructure, what is the architecture to support that? There are certainly reference architectures inside the hyperscalers: Google has got a TPU-based infrastructure, AWS has got a Trainium-based infrastructure, Trainium plus general-purpose computing, and of course Microsoft has got Maia, and of course NVIDIA, and of course AMD. But increasingly over time you want to have an open multi-vendor strategy; that’s probably where we’ll talk more. So why do I believe these transitions and distributed systems are drivers of new innovation up and down the stack? This is not new; this has happened in history, starting from the VAX 11/780, which was disrupted by, of course, at that time the PC, but more so on the enterprise side by Sun and the workstation. If you think of the first distributed system in the modern era, it was Sun Microsystems, where Ethernet was used to build a distributed system, the network file system, and that was version one. And over time, it’s like evolution: you gain more mass, more momentum, more weight in your capabilities, and you end up building big monstrous machines like the E10K that drove the internet and the dot-com era. But that was also an Achilles heel, because that was not going to enable the much bigger scale that people had to go build. So Google was probably the epitome of the next big shift; I’ll talk about that. A similar thing we see today: we’ve gone from CPU-only dual-socket x86 memory clusters to heterogeneous compute, but also gone to a fairly large scale-up. Now, the interesting transition today, as then, is training and inferencing. As you’ve seen the news lately with the Grok acquisition by Nvidia and others, there’s clearly a separation between the training kind of workloads and inference type of workloads, and what kind of systems you want to support, because inference is going to drive a lot more of the compute. So the one way I think about it is: inference, and biology or workloads related to biology and healthcare, are going to be the drivers of computing like graphics was in the 1990s.

So this is back again to reinforce the point that between 1994 and 2005, we saw the shift going from version 1.0 of distributed systems to version 2.0, which is open source. So the first one was open systems in the first 20 years. And open source came and enabled a new way to build distributed systems, because from an economics standpoint, it removed the cost of the middleware software. Everybody got access; in this case, Linux versus Solaris. But that also enabled building truly hundreds of thousands of machines in a single cluster. And out of it came Borg, Kubernetes, a whole bunch of other distributed file systems, all kinds of innovations that happened. So the proposition here is I think we are at the cusp of similar things with the infant-scale computers.

So just a reminder. The punctuation that happens every time turns out to be, if you look back in history, Ethernet. Yes, the network is the computer, but more important, 10 megabit was the onset of replacing big mainframes or miniframes like the VAX with workstations and networks of workstations. Then, right at the point of 10 gigabit Ethernet coming around the 2000 to 2002 timeframe, along with multi-core, it enabled the new distributed building block.

We are at the same point. We have got 800 gigabit Ethernet, going toward terabit Ethernet networks. And that, hopefully, will be the enabler; that’s a bet we are making. So the other elements of the system are its network and then the memory. And do you build a full scale-up system at data-center scale? Certainly you need it for training, for backpropagation and forward passes, but inference can be much more distributed and shardable, and it’s time to rethink what kind of systems you want for an inference-only-dominated infrastructure. The other dimension to think about is we’ve gone from a single memory type to multiple memory types, so do we need four different types of memory to deal with a variety of layers, or just two, or one?

That’s a lot of debate in the technical community, but that’s a critical decision that will happen. So a way to think about this: we think of the entire system not from flops and GPU compute. GPU compute and CPU compute are needed, but really, what does the memory hierarchy or memory system look like? There is a physical view, because that dominates the cost function and the power function, but equally, at the same time, you have to represent it from a performance standpoint: you are caching lots of different data for computing. Think of the KV caches on the LLM side, the in-memory representations of much of the data from a performance standpoint. So that’s a layer that is continuous, is rich in technical innovation, and where we hope to have an influence as well as probably make a mark.

And then the large part is the logical view of memory, especially deep context. You want to go from session to session, location to location, and keep your memory state. You want to be able to switch models and have some of the memory state. All of these consume various layers of the logical and the physical layers of memory. So that’s what we think about. So net, putting all this together, we think of taking a bet with Ethernet, taking a bet with memory, and building infant-scale compute for population scale, like in this case India, but also in certain key verticals like health sciences and others. So there’s another important element we want to highlight.

Let me take a quote from Jensen.

Jensen Huang

One of the applications that is my favorite is just good old-fashioned data processing, structured data and unstructured data, just good old-fashioned data processing. And very soon we’re going to announce a very big initiative of accelerated data processing. Data processing represents the vast majority of the world’s CPUs today. It still completely runs on CPUs. If you go to Databricks, it’s mostly CPUs. If you go to Snowflake, mostly CPUs. SQL processing at Oracle, mostly CPUs. Everybody’s using CPUs to do SQL, structured data.

Renu Raman

So taking a cue from what he’s saying: historically, databases and SQL all run on CPUs, and that will remain the case for a variety of reasons. So that’s an important metric in terms of why we believe the new systems we compose going forward need to have a happy blend. There are ways to design systems for the hyperscalers, but there is also a whole category of use cases and customers on the private side, where they don’t need 100,000 machines but smaller-scale machines; it needs to have a happy blend of CPUs and GPUs. That’s the main point. So in that context, we have taken a position to start working in partnership with AMD, because they’ve got the x86 CPU assets and a compelling GPU roadmap, as well as an architecture that supports both the network side and the memory side. They have higher memory capacity for LLMs: it started with 256 GB of HBM, which supports 128-billion-parameter models at least; now it’s going to go to 288 and 512 in no time, which means we can fit fairly sizable models. That enables one to apply classical distributed-systems principles: a single node that captures most of the workload for most customers, and to optimize on that.

So coming back, before we get into what we do in Proximal, I want to emphasize the partnership with UC San Diego. They have a data center, as well as I told, it’s a supercomputing data center for research for NSF and DARPA, where we are doing some of the work in terms of the hardware level at the middle layer, in terms of the compute kernels, as well as in the inference engines, as well as the use cases, as I said, because there’s a data science institute, AI for education, to transform the undergrad and graduate level programs using the same tools to have advanced research capability, as well as for health sciences. So with that, I think that is a part to set the motivation for the future of the field.

Now, on to what we’re doing in Proximal Cloud. The next phase we want to go into is specifically what we are launching: the four layers, the key components of what we are building and delivering to many cloud partners in India, starting with. There’s also a why-India question. I’ll say one aspect: India demands an extremely low-cost infant-scale compute at population scale, and that’s a challenge. So we really are excited to work on that problem to start with. So the first thesis: why do we need compute other than the cloud? I think the best way is to quote Michael Dell telling you what he sees.

Michael Dell

Yeah, so in the last year we delivered a little over 3,000 of these Dell AI factories, and those are increasingly going to enterprise and commercial customers that want to bring the AI to their data, not the data to the AI. There's just a ton of data that is still on-prem and being generated on-prem.

Renu Raman

If you have a particular question in a domain you understand, we can try it out after this. So with this we enable interactive learning for the students, contextualized intelligence, and of course instructor empowerment. The way it will look and feel is like a Jupyter notebook; on the extension side will be the research content, the arXiv papers for them to use. It's an add-on; it doesn't have to be integrated. It's a commercial AI chat, if you will. The next example would be MRI images. Unfortunately, I'm not able to log in remotely to that right now, and the other one I had as a local copy, so I'm not able to show the MRI images right now.

So at this point, I want to summarize: what is Proximal? The word Proximal means bringing compute closer to your data. It means sovereign to the nation, the region, or the business that cares about it. It also means we bring compute closer to memory, and compute closer to where the business is. That was the thesis. We are not doing this alone; we are doing it with technology partners, and we have some key customers and partners. With that, let me give an example for a given use case. I'll bring up Lalit Bhatt, who is a director heading the India office for PharmEx, a key partner.

Lalit Bhatt

Thanks, Renu, and thanks to Proximal Cloud for giving us the stage here. First I'll talk a little about what PharmEx stands for, and then about why local compute and all these things are becoming important in this space. PharmEx is a comprehensive AI stack. On the left-hand side we have a lot of in-field sensors: a complete, comprehensive platform with not only soil-moisture sensors but dendrometers and multiple other sensors. We also have imaging capabilities, taking images via satellite and drones. And we have an autonomy stack; we have just acquired an autonomous electric tractor.

Basically these are pretty big machines. They might look like transformers, but they are almost 70-80 horsepower machines, and we are putting our autonomy stack on them so that they will run completely autonomously. So I'll just run a small clip. Again, I think this is very standard; everyone understands that to do AI you need data. Then what becomes important is how efficient you are at running inference on that data. We are dealing with a huge amount of data, and that's where we are looking into these technologies to reduce our cost. Everyone understands that in agriculture it is very difficult to ask a lot of money from the farmer, so the way to make our operations more efficient is to be very effective at dealing with large amounts of data and running inference on top of it. Essentially, we get a lot of data from both the imaging side and the sensor side, and then all our engines run on it, leading to diagnostics and recommendations. This is just an example of the kind of thing we do with our customers.

You can see here complete, or autonomous, irrigation scheduling. A lot of data points go into those models to create the schedules: anomaly identification, crop stresses, yield predictions, frost predictions, and we have even worked on a soil-percolation model. It depends on what sensors you take. In India, for example, we sell a two-foot probe with four sensing points, and with the whole controller unit it is about 45,000 per unit. We usually recommend one unit per hectare in India, though that changes with the variation of the soil; it has been a good ballpark.

So yeah, that's it. The whole theme is that we are also looking to really reduce our inference cost, and that's where Proximal Cloud comes into the picture. Thank you.

Renu Raman

Thank you, Lalit. Okay, next we'll have Lalit Bhatt, Director at Divium, who is a key partner, and as I mentioned earlier works on model selection and runtime optimization that is integrated, or will be integrated, into our stack. So, Lalit.

Lalit Bhatt

Hey, good afternoon, everyone. Let's address the hard truth out there: 90% of Gen-AI pilots never make it to production. Not because the demo was bad or the models were weak. It's primarily for three reasons. Number one, quality is undefined. What's good for one use case is not necessarily good for another, and there's no standardized way to do evaluation or regressions. Number two, costs are unpredictable. Whether you pick the cheapest model or the best model, prices range over something like 10 to 50x, and the moment your application goes into production and hits real traffic, costs spike. There are AI engineers running experiments and trying to tune this.

But model selection is always a moving target: there are always new models coming that fix one thing and break something else. Without addressing all three, it's very difficult for an enterprise to take its pilots to production. That's why we built Divium. Divium is the only inference layer built on quality. Divium defines measurable evals aligned to each use case, and it optimizes every incoming query to select the model giving you the best quality per dollar. Divium also automates the entire model-selection process by continuously evaluating new models, deprecating the old ones, and migrating you to new ones; if we find something better, we auto-upgrade without breaking production. Evals first, routing second, and that's what makes Divium different from every other routing platform out there. Divium is the only inference layer with customer-specific intelligence. Your apps can be AI agents, RAG pipelines, or multi-agent workflows, and the LLMs can be the standard OpenAI or Anthropic models, your own fine-tuned models, or deployed open-source models. We sit right in between and provide you a single API.

We continuously evaluate each incoming request, route it to the model giving the most optimal performance, and give you detailed visibility into which models are working, how your agent is performing, and what the overall quality is. Remember, Divium is trained on your data, your agents, and your quality; there's nothing generic out there. And this isn't just theory: we've already proven it across multiple deployments. For India's largest travel aggregator, which runs a conversational shopping assistant on its application homepage, we cut costs by more than 60%. For one of India's leading e-pharmacies, the customer-support chatbot needed lower latency; we ended up reducing cost by 30% and latency by 30%,

leading to a case-resolution improvement of 95%. As you can see: different use cases, different industries, but the result is the same, lower cost and better outcomes. And we understand enterprise realities. You can keep your data secure; we have flexible deployment options, be it SaaS, privately hosted, or on-prem clusters. You stay in control. If you're trying to take your AI pilots to production, feel free to reach out to us. Thank you.
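As an editorial sketch of the "best quality per dollar" routing idea described above, with entirely hypothetical model names, eval scores, and prices (this is not Divium's actual API or algorithm):

```python
# Hypothetical quality-per-dollar router: pick the cheapest model that
# clears a use-case-specific quality bar, falling back to the best model
# when none clears it. All names and numbers are illustrative.
from dataclasses import dataclass

@dataclass
class Model:
    name: str
    eval_score: float    # use-case-specific eval score, 0..1
    usd_per_mtok: float  # price per million tokens

def route(models: list[Model], min_quality: float) -> Model:
    """Cheapest model meeting the quality bar; best model otherwise."""
    ok = [m for m in models if m.eval_score >= min_quality]
    if not ok:
        return max(models, key=lambda m: m.eval_score)
    return min(ok, key=lambda m: m.usd_per_mtok)

catalog = [
    Model("large-closed", 0.95, 10.0),
    Model("mid-open", 0.90, 1.5),
    Model("small-open", 0.70, 0.3),
]
choice = route(catalog, min_quality=0.85)
print(choice.name)  # mid-open: cheapest model above the bar
```

A real system would re-run the evals continuously as new models appear, which is what lets routing decisions change without breaking production.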

Renu Raman

Thank you, Lalit. So we talked first about application use cases, one in education and one in agriculture. Second, how we are bringing optimization to the system stack, some of it ourselves and some with our partners. Third, we want to bring customers, many of them mid-market and small as well as large ones, onto our platform. I'm happy to introduce Sandeep Kumar, who comes from the venture builder Instant System, a company we partnered with here in Delhi to take this to a variety of customers, small, medium, and large, with higher velocity. Let him describe what they can do and how we partner.

Sandeep Kumar

Hello, everyone. I'm Sandeep Kumar from Instant System. We are a Silicon Valley-based venture builder. We do not just build startups, we grow them. We are partners in every domain of a startup, be it engineering, product, or marketing; we give them a full blueprint to be a successful startup, and we co-invest so that we are there in every step of their journey. We are sometimes confused with an incubator, but we are a venture builder: we actually help at every step of your startup. Within Instant System I am mostly responsible for a company called VanEye. We usually do not disclose the names of the companies we partner with, to protect IP and confidentiality, but to show what our capabilities are and what we have been able to build so far, I'm taking this use case. This company has raised nearly $200 million from top investors, including SoftBank. We are building conversational AI software here and dealing with real use cases and real challenges, mostly for financial-domain industries, though all of these solutions are also generic for analytics-based industries. So I'm going to talk about some of the challenges that are common to every AI-based solution.

We've been able to identify these challenges and solve them for this particular use case. One of the biggest challenges every AI-based product faces is hallucination: LLMs always try to answer your question irrespective of how much context they have. We've been able to solve this to a very good extent, and our system is almost 99% reliable; it does not hallucinate. That was the biggest problem we solved. The next challenge is disambiguation: in spite of providing context, sometimes the system cannot disambiguate between specific terms that exist in different domains.

That's also a problem we've been able to solve. Very closely related to the theme of this session, data security and data privacy are major industry concerns that we've addressed: data privacy and data access control are managed at the row level, or in more technical terms, at the object level. We've tackled that problem efficiently, and it's already running and working fine. Evaluation and quality management is also one of the key areas

that we need to solve as part of the venture we are building, and we've been able to solve it very efficiently. Another is reliability: since we are talking about financial systems, the system has to be reliable every time. You cannot send a million dollars to someone's account by mistake; that doesn't work in the financial world. And you cannot report losses instead of revenue, or vice versa, because you cannot survive in that world with hallucinated or incorrect data. With our advanced architecture, we've been able to solve that problem as well.

There's a long list of problems we've solved, but I'll cut it short. The system we've been building performs reliably, and we've kept a check on its cost and efficiency. That's how we've been able to serve different audiences and customers from different niches. So that's what our theme is: we are a venture builder. Please feel free to reach out to us; we'd love to talk about your startup. We don't work with only a select few startups; you are all welcome to reach out, and we can discuss everything we are working on. Thank you so much.

Renu Raman

Thank you, Sandeep. I think that wraps up what we're doing in Proximal and what our partners and customers are working on in the early phases. We have partners in the U.S. like UCSD in life sciences, health sciences, and education, and here in agriculture, with more to come. It turns out that the Government of India's initiatives in education, health, and agriculture coincidentally align with this; it was not planned, it just turned out that way. With that, I can take any questions. We also have a small panel session we can go to. I don't know if Piyush has come or not, but I think there's a question here.

We have Arya here from Infosys, a Senior VP at Infosys. Please.

Audience

Hello, my name is Arya Bhattacharjee. I am an entrepreneur from Silicon Valley, and as Renu said, I am driving the semiconductor and AI vision for Infosys from the United States and India. The reason I am here is that, as Renu very correctly said, there's an important question: what's in the future for India, and how can India capitalize and make a mark in this journey? There's no short answer, but I can tell you what we are trying to do at Infosys, because if India is going to win this Semicon 3.0, or 4.0, or 2.0, I don't know, it has to be in software, it has to be in AI. The chip-building side is going to take some time.

Renu said that 80% of data is on-premise, and on the semiconductor side this is absolutely true; more than 90% of the data is on-premise. So the whole journey is how to take that data and create solutions through an agentic-AI approach and distributed computing, and, by owning the architecture, to lower inferencing cost: that is the main challenge. To answer the question Renu asked me, what's the future of India? At Infosys, at least, we have selected a domain. Semiconductor, as I was just discussing with him, is a large domain, and we have taken leadership with some major clients right now; I can't talk about details, but we're using agentic AI on-premise and delivering productivity and AI solutions, improving chip-making productivity by at least 25%. Every day in a semiconductor fab you save $10 million, benchmarked for a 7-nanometer type of technology, not even 1.9.

So with that, good luck to Renu and I look forward to collaborating. Thank you.

Renu Raman

Thank you, Arya. Now we welcome Abhishek Singh, but before that, just come on board. To summarize what we do: the graph underneath is, I think, the most important AI factor. Organizing the data layer turns out to be probably the most complicated thing, spanning the enterprise so that it can meet the intelligence, and that's the stuff I think we'll do a lot of. We still don't really have deep research in a corporate context. We do; that's what Copilot is about. But most people day-to-day do not have this. So are they just underusing AI that exists? Yes. In fact, it's interesting you brought that up, because to me that is the killer feature.

So the biggest thing we did was take this graph that is underneath what I think is the most important database in any company: underneath your email, your documents, your Teams calls, what have you. It's the relationships that, by the way, power the AI. Organizing the data layer. That's the best summary: obviously Satya wants to do it in the cloud, and that will happen, but you also need to have it on-prem or near-prem, isolated and sovereign as well, with the same capabilities. In a sense, that's what we bring to the enterprise, if you will. Any other questions before we go to the panel session?

Abhishek Singh

Thanks for having me here. This is Abhishek, founder of ZetaVault. We did a lot of work on LLM acceleration, which means we offload large language models to specific chips and custom silicon and thereby get inferencing gains. We have Renu here, who has a wealth of experience on the distributed-computing side. We were supposed to have a panel discussion, but I thought I would pick his brain on the challenges and the kinds of changes he has seen in the industry. So, Renu, you have been part of Sun Microsystems and the early pioneers of distributed computing. From Sun, which was maybe distributed systems 1.0, to Linux, which pretty much democratized the entire compute space and brought Linux and x86 everywhere; now almost every embedded device, every computation, pretty much happens on Linux.

That was distributed systems 2.0. And now we come to the distributed-computing space with open models. Open source has played a big role in the proliferation of distributed computing. What do you foresee, or what do you envision, open models doing for distributed computing? Are we going to see a distributed computing 3.0?

Renu Raman

Yeah. So that's a fundamental thesis, and we are, in a way, part of that continuum to some extent. Not to take anything away from how NVIDIA designs, but there is a clear bifurcation going on right now, as we said, between training and inferencing. And then there are open-source models, and a variety of customers and use cases will use and need open-source models. There has always been a history of open and closed in every transition. Go back to the 80s, or look at what enabled the cloud: hypervisors. There was KVM and VMware.

The same thing will apply: there will be open models and closed models. But the way I like to think about it is that models are a new abstraction layer separating the underlying computing needs from everything above. Hypervisors separated the physical machine from the virtual machine, and operating systems, Unix at that time, did the same. In the same way, models are the abstraction layer that enables a higher degree of innovation from both closed and open models. The closed ones will probably be innovated within OpenAI and Google, but the rest of the world will take the open-source models, like what happened with Linux, and innovate. It's not just going to be an NVIDIA GPU or an AMD GPU; there could be a plethora of GPUs, country-specific, region-specific, domain-specific.

Anything can happen over time.

Abhishek Singh

That's a wonderful take. One of the things we have been wondering about is the latency you talked about in the various scientific and other applications you're working on. When we build solutions for our customers, we build a lot of natural-language-to-query-processing solutions. We have been able to get to maybe a sub-minute solution, which is acceptable to the customer, because instead of weeks or days he gets answers to his queries in less than a minute. But even a minute is not sufficient, right? For really interactive queries you want sub-millisecond, or maybe sub-second, responses.

What are your thoughts on that? Is it even possible to provide, to a population or a customer base as large as India's roughly 1.5 billion people, at a very low cost, maybe 200 rupees per month, query processing at a scale that is sub-second?

Renu Raman

I think that's a very good question. Sometimes framing the problem is more important than the answer. An interesting way to frame it: if you go back and look at history, why did Google succeed? A fundamental decision they made on the toolbar was that every query response has to be in 20 milliseconds. Nobody thought about it prior to that; it's obvious today. But that key proposition, or definition, or question, asked maybe by Larry Page or Sergey Brin, whoever it was, led to what we see as Google today on the back end: a huge amount of infrastructure to satisfy a 20-millisecond response to any query.

To me the same thing applies today. Maybe 20 is too hard, so I'll arbitrarily pick 120; I had a simple demo or animation I was trying to show where within 120 milliseconds you have the answer. Today, if you go ask a question, it takes seconds, sometimes longer. We are all impatient; we want the answer quickly. When I ask you a question you don't say "let me think and come back"; you give the answer. For a very deep question you can think and come back, but we can also throw more computing resources at getting the answer. And it's not just hardware; it will be algorithmic and other improvements. To me that's the benchmark: 120 milliseconds to any query, for anybody.

There's a global context and an India context. India provides ample opportunity: 1.4 billion people, if you can deliver at a cost point like 200 rupees a month, with any query handled in 120 milliseconds. That's a long road, but if you can meet that objective in 10 to 20 years, it serves a lot of people and will also drive a tremendous amount of innovation. That's why, when somebody says population scale, India has a unique combination of the population-scale problem and the cost problem. Hopefully there are enough people, as Arya said, in Semicon 3.0 and other innovations, to build India's own sovereign, lowest-cost, fastest answer to a question asked in any language.
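A quick worked example of the cost side of this target (the 200-rupee price and 1.4-billion population are from the talk; the per-user query volume is an editorial assumption):

```python
# Editorial sketch: what a 200-rupees-per-month price point implies per
# query, and what aggregate load population scale means. The per-user
# query volume is an assumed number for illustration only.
rupees_per_month = 200
queries_per_user_per_day = 50  # assumed usage
days_per_month = 30

cost_per_query = rupees_per_month / (queries_per_user_per_day * days_per_month)
print(f"budget per query: {cost_per_query:.3f} rupees")  # ~0.133 rupees

# Aggregate sustained load if 1.4 billion users issue that many queries.
users = 1.4e9
qps = users * queries_per_user_per_day / 86_400
print(f"sustained load: {qps:,.0f} queries/second")
```

Under these assumptions the budget is roughly an eighth of a rupee per query at close to a million queries per second, which is why both the hardware cost point and the 120-millisecond latency target have to be engineered together.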

Abhishek Singh

An interesting take on that. One thing that keeps coming up is the scale of the global corporations, the size they've been able to reach, with AI gaining mainstream attention in India. And there is a parallel theme on the semiconductor side: the government is putting a lot of focus on it, private players are putting a lot of focus on it, and we have esteemed people in the audience like our Infosys guest. The question everybody keeps wondering about is: with AI speeding things up, with the productivity gains he's talking about, what kind of corporations can come out of India? Can we see an NVIDIA coming out, or, I don't know, a Palantir, or a Supermicro, or even a new version of Sun Microsystems, just because there is so much emphasis on AI and the Semicon side?

I'll let Renu talk, and then maybe you can also give your take on this particular question: what kind of corporations can come out? Your take.

Renu Raman

Can an SAP come out of this AI transition? Why not? To give you some raw numbers: every gigawatt of power will require $25 billion worth of compute, memory, network, and storage. So if India is going to do 10 gigawatts, that's $250 billion of hardware. That supports multiple Supermicros, or sustains a semiconductor ecosystem at that scale. Certainly the investment going into power, which is a long-lead item, is important, but the next layer provides the economic value to host hardware-systems companies; the HPs, the Dells, and the Supermicros can emerge. You can go up each layer of the stack. The next layer is the application tier; Proximal is that. Maybe we could become the SAP of tomorrow.
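The raw numbers quoted can be checked directly (a trivial editorial sketch; the $25B-per-gigawatt figure is the speaker's):

```python
# The speaker's rule of thumb: $25B of compute, memory, network, and
# storage per gigawatt of data-center power.
USD_B_PER_GW = 25

for gw in (1, 10):
    print(f"{gw} GW -> ${gw * USD_B_PER_GW}B of hardware")
# 10 GW gives the $250B figure cited in the talk.
```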

That could be a Palantir at the application tier, and not just Palantir, any other company. If you can solve the technology, the scale, and the cost economics, it's not restricted to India; it can be global. Unlike the China model, which ended up as a walled garden, India has the opportunity to make in India and make for the world, which is much better. But you have to think bigger and, more important, take bolder bets and go for the long haul: not 5 or 7 years, these are 10-to-20-year cycles of change.

Abhishek Singh

A very interesting take, and thanks a lot. I would also like your opinion on this particular question: do we see NVIDIAs, SAPs, and Oracles coming out?

Arya Bhattacharjee

Thanks. On the semiconductor data side, for example, we've recently been working with some very large companies, and I want to give a specific example. They are just ingesting data right now; I'm talking about a fab, not design. They have 7 petabytes of data ingested, and they don't know what to do with it. And like I said, a typical fab manufacturing facility is worth at least $10 billion.

It has thousands of steps, and it takes about 120 days to make a chip. So that's a $10 billion facility spending 120 days producing a wafer, and there are defects, design issues, all of that. If you take the data, from basic information, run-time and real-time data, defects, both soft defects and hard defects, because a chip that doesn't fail outright can still be slow, and slow means no money, so slow is a failure too. Collecting all that data, classifying it, understanding it, using agents in an edge-computing way, because you cannot solve this in a server, and then feeding it back to the design infrastructure: that is the work. Design time has also shrunk a lot, and yields are going up. Thirty years ago, when I was at Intel, we were talking about die sizes of maybe one centimeter by one centimeter.

Today, on a 300-millimeter wafer, NVIDIA's latest wafer-level chip is about 20 centimeters by 20 centimeters. That level of yield and reliability is unimaginable without the use of AI. I can go on and on, but if India has to win, I don't think India needs to become a Palantir, and India does not want to become a slave shop. The way I explain it in one line: Palantir's gross margin is 95%; an Indian company's gross margin is 30%. Can we build a business at 50% gross margin, where the domain expertise India provides, plus the data available, takes the technologies we talk about and implements them in real practice? That's where India can win:

it is execution with the best technology. Thank you.

Abhishek Singh

Thanks, everyone. I have one question on the venture side. All these technologies require a lot of investment before they can become fruitful. I heard somewhere that the government of Karnataka, and I don't mean to demean them, put about 20 crores of funds toward funding startups. Meanwhile, Meta is throwing something like $100 million at a single engineer it's hiring. Twenty crores for funding hundreds of startups versus $100 million given to one engineer: there is a huge mismatch. So the question is, for Indian companies, do the venture capitalists or private equity have deep enough pockets to continuously fund them for hundreds of millions or billions of dollars, so that an NVIDIA, or an AMD, or, I don't know, a Sun Microsystems can emerge?

Renu Raman

Actually, I don't want to answer; I want you to answer. Answer your own question.

Abhishek Singh

I'll answer my own question, then. Yes, it will require that kind of investment. This is a topic I touched on a long time back: ISRO has been funded continuously. Initially, ISRO's rockets would all land in the ocean, but over a period of time they gained competence, and they are among the top four in the world right now. That kind of continuous, sustained support is needed for whatever industry we pick, whether AI or Semicon. We need the private players, and we need the government to support it to the end; that's when the key players and winners will emerge.

Renu Raman

I think your question has two parts. Sorry, there was a public announcement, an interruption here. One is that there's a mismatch, a demand-supply gap in skills at the model companies, if you

Abhishek Singh

Hopefully, yeah.

Renu Raman

So why did you do it and what do you think? That’s why I asked the question back to you.

Abhishek Singh

It’s good. It’s fun to build for India, by the way, and build from India, right? Build for India, build from India. That’s why we are here. And that’s why all this conference is here. All the discussions are happening.

Abhishek Singh

Thanks a lot, Renu, for all the wonderful insights. Last call: does anybody from the audience want to ask Renu a question?

Audience

Yes, sir. Thanks, sir. My question is this: you shared that if 10 gigawatts of business comes to India, that means $250 billion worth of equipment will be purchased, or something will happen, in India. So how can we ensure that? Leave aside 10 gigawatts; let's start with 1. How will that business come to India?

Renu Raman

Today we already see that most of the hardware is either bought by the hyperscalers, who have some capacity, and then Dell and HP are the largest OEMs; Supermicro is behind, I guess. Most of the hardware-level systems are manufactured in Taiwan and other places and brought here. There are emergent players: VVDN, and Sanmina has a manufacturing plant in Chennai. Who is going to come and do Make in India? I don't want to steal anyone's thunder, but there are emergent players seeing the economic value at that scale and starting to design. And we have already seen it; I don't know the details of all the phone manufacturing that's happened, but the ecosystem of chassis and systems building, board design, and design capability was there, while manufacturing, operations support, and all of that was not.

So I do expect that to start happening. That’s why we started working with CDAC and, to some extent, VVDN. We do see the opportunity: there’s at least a $300 to $500 million opportunity. The interesting aspect is that the Indian public market is also valuing these things fairly highly; look at NetWeb and others. You can’t go and raise money on NASDAQ for these kinds of businesses, but you can certainly do that in India. So it’s an interesting point in time: there’s demand, there’s a need, and there need to be enough people willing to invest. There’s also probably a way to scale the business. I don’t view going public as an exit.

Really, I’m viewing going public as a way to raise money to scale the business. So there’s enough financial muscle getting built at all stages. But the question is, are there enough people funding at the early phases to fund some of these, right? That, I think, has to come together. I’m on the entrepreneur side, not the venture side; I’ve played both, but that has to come. My point in answer to Abhishek’s question is: at 10 gigawatts, it’s going to be multi-hundreds of billions of dollars across all the layers of the stack, and there should be enough investment going in. And if you look at what has happened in China, there’s a different way to drive that capitalistic structure, right?

They have taken a centralized model, but enabled a lot of districts and regional people to go invest. Look at the cars: how many car companies are there? I’m not saying you should follow the same model, but there should be enough early-stage investment at various layers of the stack. So, the opportunity, the exit

Abhishek Singh

Thank you. Thank you, Renu, for all that, and everybody who has participated. Thanks for coming, guys. We’ll close the session here, so thanks a lot. Thank you. Thank you.

Related Resources — Knowledge base sources related to the discussion topics (38)
Factual Notes — Claims verified against the Diplo knowledge base (4)
Confirmed (high)

“Renu Raman announced the recent launch of Proximal Cloud’s offering and described a flurry of activities targeting the Indian market.”

The knowledge base notes that the team just launched their offering and is focusing on activities in India, confirming the report’s statement [S24].

Additional Context (medium)

“Proximal is forging a hardware‑software co‑design partnership with AMD that provides a balanced CPU + GPU stack for AI workloads.”

AMD’s presentation highlighted that AI extends beyond GPUs and involves a full suite of hardware and software, adding nuance to the reported AMD partnership [S23].

Additional Context (medium)

“Proximal positions itself as a provider of enterprise‑private‑cloud infrastructure for India, emphasizing public‑private collaboration.”

The knowledge base discusses the importance of public-private partnerships in India’s AI and semiconductor ecosystem, providing broader context for Proximal’s market positioning [S96].

Correction (high)

“Humanity tends to under‑estimate ten‑year horizons while over‑estimating two‑year gains.”

A cited speaker stated that technology shifts are generally over-estimated both in the short term and the long term, contradicting the report’s claim of under-estimation for the ten-year horizon [S124].

External Sources (125)
S2
Oracle to oversee TikTok algorithm in US deal — The White House has confirmed that TikTok’s prized algorithm will be managed in the US under Oracle’s supervision as part …
S3
How TikTok is changing world politics — The 2025 U.S. deal may set a new precedent for navigating this complex field. Under the terms, a group of American inves…
S4
Waves of infrastructure Open Systems Open Source Open Cloud — – Renu Raman- Abhishek Singh – Renu Raman- Abhishek Singh- Audience – Renu Raman- Jensen Huang – Renu Raman- Michael …
S5
From High-Performance Computing to High-Performance Problem Solving / Davos 2025 — Azeem Azhar: Good morning, and welcome to our panel discussion today on quantum computing, titled From High Performanc…
S6
Driving U.S. Innovation in Artificial Intelligence — 15. Jensen Huang – CEO and Founder, NVIDIA
S7
Nvidia CEO Jensen Huang claims AI hallucinations are solvable; AGI is 5 years away — CEO Jensen Huang addressed the press this week at Nvidia’s annual GTC developer conference, sharing his thoughts on AI hal…
S8
Waves of infrastructure Open Systems Open Source Open Cloud — The partner presentations demonstrated practical applications across diverse sectors. Lalit Bhatt from PharmEx presented…
S9
https://dig.watch/event/india-ai-impact-summit-2026/multistakeholder-partnerships-for-thriving-ai-ecosystems — I would like to introduce, sitting on my left, Dr. Barbel Koffler, who is the Parliamentary State Secretary at Germany’s…
S10
Open Forum #30 High Level Review of AI Governance Including the Discussion — – **Abhishek Singh** – Under-Secretary from the Indian Ministry of Electronics and Information Technology Abhishek Sing…
S11
Announcement of New Delhi Frontier AI Commitments — -Abhishek: Role/Title: Not specified (invited as distinguished leader of organization), Area of expertise: Not specified
S12
GPAI: A Multistakeholder Initiative on Trustworthy AI | IGF 2023 Open Forum #111 — Abhishek Singh:I can take that, no worries. Thank you, Abhishek. The floor is yours. You can give your question. Yeah, t…
S13
Waves of infrastructure Open Systems Open Source Open Cloud — Hello, everyone. I’m Sandeep Kumar from Instant System. We are a Silicon Valley -based venture builder. We do not just b…
S14
Scaling Trusted AI_ How France and India Are Building Industrial & Innovation Bridges — I’m still in my… So thank you, thank you to all of you. Now we are actually arriving to today’s session where we are g…
S15
WS #280 the DNS Trust Horizon Safeguarding Digital Identity — – **Audience** – Individual from Senegal named Yuv (role/title not specified)
S16
Building the Workforce_ AI for Viksit Bharat 2047 — -Audience- Role/Title: Professor Charu from Indian Institute of Public Administration (one identified audience member), …
S17
Nri Collaborative Session Navigating Global Cyber Threats Via Local Practices — – **Audience** – Dr. Nazar (specific role/title not clearly mentioned)
S18
Keynote by Naveen Tewari Founder & CEO, inMobi India AI Impact Summit — “the third is is a very disproportionate rate of growth of economic prosperity because of all the factors that the level…
S19
AI expected to reshape 89% of jobs across the workforce in 2026 — AI is set to transform the UK workforce in 2026, with nearly 9 out of 10 senior HR leaders expecting AI to reshape jobs, acc…
S20
The impact of AI on jobs and workforce — The ILO’s webinar was triggered by the recent impact of ChatGPT on our society and jobs. OpenAI’s ChatGPT, in particular…
S21
Strategy — The term AI in itself has morphed over the years since it was coined by John McCarthy et al at Dartmouth University in 1…
S22
NRIs MAIN SESSION: DATA GOVERNANCE — Artificial Intelligence depends on the data system, which has to be balanced
S23
Building the AI-Ready Future From Infrastructure to Skills — “Full stack, meaning hardware and software being able to deliver to the customer solutions, is what AMD is aiming for.”[…
S24
https://dig.watch/event/india-ai-impact-summit-2026/waves-of-infrastructure-open-systems-open-source-open-cloud — And it’s got thousands of steps. It takes about 120 days to make a chip. So $10 billion for 120 days producing a wafer, …
S25
Partnering on American AI Exports Powering the Future India AI Impact Summit 2026 — First of all, Jacob, let me just say congratulations on this India and U.S. Paxilica signing today. This is certainly a …
S26
Driving Indias AI Future Growth Innovation and Impact — “Investment also includes energy infrastructure, because without energy, there is really no compute infrastructure you c…
S27
Artificial Intelligence & Emerging Tech — Connectivity issues in developing countries for leveraging AI are also highlighted. This negative sentiment emphasizes t…
S28
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Kiran Mazumdar-Shaw — “When it comes to discovery, we need to develop foundation models for proteins, RNA, cellular circuits and systems biolo…
S29
From KW to GW Scaling the Infrastructure of the Global AI Economy — Hundreds of thousands of dollars. Real money. Right? Real money. So while you as a cloud provider might be thinking, and…
S30
To share or not to share: the dilemma of open source vs. proprietary Large Language Models — Bilel Jamoussi:Thank you very much for the great introduction and good afternoon. I really admire your energy to stay wi…
S31
https://dig.watch/event/india-ai-impact-summit-2026/how-small-ai-solutions-are-creating-big-social-change — And then the other one is a complementary side, which is working with the ecosystem, working with partners in Africa, in…
S32
https://dig.watch/event/india-ai-impact-summit-2026/indias-ai-future-sovereign-infrastructure-and-innovation-at-scale — I think the biggest challenge in not making AI aligned is that we will become products, not even consumers, right? We wa…
S33
Book launch: What changes and remains the same in 20 years in the life of Kurbalija’s book on internet governance? — 3. **Processing Architecture Shift**: The transition from CPU-based to GPU-based computing, fundamentally altering how c…
S34
Panel Discussion Data Sovereignty India AI Impact Summit — This example demonstrates what Gupta termed “partnership not dependence” – utilizing “the best of foreign technologies” …
S35
Panel Discussion: 01 — Unexpectedly, both speakers identified knowledge gaps and institutional capacity as more significant barriers than techn…
S36
Next-Gen Industrial Infrastructure / Davos 2025 — There are significant disparities in global investments in compute power, with the US and Asia leading, while Europe and…
S37
Laying the foundations for AI governance — – The four fundamental obstacles identified by the moderator: time, uncertainty, geopolitics, and power concentration R…
S38
Global AI Policy Framework: International Cooperation and Historical Perspectives — Werner identifies three critical barriers that prevent AI for good use cases from scaling globally. He emphasizes that d…
S39
Scaling AI Beyond Pilots: A World Economic Forum Panel Discussion — Key barriers to scaling include the need for high-quality data foundations, reimagined business processes, and comprehen…
S40
Keynote-Jeet Adani — Adani announced that “earlier this week, the chairman of the Adani Group made one of the most transformative announcemen…
S41
Keynote Adresses at India AI Impact Summit 2026 — The speakers demonstrate remarkable consensus across multiple dimensions: the strategic importance of U.S.-India partner…
S42
AI for Good Innovation Factory Grand Finale 2025 — Infrastructure | Economic Predixion employs both a hardware-with-software strategy for independent hospital deployment …
S43
Fireside Chat Intel Tata Electronics CDAC & Asia Group _ India AI Impact Summit — He advocated for a layered approach to sovereignty, focusing on controlling critical chokepoints whilst accepting strate…
S44
Indias Roadmap to an AGI-Enabled Future — This comment shifted the entire discussion from a hardware-centric view to an algorithm-centric one, giving hope that In…
S45
Secure Finance Risk-Based AI Policy for the Banking Sector — -India’s Strategic AI Positioning: Discussion centered on how India should position itself globally in AI governance, le…
S46
Agents of Change AI for Government Services & Climate Resilience — Srinivas Tallapragada introduced an important distinction between strategic sovereignty and technical sovereignty that p…
S47
The Foundation of AI Democratizing Compute Data Infrastructure — The emphasis on community participation, data sovereignty, and alternative technical architectures suggests AI developme…
S48
Democratizing AI Building Trustworthy Systems for Everyone — “of course see there would be a number of challenges but i think as i mentioned that one doesn’t need to really control …
S49
WS #462 Bridging the Compute Divide a Global Alliance for AI — Elena emphasizes that sustainable collaborative models need credibility and trust to maintain participation and continue…
S50
Open Forum #53 AI for Sustainable Development Country Insights and Strategies — Aubra Anthony: Yeah, thanks, Yuping. And, yeah, a very auspicious time, really. I mentioned earlier some of the issues t…
S51
AUDA-NEPAD White Paper: Regulation and Responsible Adoption of AI in Africa Towards Achievement of AU Agenda 2063 — Additionally, in an AI-driven economy, it will be necessary to take practical steps to implement policy considerations t…
S52
Generative AI and Synthetic Realities: Design and Governance | IGF 2023 Networking Session #153 — Policy, regulation, and market rules were mentioned as important factors to address in order to limit the circulation of…
S53
Cyber Resilience Playbook for PublicPrivate Collaboration — – Within a given country, there is often intense competition for the promise of enormous investment by companies buildi…
S54
Research Publication No. 2014-7 March 17, 2014 — Improved interfaces are not only necessary at the data level, but also with respect to the normative spheres, at the val…
S55
HETEROGENEOUS COMPUTE FOR DEMOCRATIZING ACCESS TO AI — Artificial intelligence | Information and communication technologies for development Arun advocates for moving inferenc…
S56
https://dig.watch/event/india-ai-impact-summit-2026/ai-algorithms-and-the-future-of-global-diplomacy — For example, AI in healthcare is a fantastic opportunity for. Indo -German cooperation, there is fantastic data availabl…
S57
Waves of infrastructure Open Systems Open Source Open Cloud — Jensen announces an upcoming initiative to accelerate data processing, signalling a shift toward GPU‑based workloads.
S58
I NTRODUCTION — – Review and enhance the existing data governance framework to ensure comprehensive coverage of the data management life…
S59
African Union (AU) Data Policy Framework — Data processing roles as a form of security protection should be specified in policy by policymakers. Member States sho…
S60
Developing data capacities for policy makers and diplomats — Asked about the single most important capacity development need of policy makers and diplomats, panellists put awareness…
S61
Building Population-Scale Digital Public Infrastructure for AI — Good afternoon. We have an exciting panel discussion ahead. Let me start off with where Nandan stopped. Hundred pathways…
S62
Overview of AI policy in 10 jurisdictions — Summary: Brazil is working on its first AI regulation, with Bill No. 2338/2023 under review as of December 2024. Inspire…
S63
Driving Indias AI Future Growth Innovation and Impact — Dr. Vivek Mohindra from Dell Technologies presented a comprehensive AI blueprint built upon three foundational pillars d…
S64
Building the AI-Ready Future From Infrastructure to Skills — And he said, Tim, India is software. This is what we do. He said, you’re going to be in front of the best people in the …
S65
The Virtual Worlds we want: Governance of the future web | IGF 2023 Open Forum #45 — Another major challenge highlighted is network latency in the context of virtual reality (VR) and extended reality (XR) …
S66
Artificial General Intelligence and the Future of Responsible Governance — The participants generally agreed that AGI represents AI systems capable of performing any human task at professional le…
S67
Waves of infrastructure Open Systems Open Source Open Cloud — Focus on Government of India initiatives in Education, Health, and Agriculture as primary market segments Proximal Clou…
S68
https://dig.watch/event/india-ai-impact-summit-2026/waves-of-infrastructure-open-systems-open-source-open-cloud — Thank you. what we’re doing in Proximal Cloud. The next phase we want to go into specifically what we are launching in t…
S69
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Jeetu Patel President and Chief Product Officer Cisco Inc — First, India possesses “a huge talent pool of young, vibrant, intelligent, smart, educated people,” with one of the worl…
S70
Book launch: What changes and remains the same in 20 years in the life of Kurbalija’s book on internet governance? — 3. **Processing Architecture Shift**: The transition from CPU-based to GPU-based computing, fundamentally altering how c…
S71
HETEROGENEOUS COMPUTE FOR DEMOCRATIZING ACCESS TO AI — -Infrastructure Constraints and Resource Management: Significant focus on three critical bottlenecks – power consumption…
S72
Next-Gen Industrial Infrastructure / Davos 2025 — There are significant disparities in global investments in compute power, with the US and Asia leading, while Europe and…
S73
AI Infrastructure and Future Development: A Panel Discussion — Compute Capacity and Demand Dynamics Efficiency improvements will accelerate rather than reduce infrastructure demand
S74
Building Public Interest AI Catalytic Funding for Equitable Compute Access — This comment introduced a completely different perspective on the compute scarcity problem, suggesting that technologica…
S75
From High-Performance Computing to High-Performance Problem Solving / Davos 2025 — Ana Paula Assis: One example is what we are doing with ExxonMobil, for example, with their strategy and research divis…
S76
AI at 45W: Neuchips showcases energy-saving chips for LLMs — As global energy demand surges alongside AI growth, Neuchips is stepping up with energy-efficient solutions that deliver h…
S77
Scaling AI Beyond Pilots: A World Economic Forum Panel Discussion — Key barriers to scaling include the need for high-quality data foundations, reimagined business processes, and comprehen…
S78
Leveraging AI4All_ Pathways to Inclusion — The discussion revealed that many AI products remain stuck in pilot stage due to surrounding system challenges rather th…
S79
Building Population-Scale Digital Public Infrastructure for AI — -Scaling AI from Pilots to Population-Scale Implementation: A key challenge discussed is moving beyond impressive pilots…
S80
Keynote Adresses at India AI Impact Summit 2026 — The speakers demonstrate remarkable consensus across multiple dimensions: the strategic importance of U.S.-India partner…
S81
Keynote-Jeet Adani — Adani announced that “earlier this week, the chairman of the Adani Group made one of the most transformative announcemen…
S82
Powering the Technology Revolution / Davos 2025 — The tone was generally optimistic and forward-looking, with panelists highlighting opportunities for innovation and prog…
S83
AI in education: Leveraging technology for human potential — The tone is consistently optimistic and inspirational throughout, with Mills maintaining an enthusiastic and visionary a…
S84
Opening — Pace of technological progress is accelerating unpredictably
S85
Governments, Rewired / Davos 2025 — The overall tone was optimistic and forward-looking, with speakers highlighting the transformative potential of technolo…
S86
AI as critical infrastructure for continuity in public services — The discussion maintained a collaborative and constructive tone throughout, with participants building on each other’s p…
S87
Critical Infrastructure in the Digital Age: From Deep Sea Cables to Orbital Satellites — The discussion maintained a balanced tone that was simultaneously informative and concerning. It began with an education…
S88
Bridging the Digital Divide: Inclusive ICT Policies for Sustainable Development — The discussion maintained a formal, academic tone throughout, characteristic of a research presentation or conference se…
S89
AI and Data Driving India’s Energy Transformation for Climate Solutions — The tone was collaborative and solution-oriented throughout, with speakers building on each other’s insights rather than…
S90
The Foundation of AI Democratizing Compute Data Infrastructure — The tone was collaborative and solution-oriented throughout, with speakers building on each other’s ideas rather than de…
S91
Presentation of outcomes to the plenary — The event showcased the power of collaboration and innovation.
S92
Digital Cooperation and Empowerment: Insights and Best Practices for Strengthening Multistakeholder and Inclusive Participation — ## Concrete Examples of Multi-Stakeholder Success Hisham Ibrahim: I’ll also mention three quick ones, looking across my…
S93
WS #453 Leveraging Tech Science Diplomacy for Digital Cooperation — Need to showcase concrete examples and successes; AFNIC’s collaborative projects as examples of multi-stakeholder work
S94
Assessing the Promise and Efficacy of Digital Health Tool | IGF 2023 WS #83 — Deborah Rogers:I guess my closing remark would be that technology is a great enabler. It can actually be used to decreas…
S95
Building Indias Digital and Industrial Future with AI — The discussion maintained a collaborative and forward-looking tone throughout, with industry experts, regulators, and po…
S96
The Global Power Shift India’s Rise in AI & Semiconductors — The discussion maintained an optimistic and forward-looking tone throughout, with speakers expressing confidence in Indi…
S97
How to make AI governance fit for purpose? — The discussion maintained a collaborative and optimistic tone throughout, despite representing different national perspe…
S98
Sovereign AI for India – Building Indigenous Capabilities for National and Global Impact — The discussion maintained an optimistic and collaborative tone throughout, characterized by constructive problem-solving…
S99
Driving Indias AI Future Growth Innovation and Impact — The discussion maintained an optimistic and forward-looking tone throughout, characterized by enthusiasm for India’s AI …
S100
Closing Session  — The tone throughout the discussion was consistently formal, collaborative, and optimistic. It maintained a celebratory y…
S101
Safeguarding Children with Responsible AI — The discussion maintained a tone of “measured optimism” throughout. It began with urgency and concern (particularly in B…
S102
Closing remarks — The tone is consistently celebratory, optimistic, and forward-looking throughout the discussion. It maintains an enthusi…
S103
Trusted Connections_ Ethical AI in Telecom & 6G Networks — The discussion maintained a consistently optimistic and forward-looking tone throughout. Speakers expressed confidence i…
S104
AI Meets Agriculture Building Food Security and Climate Resilien — The discussion maintained an optimistic and collaborative tone throughout, characterized by visionary leadership and pra…
S105
Opening of the session — Emerging Technologies:Present both challenges and opportunities. Recent Initiatives:Provide a foundation for further pr…
S106
Opening of the session — relevance of technological innovation and the establishment of new norms to guarantee freedoms and protections online
S107
Tightening the interconnectedness of ICT, Digitalization and Industry 4.0 to accelerate Economic growth and industrialization in developing countries — Ana Paula NISHIO DE SOUSA:Ah, thank you. Yeah, so you’re absolutely right about, I would say, 35 years ago, maybe even 4…
S108
WS #51 Internet & SDG’s: Aligning the IGF & ITU’s Innovation Agenda — Jasmine Ko emphasised the need to prioritise and set achievable goals within limited resources. She suggested using desi…
S109
Empowering Women Entrepreneurs through Digital Trade and Training ( Global Innovation Forum) — The analysis identifies two remarkable entrepreneurs from Senegal, namely Awa Caba. Awa Caba is actively involved in the…
S110
Multistakeholder Partnerships for Thriving AI Ecosystems — This extends to evaluation and quality assurance, where the absence of regional AI evaluation hubs creates uncertainty a…
S111
Advancing rights-based digital governance through ROAM-X | IGF 2023 — In her closing remarks, Grigoryan reflected on the insightful discussion and offered speakers an opportunity for final t…
S112
Session — A pointed inconsistency is detected within international negotiations concerning the balance between human rights protec…
S113
Ad Hoc Consultation: Thursday 1st February, Morning session — This expression of gratitude not only served as a respectful acknowledgment of the session’s orderly progression but als…
S114
Blockchain and Biometric-based Digital Identity Solution — The session concluded without any questions from the audience, suggesting that the presentations were comprehensive and …
S115
Institute of AI Education marks significant step for responsible AI in schools — The Institute of AI Education was officially launched at York St John University, bringing together education leaders, teache…
S116
Teachers see AI as an educational tool — Teachers have long worried about ChatGPT enabling students to cheat, with its ability to produce essays and solve problems…
S117
AI cheating scandal at University sparks concern — Hannah, a university student, admits to using AI to complete an essay when overwhelmed by deadlines and personal illness. …
S118
Artificial intelligence (AI) and cyber diplomacy — Adil Suleyman:Thank you. Once again. Just one last question. What does it take for the African Union Commission’s one de…
S119
AI for Social Good Using Technology to Create Real-World Impact — Kiran Mazumdar-Shaw, chairperson of Biocon Group, presented perhaps the most visionary perspective on AI’s potential in …
S120
29, filed Jan. 22, 2010, at 9-10. — New broadband-enabled solutions are transforming how teachers and students use content and media. But copyright law must…
S121
Table of Contents — Information and Communication Technologies (ICT) applied to health and healthcare systems can increase their efficiency,…
S122
The Expanding Universe of Generative Models — Attempt to perform similar processes on images or videos has not been successful
S123
Thinking through Augmentation — He presents a theoretical scenario in which AI-driven vehicles might result in only 50,000 deaths internationally, a 90%…
S124
Fireside Conversation: 02 — So, usually in technological shifts of this type, we are overestimating. And the changes in the short term and overestim…
S125
From Human Potential to Global Impact_ Qualcomm’s AI for All Workshop — Perhaps the most transformative aspect of the discussion centred on how AI will fundamentally reshape human-computer int…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
R
Renu Raman
11 arguments · 174 words per minute · 6044 words · 2073 seconds
Argument 1
AI will impact 95% of work, driving massive productivity gains (Renu Raman)
EXPLANATION
Renu argues that artificial intelligence will affect the vast majority of jobs, delivering unprecedented productivity improvements across the economy. This broad impact will far exceed the gains seen during the SaaS era.
EVIDENCE
She notes that AI is expected to impact 95% of work, describing it as a “blast radius” much larger than previous productivity waves and linking this to the need for far more computing resources [52].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Industry surveys and ILO reports highlight that AI will reshape around 89-90% of jobs and deliver large productivity gains, supporting the claim of a 95% impact [S18][S19][S20].
MAJOR DISCUSSION POINT
Scale of AI-driven productivity
Argument 2
India needs ultra‑low‑cost, infant‑scale compute for population‑scale AI (Renu Raman)
EXPLANATION
Renu highlights the unique challenge of delivering very cheap, small‑scale compute resources that can serve India’s massive population. She frames this as a core problem for the Proximal Cloud initiative.
EVIDENCE
She states that India demands an “extremely low-cost infant-scale compute at population scale” and that this is a key focus for their work [113-115].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The Waves of Infrastructure discussion emphasizes India’s demand for extremely low-cost, infant-scale compute at population scale, and notes the need for energy-linked compute infrastructure investments [S1][S26].
MAJOR DISCUSSION POINT
Cost‑effective compute for mass adoption
AGREED WITH
Lalit Bhatt
Argument 3
Successful AI systems require tight hardware‑software co‑design; “make your own hardware if you care about software” (Renu Raman)
EXPLANATION
Renu emphasizes that serious software developers should build their own hardware, and vice‑versa, to achieve optimal AI performance. This co‑design approach is presented as a strategic principle for innovation.
EVIDENCE
She says, “people who are serious about software should make their own hardware” and the corollary for hardware developers, underscoring the need for integrated design [34-36].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The importance of hardware-software co-design and building own hardware is echoed in the Waves of Infrastructure talk and AMD’s full-stack strategy, while the complexity and cost of chip development are highlighted as challenges [S1][S23][S24].
MAJOR DISCUSSION POINT
Hardware‑software integration
Argument 4
Partnership with AMD provides a balanced CPU + GPU stack and high‑capacity memory for LLMs (Renu Raman)
EXPLANATION
Renu explains that Proximal Cloud is collaborating with AMD to combine x86 CPUs with a strong GPU roadmap and large memory capacities, enabling support for sizable language models. This partnership is positioned as a way to deliver a “happy blend” of compute resources.
EVIDENCE
She describes AMD’s CPU assets, GPU roadmap, and a memory capacity of 256 GB of HBM supporting 128-billion-parameter models, with plans to increase to 512 GB, facilitating single-node workloads for many customers [105-107].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Renu’s AMD partnership aligns with statements about AMD’s full-stack hardware-software approach and the “happy blend” of CPUs and GPUs discussed in the Waves of Infrastructure session [S1][S23].
MAJOR DISCUSSION POINT
Strategic hardware partnership
AGREED WITH
Jensen Huang
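The 256 GB figure lines up with simple back-of-the-envelope arithmetic: a 128-billion-parameter model stored at 16-bit precision needs roughly 256 GB for weights alone. A minimal sketch of that check (the precision choices and the note about KV-cache headroom are standard assumptions, not figures from the session):

```python
# Back-of-the-envelope memory footprint for LLM weights.
# The 128B-parameter size and 256 GB capacity come from the talk;
# the byte-per-parameter precisions are standard industry values.

def weight_memory_gb(params_billion: float, bytes_per_param: float) -> float:
    """Memory needed for model weights alone, in GB (1 GB = 1e9 bytes)."""
    return params_billion * 1e9 * bytes_per_param / 1e9

for precision, bytes_per_param in [("fp16/bf16", 2.0), ("int8", 1.0), ("int4", 0.5)]:
    gb = weight_memory_gb(128, bytes_per_param)
    print(f"128B params @ {precision}: {gb:.0f} GB")

# 128B params at fp16 is exactly 256 GB -- matching the cited HBM
# capacity is what makes single-node hosting plausible, though
# activations and KV cache need extra headroom in practice.
```

This is why the planned move to 512 GB matters: it leaves room for inference state rather than just the raw weights.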
Argument 5
Proximal’s platform supports education, health‑science, and research use cases in partnership with UC San Diego (Renu Raman)
EXPLANATION
Renu outlines collaborations with UC San Diego’s data‑science institute to provide compute resources for AI in education, health sciences, and research. The partnership leverages a supercomputing data center and aims to transform curricula and research capabilities.
EVIDENCE
She mentions the UC San Diego partnership, the data-science institute, AI for education, health sciences, and a supercomputing data center used for hardware-level work, compute kernels, and inference engines [108-110].
MAJOR DISCUSSION POINT
Sector‑specific AI deployments
AGREED WITH
Michael Dell
Argument 6
Target query latency of ~20 ms (Google) or ~120 ms for population‑scale services (Renu Raman)
EXPLANATION
Renu cites Google’s historical benchmark of 20 ms per query and proposes a more realistic 120 ms target for large‑scale Indian services. She argues that meeting such latency goals will drive massive infrastructure investment.
EVIDENCE
She references Google’s 20 ms goal and suggests a 120 ms benchmark for population-scale queries, linking this to the need for extensive compute resources [300-304].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The 120 ms latency benchmark for population-scale services mirrors the target presented in the Waves of Infrastructure discussion, which references Google’s historic 20 ms goal [S1].
MAJOR DISCUSSION POINT
Performance benchmarks for large‑scale AI
AGREED WITH
Abhishek Singh
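The connection between a latency target and infrastructure scale can be illustrated with Little’s law (in-flight requests = arrival rate × latency). The 20 ms and 120 ms targets are from the talk; the one-million-queries-per-second load below is a hypothetical example, not a figure from the session:

```python
# Little's law: average number of in-flight requests L = lambda * W,
# where lambda is the query arrival rate and W is per-query latency.
# Illustrative sketch -- the qps figure is an assumed example.

def in_flight_requests(queries_per_sec: float, latency_sec: float) -> float:
    """Average concurrent requests a system must hold at steady state."""
    return queries_per_sec * latency_sec

qps = 1_000_000  # hypothetical population-scale load
for label, latency_ms in [("Google-style target", 20), ("population-scale target", 120)]:
    concurrent = in_flight_requests(qps, latency_ms / 1000)
    print(f"{label} ({latency_ms} ms): ~{concurrent:,.0f} concurrent queries")
```

The sketch shows why a stated latency budget translates directly into sizing: every millisecond of per-query latency at a fixed arrival rate adds proportionally to the concurrent capacity the fleet must provision.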
Argument 7
Sustained, multi‑decade investment is essential to build a sovereign AI hardware ecosystem (Renu Raman)
EXPLANATION
Renu stresses that long‑term, consistent funding is required to develop a domestic AI hardware stack, similar to historic investments in semiconductors. She frames this as a prerequisite for India’s AI sovereignty.
EVIDENCE
She discusses the need for multi-decade investment, referencing historical cycles and the importance of continuous support for building AI hardware capabilities [315-322].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Long-term investment, including energy infrastructure for compute, is emphasized as critical for building AI-ready hardware ecosystems [S26].
MAJOR DISCUSSION POINT
Long‑term funding for AI infrastructure
AGREED WITH
Abhishek Singh
DISAGREED WITH
Abhishek Singh
Argument 8
Scaling to 10 GW of AI‑ready power could unlock $250 B of hardware spend and create a domestic semiconductor supply chain (Renu Raman)
EXPLANATION
Renu quantifies the economic impact of building 10 GW of AI‑ready power in India, estimating a $250 billion hardware market that would foster a local semiconductor ecosystem. She uses this figure to illustrate the scale of opportunity.
EVIDENCE
She states that each gigawatt requires $25 billion of compute, memory, network, and storage, so 10 GW would represent roughly $250 billion of hardware spend, enabling a domestic supply chain [316-319].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Analyses of AI infrastructure scaling note that 10 GW of AI-ready power could drive on the order of $250 B in hardware spend, enabling a domestic semiconductor supply chain [S29].
MAJOR DISCUSSION POINT
Economic potential of AI‑ready power
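The arithmetic behind this estimate is simple enough to sketch. The $25 billion-per-gigawatt figure comes from the session; linear scaling to 10 GW is the illustrative assumption:

```python
# Back-of-envelope estimate from the session: roughly $25B of compute,
# memory, network, and storage hardware per gigawatt of AI-ready power.
# Scaling linearly to 10 GW is the assumption made here for illustration.
SPEND_PER_GW_USD_B = 25  # billions of USD per gigawatt

def hardware_spend_usd_b(gigawatts: float) -> float:
    """Estimated hardware spend (billions of USD) for a given AI-ready power capacity."""
    return gigawatts * SPEND_PER_GW_USD_B

print(hardware_spend_usd_b(10))  # 250, i.e. the ~$250 B figure for 10 GW
```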
Argument 9
Open‑source models will play a role analogous to Linux in the next “distributed computing 3.0” era (Renu Raman)
EXPLANATION
Renu predicts that open‑source AI models will democratize distributed computing much like Linux did for operating systems, driving a new wave of innovation. She positions open models as a catalyst for the upcoming era.
EVIDENCE
She notes that open-source models will have a role similar to Linux, enabling new ways to build distributed systems and fostering ecosystem growth [270-272].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Discussions on open-source models driving a new distributed computing era and being comparable to Linux’s impact support this view [S30][S1].
MAJOR DISCUSSION POINT
Open models as enablers of distributed computing
Argument 10
Historical shifts (hypervisors, Linux) illustrate how abstraction layers enable ecosystem growth (Renu Raman)
EXPLANATION
Renu recounts past technology transitions—hypervisors, Linux—that created abstraction layers, allowing rapid ecosystem expansion. She uses these examples to argue that similar layers will emerge with AI models.
EVIDENCE
She references the role of hypervisors (KVM, VMware) and Linux in past shifts, showing how abstraction layers removed middleware costs and enabled massive scaling [280-286].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The Waves of Infrastructure narrative cites hypervisors and Linux as past abstraction layers that spurred ecosystem expansion, illustrating the point [S1].
MAJOR DISCUSSION POINT
Role of abstraction in tech evolution
Argument 11
Models act as a new abstraction layer separating compute needs from higher‑level applications (Renu Raman)
EXPLANATION
Renu describes AI models as a fresh abstraction that decouples underlying hardware requirements from application logic, similar to how virtual machines and operating systems functioned previously. This layer is expected to spur both closed and open innovation.
EVIDENCE
She states that “models is a new abstraction layer that provides a higher degree of innovation” and compares it to hypervisors and operating systems [284-287].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The same Waves of Infrastructure discussion describes AI models as a new abstraction layer that decouples hardware requirements from application logic [S1].
MAJOR DISCUSSION POINT
Models as a computing abstraction
Jensen Huang
2 arguments · 149 words per minute · 86 words · 34 seconds
Argument 1
Data processing workloads still run on CPUs, highlighting a gap for acceleration (Jensen Huang)
EXPLANATION
Jensen points out that the majority of structured and unstructured data processing—such as SQL queries in Databricks, Snowflake, and Oracle—still relies on traditional CPUs. This reliance signals a need for accelerated processing solutions.
EVIDENCE
He lists data processing platforms (Databricks, Snowflake, Oracle) and notes they “still completely runs on CPUs” [97-104].
MAJOR DISCUSSION POINT
CPU‑centric data processing
AGREED WITH
Renu Raman
Argument 2
Accelerated data processing must move beyond CPU‑only architectures (Jensen Huang)
EXPLANATION
Building on his earlier point, Jensen argues that future data‑processing initiatives must incorporate specialized accelerators rather than relying solely on CPUs. This shift is essential to meet growing performance demands.
EVIDENCE
He emphasizes that “very soon we’re going to announce a very big initiative of accelerated data processing” because current workloads are CPU-bound [97-104].
MAJOR DISCUSSION POINT
Need for hardware acceleration
AGREED WITH
Renu Raman
Michael Dell
1 argument · 126 words per minute · 73 words · 34 seconds
Argument 1
On‑prem AI factories enable enterprises to keep data local and cut costs (Michael Dell)
EXPLANATION
Michael describes AI factories that allow companies to run AI workloads on‑premise, keeping data where it is generated and reducing the expense of moving data to the cloud. This model is presented as a way to lower overall AI costs for enterprises.
EVIDENCE
He notes that they have delivered “over 3,000 of these AI factories” that bring AI to the data rather than the data to AI, addressing the large amount of on-premise data [118].
MAJOR DISCUSSION POINT
Local AI deployment for cost reduction
AGREED WITH
Renu Raman
Lalit Bhatt
3 arguments · 123 words per minute · 1070 words · 519 seconds
Argument 1
Local compute lowers inference cost for agriculture AI (Lalit Bhatt)
EXPLANATION
Lalit explains that placing compute close to agricultural sensors and imaging data reduces the cost of running inference, which is critical for price‑sensitive farmers. This approach improves efficiency across the entire data‑to‑insight pipeline.
EVIDENCE
He mentions that “we are looking into technologies where we can reduce our cost… it is very difficult to ask a lot of money from the farmer” and that local compute helps keep inference costs low for agriculture applications [144-146].
MAJOR DISCUSSION POINT
Cost‑effective AI for farming
AGREED WITH
Renu Raman
Argument 2
Divium provides a quality‑first inference layer that selects the best model per dollar and automates model upgrades (Lalit Bhatt)
EXPLANATION
Lalit describes Divium’s platform, which evaluates model quality against cost, routes queries to the optimal model, and continuously updates to newer models without breaking production. This ensures both performance and cost efficiency for enterprises.
EVIDENCE
He outlines Divium’s capabilities: measurable evaluations, model selection per dollar, automated upgrades, and single-API access, citing deployments that cut costs by over 60% for a travel aggregator and 30% for an e-pharmacy [170-179].
MAJOR DISCUSSION POINT
Intelligent model routing and cost optimization
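Divium's internals are not described in the session, but the "best model per dollar" idea can be sketched roughly as follows. All model names, prices, and quality scores here are invented placeholders, and the routing rule is an assumption, not Divium's actual logic:

```python
# Hypothetical sketch of quality-first, cost-aware model routing in the
# spirit of what Lalit Bhatt describes: clear a quality bar first, then
# pick the cheapest eligible model. Candidate data is entirely invented.
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    quality: float             # evaluation score on the task, 0..1
    cost_per_1k_tokens: float  # USD

def route(candidates, min_quality=0.8):
    """Pick the cheapest model that clears the quality bar."""
    eligible = [c for c in candidates if c.quality >= min_quality]
    if not eligible:
        # Fall back to the highest-quality model if none clears the bar.
        return max(candidates, key=lambda c: c.quality)
    return min(eligible, key=lambda c: c.cost_per_1k_tokens)

models = [
    Candidate("large-frontier", quality=0.95, cost_per_1k_tokens=0.0150),
    Candidate("mid-tier",       quality=0.86, cost_per_1k_tokens=0.0020),
    Candidate("small-open",     quality=0.71, cost_per_1k_tokens=0.0004),
]

print(route(models).name)  # mid-tier: cheapest model above the 0.8 quality bar
```

A quality-gated cheapest-first rule of this kind is one plausible way the reported 30-60% cost reductions could arise: most traffic drops to cheaper models without the measured quality falling below the bar.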
Argument 3
Lack of standardized evaluation and unpredictable costs hinder Gen‑AI pilot production (Lalit Bhatt)
EXPLANATION
Lalit points out that 90 % of generative AI pilots fail to reach production because quality metrics are undefined and costs can spike dramatically. These challenges make it difficult for enterprises to scale AI initiatives.
EVIDENCE
He notes that “quality is undefined” and “costs are unpredictable,” with price variations of 10-50× and cost spikes when traffic increases, leading to pilot failures [156-165].
MAJOR DISCUSSION POINT
Barriers to AI pilot scaling
Abhishek Singh
4 arguments · 157 words per minute · 955 words · 364 seconds
Argument 1
Custom silicon can offload LLM inference, improving performance and efficiency (Abhishek Singh)
EXPLANATION
Abhishek states that using specialized chips to run large language models can accelerate inference and reduce power consumption compared to general‑purpose CPUs/GPUs. This custom silicon approach is presented as a key performance enhancer.
EVIDENCE
He explains that “we offload the large language models to specific chips and custom silicon” to achieve better inference performance [266-268].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
While custom silicon can accelerate LLM inference, industry analyses highlight the high cost, long lead times, and defect challenges of chip development, providing a counterpoint to the claim [S24].
MAJOR DISCUSSION POINT
Specialized hardware for LLMs
Argument 2
Semiconductor fab data analytics require edge AI to handle petabytes of real‑time data (Abhishek Singh)
EXPLANATION
Abhishek describes how semiconductor manufacturing generates massive amounts of data that must be processed in real time at the edge, requiring AI to classify defects and improve yields. He argues that centralized servers cannot meet these latency and bandwidth needs.
EVIDENCE
He details that fabs generate “7 petabytes of data,” need real-time defect analysis, and require edge computing to process and feed insights back into design, highlighting the scale and speed requirements [324-334].
MAJOR DISCUSSION POINT
Edge AI for fab data processing
Argument 3
Current funding gaps (e.g., 20 crore for startups vs. $100 M for single engineers) risk under‑investment in deep‑tech ventures (Abhishek Singh)
EXPLANATION
Abhishek compares the relatively modest government funding for Indian startups with the massive investments made by global tech firms in individual engineers, arguing that the disparity could hinder deep‑tech development in India.
EVIDENCE
He cites the mismatch between “20 crore of fund for startups” and “$100 M for a single engineer,” suggesting that such gaps threaten sustained innovation [343-352].
MAJOR DISCUSSION POINT
Funding disparity for deep‑tech
AGREED WITH
Renu Raman
Argument 4
Achieving sub‑second, low‑cost query responses for 1.5 billion users is a long‑term engineering challenge (Abhishek Singh)
EXPLANATION
Abhishek raises the question of whether India can deliver sub‑second query latency at a very low monthly cost for its massive population, indicating that meeting such performance at scale will require significant engineering breakthroughs.
EVIDENCE
He asks whether sub-millisecond or sub-second responses can be provided to 1.5 billion users at a cost of about 200 rupees per month, emphasizing the difficulty of the task [292-297].
MAJOR DISCUSSION POINT
Scalable low‑latency AI services
AGREED WITH
Renu Raman
Sandeep Kumar
1 argument · 158 words per minute · 769 words · 290 seconds
Argument 1
Venture‑builder model solves AI challenges such as hallucinations, disambiguation, data privacy, and reliability (Sandeep Kumar)
EXPLANATION
Sandeep outlines how his venture‑builder approach addresses common AI pitfalls: eliminating hallucinations, improving disambiguation, ensuring data privacy at the object level, and guaranteeing reliability for financial‑grade applications.
EVIDENCE
He lists solutions for hallucinations (99 % reliability), disambiguation, data privacy at the raw/object level, and reliability for financial transactions, noting these have been implemented in their systems [207-226].
MAJOR DISCUSSION POINT
Comprehensive AI risk mitigation
Audience
1 argument · 136 words per minute · 428 words · 188 seconds
Argument 1
India’s AI advantage lies in software, on‑prem AI, and productivity gains rather than chip design alone (Audience)
EXPLANATION
The audience member argues that India should focus on software and on‑premise AI solutions to achieve productivity improvements, as chip design will take longer to mature. This perspective frames software as the primary lever for AI leadership.
EVIDENCE
He states that “India has to be in software, it has to be in AI… the chip building-wise is going to take some time” and emphasizes the importance of on-premise data and AI solutions [244-247].
MAJOR DISCUSSION POINT
Strategic focus on software for AI leadership
Agreements
Agreement Points
Both speakers stress that current data‑processing and AI workloads are dominated by CPUs and that accelerated, heterogeneous compute (CPU + GPU) is needed to meet future performance demands.
Speakers: Renu Raman, Jensen Huang
Partnership with AMD provides a balanced CPU + GPU stack and high‑capacity memory for LLMs (Renu Raman)
Data processing workloads still run on CPUs, highlighting a gap for acceleration (Jensen Huang)
Accelerated data processing must move beyond CPU‑only architectures (Jensen Huang)
Renu notes a strategic partnership with AMD to combine x86 CPUs and powerful GPUs for AI workloads [105-107], while Jensen points out that major data-processing platforms still rely exclusively on CPUs and calls for accelerated solutions [97-104]. Both converge on the need for heterogeneous, accelerated compute beyond CPUs.
POLICY CONTEXT (KNOWLEDGE BASE)
The need for heterogeneous CPU-GPU acceleration is echoed in Jensen Huang’s announcement of a new initiative to speed up data-processing workloads toward GPU-based solutions [S57] and aligns with broader calls for edge-centric heterogeneous compute architectures [S55].
On‑premise or local compute is essential to keep data sovereign, reduce costs and improve performance.
Speakers: Renu Raman, Michael Dell
Proximal’s platform supports education, health‑science, and research use cases in partnership with UC San Diego (Renu Raman)
On‑prem AI factories enable enterprises to keep data local and cut costs (Michael Dell)
Renu describes Proximal’s goal of bringing compute close to data, making it sovereign and nearer to memory and business needs [130-133]. Michael Dell describes AI factories that bring AI to the data rather than moving data to AI, reducing costs [118-119]. Both advocate for local compute to keep data on-premise and lower expenses.
POLICY CONTEXT (KNOWLEDGE BASE)
India’s layered sovereignty framework emphasizes controlling critical compute chokepoints while accepting strategic dependencies, highlighting the importance of on-premise resources for data sovereignty and cost efficiency [S43]; the distinction between strategic and technical sovereignty further reinforces this priority [S46].
Deploying compute close to the data source lowers inference cost for domain‑specific applications.
Speakers: Renu Raman, Lalit Bhatt
India needs ultra‑low‑cost, infant‑scale compute for population‑scale AI (Renu Raman)
Local compute lowers inference cost for agriculture AI (Lalit Bhatt)
Renu emphasizes the need for extremely low-cost, infant-scale compute to serve India’s massive population [113-115]. Lalit explains that placing compute near agricultural sensors reduces inference cost, which is critical for price-sensitive farmers [144-146]. Both highlight local compute as a cost-reduction strategy for specific sectors.
POLICY CONTEXT (KNOWLEDGE BASE)
Advocacy for moving inference to the edge to reduce reliance on large data centres supports the claim that proximity to data lowers inference costs, as described in the heterogeneous compute for democratizing AI discussion [S55] and the strategic-technical sovereignty perspective [S46].
Achieving very low query latency at massive scale is a central technical challenge.
Speakers: Renu Raman, Abhishek Singh
Target query latency of ~20 ms (Google) or ~120 ms for population‑scale services (Renu Raman)
Achieving sub‑second, low‑cost query responses for 1.5 billion users is a long‑term engineering challenge (Abhishek Singh)
Renu cites Google’s 20 ms benchmark and proposes a 120 ms target for Indian population-scale services [300-304]. Abhishek asks whether sub-second (or sub-millisecond) response times can be delivered to billions of users at low cost [292-297]. Both converge on the importance of ultra-low latency at scale.
POLICY CONTEXT (KNOWLEDGE BASE)
Low-latency requirements for massive-scale AI services are reflected in the VR/XR latency benchmarks of sub-20 ms for interactive rendering [S65] and in discussions on building population-scale digital public infrastructure for AI that stress latency as a key metric [S61].
Building a sovereign AI hardware ecosystem requires sustained, multi‑decade investment and adequate funding mechanisms.
Speakers: Renu Raman, Abhishek Singh
Sustained, multi‑decade investment is essential to build a sovereign AI hardware ecosystem (Renu Raman)
Current funding gaps (e.g., 20 crore for startups vs. $100 M for single engineers) risk under‑investment in deep‑tech ventures (Abhishek Singh)
Renu stresses the necessity of long-term, continuous funding to develop AI-ready hardware infrastructure [315-322]. Abhishek highlights a mismatch between modest government startup funds and massive private investments, warning of under-investment risks [343-352]. Both agree on the critical role of sustained financing.
POLICY CONTEXT (KNOWLEDGE BASE)
Sustained multi-decade investment is highlighted in the Dell Technologies AI blueprint that earmarks long-term funding for compute infrastructure and energy systems [S63], while policy papers on collaborative financing models and equipment financing schemes underscore the need for dedicated funding mechanisms [S49][S51].
Similar Viewpoints
Both emphasize that software capabilities, especially when coupled with appropriate hardware, are the primary lever for India’s AI leadership, while chip design alone will take longer to mature. Renu argues for co‑design of hardware and software to achieve performance [34-36], and the audience stresses focusing on software and on‑prem AI solutions [244-247].
Speakers: Renu Raman, Audience
Successful AI systems require tight hardware‑software co‑design; “make your own hardware if you care about software” (Renu Raman)
India’s AI advantage lies in software, on‑prem AI, and productivity gains rather than chip design alone (Audience)
Unexpected Consensus
Recognition by a GPU‑centric CEO (Jensen Huang) that the majority of data‑processing workloads remain CPU‑bound and need acceleration, aligning with Renu’s call for heterogeneous compute.
Speakers: Renu Raman, Jensen Huang
Partnership with AMD provides a balanced CPU + GPU stack and high‑capacity memory for LLMs (Renu Raman)
Data processing workloads still run on CPUs, highlighting a gap for acceleration (Jensen Huang)
Despite Jensen Huang leading a company known for GPU acceleration, he acknowledges that most data‑processing still runs on CPUs and calls for accelerated solutions. Renu simultaneously promotes a CPU + GPU blend via AMD partnership, showing an unexpected alignment between a GPU leader and a hardware‑software co‑design advocate.
POLICY CONTEXT (KNOWLEDGE BASE)
Jensen Huang publicly acknowledged that most data-processing workloads remain CPU-bound and require GPU acceleration, confirming the speakers’ view [S57].
Overall Assessment

The discussion reveals strong convergence on several fronts: the necessity of heterogeneous, accelerated compute; the strategic importance of on‑premise/local compute for cost and data sovereignty; the critical challenge of ultra‑low latency at population scale; and the need for long‑term, well‑funded investment to build a sovereign AI ecosystem. Participants also share the view that software innovation, supported by appropriate hardware, is the immediate lever for India’s AI leadership.

High consensus across technical, economic, and policy dimensions, indicating a shared understanding that India’s AI future hinges on integrated hardware‑software solutions, local compute deployment, latency performance, and sustained financing. This consensus suggests coordinated action among industry, academia, and policymakers could effectively advance India’s AI infrastructure and ecosystem.

Differences
Different Viewpoints
India’s AI leadership focus – hardware infrastructure versus software/on‑prem AI solutions
Speakers: Renu Raman, Audience
India needs ultra‑low‑cost infant‑scale compute at population scale (Renu Raman)
India should focus on software and on‑prem AI; chip building will take time (Audience)
Renu emphasizes building domestic, low-cost compute hardware (including an AMD CPU+GPU partnership) as essential for AI sovereignty [113-115][105-107], while the audience member argues that India’s advantage lies in software and on-prem AI, stating that chip design will take longer and the focus should be on software development [244-247].
POLICY CONTEXT (KNOWLEDGE BASE)
The India AI Impact Summit advocated a software-first, stack-centric approach to AI sovereignty, contrasting with hardware-centric ambitions, and the AGI roadmap further shifted emphasis toward algorithms over infrastructure [S43][S44]; Dell’s blueprint also stresses compute infrastructure, illustrating the ongoing debate [S63][S64].
Adequacy of funding for deep‑tech AI hardware ecosystem
Speakers: Renu Raman, Abhishek Singh
Sustained, multi‑decade investment is essential to build a sovereign AI hardware ecosystem (Renu Raman)
Current funding gaps (20 crore for startups vs $100 M for single engineers) risk under‑investment in deep‑tech ventures (Abhishek Singh)
Renu asserts that long-term public and private investment will support the development of AI-ready power and a domestic semiconductor supply chain, citing a $250 billion hardware market from 10 GW of power [315-322][316-319], whereas Abhishek highlights a stark mismatch between modest government startup funds and massive private investments elsewhere, questioning whether sufficient capital will be available [343-352].
POLICY CONTEXT (KNOWLEDGE BASE)
Concerns about funding adequacy are reflected in analyses of global compute-divide financing models that call for credible incentive structures [S49], African policy recommendations for equipment financing schemes [S51], and observations on reduced foreign assistance affecting AI projects [S50].
Target latency for population‑scale AI services
Speakers: Renu Raman, Abhishek Singh
Aim for ~120 ms query latency for large‑scale Indian services (Renu Raman)
Question feasibility of sub‑second or sub‑millisecond responses for 1.5 billion users at low cost (Abhishek Singh)
Renu proposes a realistic benchmark of 120 ms per query, referencing Google’s 20 ms goal as a historical target [300-304], while Abhishek asks whether sub-second (or even sub-millisecond) response times can be delivered to billions of users at a low monthly cost, indicating a more ambitious performance expectation [292-297].
POLICY CONTEXT (KNOWLEDGE BASE)
Target latency for population-scale AI services is informed by VR/XR latency targets of 10-20 ms [S65] and by the population-scale digital public infrastructure discussions that identify latency as a critical performance indicator [S61].
Approach to accelerate data processing workloads
Speakers: Jensen Huang, Renu Raman
Data processing still runs on CPUs; need accelerated data processing (Jensen Huang)
Build a balanced CPU+GPU stack with AMD to handle AI workloads (Renu Raman)
Jensen points out that major data-processing platforms still rely entirely on CPUs and announces a forthcoming accelerated data-processing initiative [97-104], whereas Renu emphasizes a ‘happy blend’ of CPUs and GPUs through an AMD partnership to support AI workloads, suggesting integration rather than a shift solely to accelerators [105-107].
POLICY CONTEXT (KNOWLEDGE BASE)
Accelerating data-processing workloads through GPU-centric strategies is supported by Jensen Huang’s initiative to shift workloads toward GPU acceleration [S57] and by broader calls for heterogeneous compute to democratize AI access [S55].
Unexpected Differences
Hardware‑centric versus software‑centric AI strategy for India
Speakers: Renu Raman, Audience
India needs ultra‑low‑cost infant‑scale compute (Renu Raman)
India should prioritize software and on‑prem AI; chip design will take time (Audience)
While both speakers are part of the same broader initiative, they diverge sharply on the primary lever for India’s AI leadership. Renu’s hardware‑focused roadmap contrasts with the audience’s software‑first stance, an unexpected split given their shared goal of AI advancement.
POLICY CONTEXT (KNOWLEDGE BASE)
The hardware-centric versus software-centric strategic split mirrors the layered sovereignty recommendation to focus on software stacks [S43] and the explicit statement that India’s strength lies in software rather than hardware [S64], underscoring the policy debate.
Overall Assessment

The discussion reveals moderate disagreement centered on strategic priorities (hardware vs software), funding adequacy, performance targets, and technical approaches to acceleration. Participants share a common vision of AI‑driven growth but differ on how to achieve it, reflecting divergent perspectives on investment, infrastructure, and feasibility.

Moderate – while there is consensus on the importance of AI and cost reduction, the differing views on hardware investment, latency goals, and funding mechanisms could lead to fragmented efforts unless reconciled, potentially slowing coordinated progress toward India’s AI ecosystem.

Partial Agreements
All participants agree that lowering AI inference and data‑processing costs is crucial for widespread adoption. Renu proposes ultra‑low‑cost, infant‑scale compute and a balanced CPU‑GPU stack [113-115][105-107]; Lalit stresses edge compute to keep farmer costs low [144-146]; Sandeep describes a venture‑builder approach that mitigates AI risks and improves cost efficiency [207-226]; Jensen calls for dedicated accelerators to move beyond CPU‑bound workloads [97-104].
Speakers: Renu Raman, Lalit Bhatt, Sandeep Kumar, Jensen Huang
Need to reduce AI inference and data‑processing costs (Renu Raman)
Local compute lowers inference cost for agriculture AI (Lalit Bhatt)
Venture‑builder model solves AI challenges including cost (Sandeep Kumar)
Accelerated data processing needed to improve performance (Jensen Huang)
All agree on the importance of keeping AI workloads close to the data source. Renu describes Proximal’s model of bringing compute nearer to data and memory [130-133]; Michael highlights AI factories that keep data on‑prem to cut costs [118]; the audience member emphasizes that most data resides on‑prem and advocates software solutions that operate there [244-247].
Speakers: Renu Raman, Michael Dell, Audience
Compute should be brought close to data/on‑prem (Renu Raman)
AI factories bring AI to the data, reducing costs (Michael Dell)
90 %+ data is on‑prem; focus on on‑prem AI solutions (Audience)
Takeaways
Key takeaways
AI is expected to affect ~95% of work, creating a massive demand for compute and productivity gains.
India requires ultra‑low‑cost, infant‑scale compute infrastructure to serve its large population at scale.
Current data‑processing workloads are CPU‑centric; accelerating these workloads with GPUs or custom silicon is essential.
On‑premise AI factories (Proximal Cloud) enable data locality, reduce latency, and lower inference costs for enterprises.
Hardware‑software co‑design is critical; partnership with AMD provides a balanced CPU + GPU stack with high‑capacity memory for LLMs.
Custom silicon can offload LLM inference, improving performance and efficiency (as highlighted by ZetaVault).
Key application domains demonstrated: agriculture (PharmEx), education and health sciences (UC San Diego), semiconductor fab analytics, and enterprise AI agents.
Model selection and inference cost/quality are major challenges; Divium offers a quality‑first inference layer that auto‑optimizes model choice and upgrades.
Latency benchmarks (≈20 ms for Google, ≈120 ms proposed for population‑scale services) are crucial targets; sub‑second response at low cost remains a long‑term engineering goal.
Sustained, multi‑decade investment (potentially $250 B for 10 GW AI‑ready power) is needed to build a sovereign Indian AI hardware ecosystem.
Open‑source models will play a role analogous to Linux in the next “distributed computing 3.0” era, co‑existing with closed models.
Standardized evaluation metrics for Gen‑AI pilots are lacking, leading to unpredictable costs and low production rates.
Resolutions and action items
Proximal Cloud will continue its partnership with AMD to deliver a balanced CPU/GPU platform with high‑capacity memory for LLM inference.
Proximal will work with UC San Diego and other Indian partners (e.g., CDAC, VVDN) to develop infant‑scale compute nodes for education, health, and agriculture use cases.
Divium will be offered as the inference layer for partners, with ongoing deployments that have already demonstrated 30‑60% cost reductions.
Instant System (venture‑builder) will support startups in addressing hallucinations, disambiguation, data‑privacy, and reliability challenges.
Renu invited interested parties to engage for further collaboration; follow‑up meetings are implied but not formally scheduled.
Collaboration with Indian OEMs (VVDN, Sanmina) is planned to develop domestic chassis and board manufacturing capabilities.
Unresolved issues
How to achieve sub‑second (or ~120 ms) query latency for 1.5 billion users at a price point of ~200 ₹ per month.
Securing sufficient deep‑tech venture capital and government funding to bridge the gap between modest startup grants and the billions needed for large‑scale AI hardware development.
Detailed roadmap and financing plan for building 1 GW to 10 GW of AI‑ready power capacity in India.
Establishing industry‑wide standardized metrics for evaluating model quality, cost, and suitability across diverse use cases.
Whether India will produce globally competitive AI‑hardware companies comparable to NVIDIA, SAP, or Palantir, and what business models will enable that.
Suggested compromises
Adopt a mixed CPU + GPU architecture (AMD partnership) rather than relying solely on GPU or CPU solutions.
Support both closed‑source and open‑source model ecosystems, allowing innovation on both fronts.
Leverage existing hyperscaler hardware while simultaneously nurturing domestic OEM and silicon design capabilities, rather than waiting for a fully indigenous supply chain.
Use incremental, population‑scale latency targets (e.g., 120 ms) as a stepping stone toward the ideal 20 ms benchmark.
Thought Provoking Comments
We, as humanity, underestimate what can be done in 10 years but overestimate what can be done in two years. The big technology shifts happen every 30, 15, 7 years.
Sets a macro‑historical perspective that frames the entire discussion about long‑term planning versus short‑term hype, reminding listeners to think beyond immediate product cycles.
Established the thematic backdrop for the talk, prompting later speakers (e.g., Jensen Huang, Arya Bhattacharjee) to position their initiatives as part of a longer‑term wave rather than a fleeting trend.
Speaker: Renu Raman
The similarity between the microprocessor era of the 90s and today’s foundation‑model era: only a handful of ~150‑person teams can build world‑class models, and it now costs billions of dollars in GPUs.
Draws a concrete parallel that highlights the concentration of talent and capital required for cutting‑edge AI, making the abstract ‘model race’ tangible.
Shifted the conversation from generic market optimism to a realistic assessment of barriers, leading participants like Lalit Bhatt and Bharat to stress the need for specialized platforms (Divium) and cost‑effective inference solutions.
Speaker: Renu Raman
AI will impact 95 % of work – the blast radius is far larger than the SaaS era’s productivity gains.
Quantifies AI’s potential economic impact, turning a vague promise into a measurable claim that justifies massive infrastructure investment.
Prompted the audience to consider scale (population‑scale compute) and set the stage for later discussions on India’s 10 GW power target and the $250 B hardware spend.
Speaker: Renu Raman
Data processing (structured & unstructured) still runs on CPUs. We will soon announce a big initiative of accelerated data processing.
Highlights a blind spot in the AI hype – the massive, CPU‑bound data‑processing workload that will need acceleration, thereby expanding the scope of required hardware beyond GPUs.
Triggered Renu’s explanation of a “happy blend” of CPUs and GPUs with AMD, and reinforced the narrative that AI infrastructure must serve both traditional data workloads and new LLM workloads.
Speaker: Jensen Huang
90 % of Gen‑AI pilots never make it to production. The three killers are undefined quality, unpredictable costs, and constantly shifting model selection.
Identifies the practical, operational failure points that most enterprises face, moving the conversation from visionary tech to actionable challenges.
Led to a deeper dive into Divium’s solution (model‑quality evaluation, cost‑optimal routing) and sparked interest from the audience about real‑world deployment, influencing the subsequent Q&A focus.
Speaker: Lalit Bhatt (Divium)
India’s future in the AI/semiconductor wave will be driven by software and AI, not by building chips from scratch. On‑prem AI can cut fab productivity losses by 25 % – worth roughly $10 M per day on a 7 nm line.
Provides a concrete national‑strategy viewpoint, aligning the discussion with India’s policy priorities and emphasizing immediate, high‑impact use cases over long‑term chip design.
Shifted the dialogue toward sovereign, low‑cost, infant‑scale compute for India, prompting Renu to discuss power‑to‑hardware economics and the role of local OEMs.
Speaker: Arya Bhattacharjee (Infosys)
Google’s 20 ms query‑response benchmark defined the modern web. For India we should aim for ~120 ms for any query at population scale.
Uses a historic performance target to set a concrete, aspirational metric for the Indian market, turning an abstract “scale” discussion into a measurable engineering goal.
Guided the conversation toward latency‑focused system design, influencing later remarks about network upgrades (800 Gbps Ethernet) and the need for specialized inference hardware.
Speaker: Renu Raman
Models are the new abstraction layer, just as hypervisors separated physical machines from VMs and OSes separated hardware from applications.
Frames the rise of open/closed AI models in familiar systems‑architecture terms, making the concept of “distributed computing 3.0” accessible and highlighting future innovation pathways.
Prompted Abhishek Singh’s question about open‑source models and led to a broader discussion on the ecosystem of open vs closed models, reinforcing the theme of layered abstraction.
Speaker: Renu Raman
If India can build a $250 B hardware stack (10 GW power), it can spawn its own SAP‑like, Palantir‑like companies – but the business model must achieve ~50 % gross margin, not the 30 % typical in India.
Links macro‑economic investment to concrete entrepreneurial outcomes, challenging Indian firms to aim for higher‑margin, globally competitive software businesses.
Steered the conversation toward the viability of Indian “unicorns” in the AI stack, encouraging participants to think about business models, not just technology, and setting up the final discussion on venture funding.
Speaker: Renu Raman
Venture funding mismatch: typical Indian rounds of ₹20 crore versus $100 M pay packages for a single AI engineer abroad. Do we have deep enough capital to build Nvidia‑scale companies in India?
Raises a systemic financing issue that underpins all technical ambitions, questioning whether the ecosystem can sustain the massive capital needs identified earlier.
Created a turning point where Renu turned the question back to the asker, highlighting the need for self‑reflection among founders and investors, and concluding the session with a focus on ecosystem‑wide collaboration.
Speaker: Abhishek Singh
Overall Assessment

The discussion was driven forward by a series of high‑level framing statements (Renu’s long‑term tech cycles, AI’s 95 % work impact) and concrete pain‑point revelations (Jensen’s CPU‑bound data processing, Lalit’s 90 % pilot failure). Each of these sparked new sub‑threads—hardware‑software blend, sovereign Indian compute, latency benchmarks, and financing challenges—that deepened the dialogue from visionary hype to actionable strategy. The most pivotal moments were when participants shifted from abstract potential to real‑world constraints, prompting the audience to consider not only what technology is possible, but how it can be built, funded, and scaled within India’s unique ecosystem.

Follow-up Questions
What is the future of India in AI and semiconductor? How can India capitalize and make a mark?
Understanding strategic pathways for India to become a leader in AI and semiconductor ecosystems is crucial for policy, investment, and talent development.
Speaker: Arya Bhattacharjee
What will open models do for distributed computing? Will we see Distributed Computing 3.0?
Exploring the impact of open‑source AI models on the next generation of distributed systems helps anticipate architectural shifts and ecosystem opportunities.
Speaker: Abhishek Singh
Is sub‑millisecond/sub‑second query processing at population scale (≈1.5 billion users) feasible at low cost (≈200 rupees/month)?
Achieving ultra‑low latency at massive scale is key for consumer‑facing AI services in India; feasibility analysis informs infrastructure and algorithm design.
Speaker: Abhishek Singh
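As a rough, illustrative check of this affordability question, the revenue pool implied by the quoted price point can be computed directly. The exchange rate below is an assumption for illustration, not a figure from the session:

```python
# Back-of-envelope: annual revenue pool if ~1.5 billion users
# each paid ~200 rupees/month for an AI service.
users = 1.5e9
price_inr_per_month = 200
inr_per_usd = 83  # assumed exchange rate, not from the session

annual_revenue_usd = users * price_inr_per_month * 12 / inr_per_usd
print(f"Annual revenue pool: ${annual_revenue_usd / 1e9:.0f}B")
```

At these assumptions the pool is on the order of $43 B per year, which puts the session's $250 B hardware-stack figure into perspective: the capital outlay would exceed several years of the entire consumer revenue pool, underlining why the feasibility question matters.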
What kinds of corporations can emerge from India’s AI/semiconductor push? Could we see companies akin to NVIDIA, Palantir, SAP, or Oracle?
Identifying potential new industry champions guides ecosystem building, talent pipelines, and investment focus.
Speaker: Abhishek Singh
Do Indian venture capitalists and private equity have sufficient deep pockets to fund massive AI/semiconductor ventures comparable to global players?
Funding depth determines whether India can sustain the multi‑billion‑dollar hardware and software investments needed for a sovereign AI stack.
Speaker: Abhishek Singh
How will a 10 GW AI infrastructure business materialize in India? What are the pathways for that business to come to India?
Clarifying the supply‑chain, manufacturing, and financing routes for large‑scale AI compute capacity is essential for national planning and private sector participation.
Speaker: Audience (unidentified)
What is the optimal memory hierarchy (number and types of memory) for AI inference systems?
Memory architecture directly affects performance, power, and cost; research is needed to decide between single vs. multiple memory types for inference workloads.
Speaker: Renu Raman
How should inference‑only distributed systems be architected differently from training systems?
Inference workloads have distinct scalability and latency requirements; defining a dedicated architecture could improve efficiency and cost.
Speaker: Renu Raman
How will the coexistence of open and closed AI models shape the abstraction layer analogous to hypervisors in cloud computing?
Understanding this dynamic will inform standards, interoperability, and competitive strategies for model providers.
Speaker: Renu Raman
What is the optimal deployment strategy for AI‑ready geolocal data centers in India?
Regional data centers are critical for sovereignty and latency; research is needed on location, capacity, and partnership models.
Speaker: Renu Raman
What latency benchmark (e.g., 120 ms) should be targeted for query responses at national scale, and what resources are required to meet it?
Setting realistic performance targets guides infrastructure investment and algorithmic optimization for mass‑market AI services.
Speaker: Renu Raman
What does the investment and manufacturing ecosystem need to look like to build 10 GW of AI compute capacity in India?
Analyzing capital requirements, OEM participation, and domestic fab capabilities is vital for achieving sovereign AI compute at scale.
Speaker: Renu Raman
How can Indian AI companies achieve higher gross margins (e.g., 50 %) compared to current averages (~30 %) and approach models like Palantir’s 95 %?
Exploring business‑model innovations and cost structures can make Indian AI firms globally competitive and financially sustainable.
Speaker: Renu Raman
How can AI be applied in semiconductor fabs for real‑time defect detection, yield improvement, and design feedback?
AI‑driven fab analytics could dramatically reduce costs and improve yields; research is needed on data pipelines, edge inference, and integration with design tools.
Speaker: Renu Raman
How can graph databases be leveraged to organize enterprise data (email, documents, Teams) for AI applications?
Effective data graphing underpins many AI use cases; studying methods to build and maintain such graphs at scale is essential for enterprise AI adoption.
Speaker: Renu Raman
What are the details and implications of the upcoming accelerated data processing initiative announced by Jensen Huang?
The initiative could shift a large portion of data‑processing workloads from CPUs to accelerated hardware, impacting software stacks, cost models, and market dynamics.
Speaker: Jensen Huang

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.