Open Internet, Inclusive AI: Unlocking Innovation for All

20 Feb 2026 16:00h - 17:00h


Session at a glance: Summary, keypoints, and speakers overview

Summary

The panel, moderated by Rahul Matthan, opened by stressing that AI should not be confined to a handful of companies in a single postal code and that democratizing the technology requires new infrastructure [15-21]. Matthew Prince explained that today’s AI is expensive because it relies on massive numbers of NVIDIA GPUs originally designed for gaming and cryptocurrency mining, which consume enormous power and are costly [22-30]. He added that only a tiny global talent pool knows how to build and run large models, further driving up salaries and limiting broader participation [31-33]. Prince argued that rising enrollment in computer-science and AI courses, expanding chip production, and competition among startups will drive down hardware costs, making AI models more of a commodity [44-53][54-57]. He predicted that within five years a frontier-level specialized model could be built for under $10 million, a dramatic drop from today’s multibillion-dollar investments [60-62].


Rajan Anandan countered that India does not need to chase AGI; instead it is deploying billion-parameter models optimized for Indic languages, such as Sarvam’s, which already outperform global voice AI at a fraction of the cost [75-83][84-92]. He emphasized building a sovereign AI stack, citing recent investments in Indian GPU and memory startups and alliances with firms like Paxilica to reduce dependence on foreign chip suppliers [108-119][120-128]. When asked about open-weight models, Rahul noted security concerns, while Prince defended openness, saying AI-doom narratives may be a strategy for incumbent firms to capture regulation and that regulation should target behavior rather than model access [165-172][176-204]. Rajan agreed that open source is essential but argued that the economics of training large models make fully open releases difficult, and that lowering inference costs is the key to affordable AI for India’s billion users [217-227][240-248].


Prince also warned that AI could amplify cyber-attacks, but highlighted that his company uses machine learning to detect threats faster than humans can and expects overall online security to improve over the next decade [264-282]. He further cautioned that dominant search engines’ control over web indexing gives them a data advantage, urging regulators to ensure equal data access and anticipating a new internet business model that rewards content creators rather than traffic [345-368][395-404]. The discussion concluded that achieving AI democratization will depend on cheaper hardware, open-source ecosystems, sovereign data and chip strategies, and nuanced regulation that balances innovation with security [58-62][217-227][176-204][345-368].


Keypoints


Why AI is currently hard and expensive, and how those barriers might fall.


Matthew explains that AI’s cost is driven by a reliance on a single chip supplier (NVIDIA) and massive power needs, plus a tiny pool of specialized talent [22-31]. He notes that enrollment in CS and AI courses is soaring, which should broaden expertise [44-46]. He also predicts that chip competition and economies of scale will drive down per-unit costs, making frontier-level models affordable (≈ $10 M) within five years [58-62].


India’s distinct AI roadmap: low-cost, language-focused models and a sovereign tech stack.


Rajan stresses that India does not need trillion-parameter AGI; instead it needs “highly performant, extremely low-cost” models of 1 billion to 100-200 billion parameters for 1.4 billion users [75-81]. He cites home-grown models (Sarvam) that outperform global voice AI at a fraction of the cost [82-84] and outlines rapid growth in Indian chip-design and GPU startups, as well as the push for a sovereign hardware and compute stack [108-119]. He argues that India will win on the application layer, leveraging massive smartphone penetration and local-language demand [121-132].


The open-source / open-weights dilemma.


Rahul raises the tension between the need for open models to enable rapid innovation and the security risks of releasing highly capable weights [165-174]. Matthew argues that restricting access is a business strategy masquerading as safety concerns and that a more open ecosystem will ultimately prevail [176-204]. Rajan adds that the economics of trillion-dollar model training make pure openness infeasible, yet open models remain “absolutely critical” for the ecosystem [217-252].


Regulation, safety, and the narrative of AI risk.


Matthew suggests that “AI doomers” may be motivated by a desire to capture regulatory advantage, warning that over-regulation could stifle competition [180-197]. Rahul points out the parallel with nuclear-style regulation proposals and the broader societal fear of AI-driven cyber threats [208-212]. Both agree that legal frameworks modeled on the criminal code, which regulate behaviour, may be more appropriate than engineering-centric rules [199-202].


Data access, web crawling, and the future internet business model.


Matthew highlights the asymmetry in web indexing (Google sees far more pages than competitors) and warns that AI could upend the traditional traffic-based monetisation model of the internet [345-403]. He calls for new compensation mechanisms for content creators and suggests that the next five years will see a “new business model” that rewards knowledge creation rather than mere traffic [395-404].


Overall purpose / goal


The panel’s aim was to explore how to democratise artificial intelligence, making the technology, infrastructure, models, and data accessible beyond the current concentration in a few “postal-code” companies, while addressing the technical, economic, regulatory, and societal challenges that this transition entails, with a particular focus on India’s role and opportunities.


Overall tone


The discussion begins with a technical, analytical tone, outlining AI’s cost drivers. It then shifts to a national-strategic, optimistic tone as Rajan outlines India’s emerging capabilities. Mid-conversation the tone becomes critical and cautionary, debating open-source risks and regulatory capture. Toward the end it moves to a forward-looking, hopeful tone, envisioning new internet business models and collaborative solutions. Throughout, the speakers remain collegial but the emphasis oscillates between optimism about rapid progress and concern over concentration of power and safety.


Speakers

Announcer


– Role/Title: Event announcer


– Areas of expertise: (not specified)


– Sources: [S3][S4][S5]


Rahul Matthan


– Role/Title: Moderator; Partner at TriLegal (board member, Bangalore office), leads technology, media & telecom practice


– Areas of expertise: Legal insight, policy, technology, media, telecom, high-value TMT transactions


– Sources: [S9][S10][S11]


Matthew Prince


– Role/Title: Co-founder and CEO of Cloudflare


– Areas of expertise: Internet infrastructure, cloud security, AI, web performance, networking


– Sources: [S6][S7][S8]


Rajan Anandan


– Role/Title: Managing Director of Peak XV Partners (formerly Sequoia Capital India and Southeast Asia)


– Areas of expertise: Technology investment, AI, semiconductor ecosystem, startup ecosystem, digital sovereignty, venture capital


– Sources: (information derived from transcript)


Audience


– Role/Title: Audience members (questioners)


– Areas of expertise: Varied; examples include


– Yuv – individual from Senegal [S12]


– Professor Charu – public administration scholar [S13]


– Dr. Nazar – (role not clearly specified) [S14]


– Sources: [S12][S13][S14]


Additional speakers:


(None identified beyond those listed above)


Full session report: Comprehensive analysis and detailed insights

Moderator Rahul Matthan opened the session by recalling Matthew Prince’s closing remark from his keynote – that the transformative power of artificial intelligence should not be confined to “a handful of companies in the same postal code” – and asked the panel to discuss what infrastructure would be needed to democratise AI given today’s technical and economic barriers [15-21][14].


Matthew Prince began by explaining why AI is currently hard and expensive. He noted that modern AI workloads require massive numbers of GPUs, a market dominated by NVIDIA, whose chips were originally designed for gaming consoles and later repurposed for Bitcoin mining rather than for AI, making them power-hungry and costly [23-30]. Prince also highlighted the scarcity of specialised talent – only a tiny global pool of engineers can design, train and operate large models, driving up salaries and limiting broader participation [31-33].


He then pointed to several forces that could erode these barriers. Enrolment in computer-science and AI-theory programmes has surged worldwide, expanding the talent pipeline [44-46]. The silicon market, after successive shortages, is now experiencing a “glut”, and a growing number of startups, incumbents and hyperscalers are entering GPU production, which should drive down per-unit compute costs [50-54]. As models become more of a commodity, Prince argued that the cost of building frontier-level specialised models could fall to “$10 million or less” within five years [58-62].


Rajan Anandan shifted the focus to India’s distinct AI strategy. He stressed that India does not need to chase artificial general intelligence; instead the priority is to develop “highly performant, extremely low-cost models” of one billion to two hundred billion parameters that can serve 1.4 billion people [75-81]. He cited the home-grown Sarvam models, which already deliver state-of-the-art speech-to-text and text-to-speech in Indic languages at a fraction of the cost of global competitors [82-84].


To sustain this trajectory, Rajan outlined the need for a sovereign AI stack. India accounts for roughly 20% of the world’s semiconductor designers [108-109] and now hosts 35-40 semiconductor startups, ranging from low-power 20 nm system-on-chips to GPU designers such as Agrani and memory firms like C2I, both of which have received fresh investment [110-113]. He argued that a “sovereign AI stack” – covering chips, compute and data – is essential because “our friends are no longer friends, or sometimes they are, sometimes they aren’t” [113-116]; strategic alliances such as the recent partnership with Paxilica are part of this approach [117-119]. He also noted that Indian conglomerates Adani and Reliance announced a combined $100 billion commitment to AI infrastructure at the model layer this week [120-124].


Rajan further emphasized voice-AI economics, noting that current Indian voice AI costs about 3 rupees per minute, while Sarvam can already deliver sub-rupee rates; to achieve mass adoption the cost must fall to roughly 5-10 paisa per minute [239-245].
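As a rough back-of-the-envelope check of the figures quoted in the session (1 rupee = 100 paisa), the implied cost reduction from today’s pricing to the mass-adoption target can be sketched as:

```python
# Back-of-the-envelope check of the voice-AI cost targets quoted in the session.
# Figures are as stated by the panel; 1 rupee = 100 paisa.

current_cost_paisa = 3 * 100        # ~3 rupees per minute for Indian voice AI today
target_low, target_high = 5, 10     # mass-adoption target: 5-10 paisa per minute

# Required cost reduction relative to today's ~3 rupees/minute
reduction_low = current_cost_paisa / target_high   # at the 10-paisa end
reduction_high = current_cost_paisa / target_low   # at the 5-paisa end

print(f"Needed reduction: {reduction_low:.0f}x to {reduction_high:.0f}x")
```

In other words, the 5-10 paisa target implies roughly a 30x to 60x drop in per-minute inference cost from the current ~3-rupee baseline.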


He highlighted a startup, Cloud Physician, which has amassed proprietary ICU data from tier-2/3 towns and used it to build a dozen specialised healthcare models now being commercialised in the U.S. [250-255]. Rajan also pointed out that India has collected less than 1% of the data needed for AGI, underscoring the opportunity for data-collection startups and the importance of smart data regulation [260-267].


The panel then debated the open-source/open-weights dilemma. Rahul warned that releasing highly capable models as open weights could enable “malicious fine-tuning” and other security threats [165-174]. Prince responded that attempts to restrict access are often a “business strategy” masquerading as safety concerns, noting that incumbents may deliberately amplify AI-doom narratives to capture regulation and preserve market dominance [176-197], and he maintained that a more open ecosystem will ultimately prevail [198-204]. Rajan agreed that openness is “absolutely critical” for the ecosystem [217-221] but cautioned that the economics of training trillion-dollar models make fully open releases untenable, suggesting new economic pathways are needed to reconcile openness with investment recovery [222-232][217-227].


Regulation and safety were further explored. Prince suggested that regulators should focus on the behaviour of systems – applying criminal-code principles to AI rather than trying to control deterministic outputs [199-202]. He also warned that the “AI-doom” narrative may be a tool for regulatory capture, urging caution against over-regulation that could stifle competition [180-197]. Rahul compared AI oversight to nuclear regulation, proposing an “IAEA for AI” [208-212], while Prince reiterated that regulation should target system behaviour [199-202]. The discussion also covered AI’s dual security role: AI can amplify phishing, social-engineering and sophisticated breaches (e.g., the SalesLoft incident) [266-276], yet Cloudflare’s machine-learning-driven threat detection shows how AI can make the internet more secure, with Prince predicting that “in ten years we are more secure online than we are today” [277-286].


Data-access inequality was identified as another source of asymmetry. Prince warned that Google indexes far more of the web than competitors – roughly six pages to Microsoft’s one, three-and-a-half pages to OpenAI’s one, and ten pages to Anthropic’s one – giving it a decisive training advantage [345-353]. He argued that either regulators must force Google to share its index on equal terms [358-366] or the industry must devise mechanisms such as “pay-to-crawl” to level the playing field. He also cited DeepSeek’s breakthrough pruning algorithm, which efficiently discards large portions of the computation tree, allowing models to run on far fewer chips and illustrating how constraints can drive efficiency [380-387].


When the audience posed questions, several themes resurfaced. On trustworthiness, Prince replied that AI is already “more trustworthy than most humans”, citing self-driving cars that are statistically safer than 99.99% of human drivers, and suggested that trust should be measured against human performance rather than an idealised perfection [450-466][467-470]. On creator compensation, he explained that scarcity (e.g., blocking AI crawlers) forces publishers to negotiate higher licensing fees, as demonstrated by Reddit’s 7× higher payout compared with the New York Times [474-477]. Rajan highlighted the rapid growth of consumer-AI startups in India, noting that “India today has more consumer AI startups than the US” and that recent seed investments are targeting education, healthcare and entertainment for the country’s 900 million internet users [479-486].


In sum, the panel converged on three pillars for AI democratisation: (1) falling hardware costs and model commoditisation – exemplified by Prince’s $10 million frontier-model prediction and India’s emerging chip ecosystem; (2) open-source/open-weights as essential for a healthy ecosystem, tempered by the economics of trillion-dollar training runs; and (3) proportionate regulation that addresses data monopolies, security risks and the shift away from traffic-based monetisation. Unresolved issues include how to fund fully open models while preventing malicious fine-tuning, designing robust creator-attribution and compensation frameworks, finalising India’s sovereign AI-stack roadmap, and defining the next internet business model that rewards knowledge creation rather than mere traffic.


Session transcript: Complete transcript of the session
Announcer

Very few individuals have done more to bring revolutionary and transformative technology into the hands of millions than Matthew and Rajan. Matthew Prince is the co-founder and CEO of Cloudflare, a World Economic Forum Technology Pioneer, and a Council on Foreign Relations member. He has degrees from Harvard, Chicago, and Trinity College, and co-created Project Honey Pot, the largest community tracking online fraud and abuse. Matthew’s founding mission for Cloudflare was to help build a better Internet, a goal that has become increasingly critical in the age of artificial intelligence. Rajan Anandan is one of India’s most influential technology leaders and investors, currently serving as Managing Director of Peak XV Partners, formerly Sequoia. He led Sequoia Capital India and Southeast Asia, where he focused on backing founders building transformative, technology-led businesses.

With decades of experience across entrepreneurship, investing, and global technology leadership, Rajan has played a pivotal role in shaping India’s startup and digital ecosystem. Orchestrating this conversation is Rahul Matthan, who brings the perfect blend of legal insight, policy depth, and the ability to ask the questions everyone else is thinking. Rahul is a board member and partner in TriLegal’s Bangalore office and heads their technology, media, and telecom practice. He has extensive experience advising on high-value TMT transactions in the country. He has worked with companies across sectors, from telecom majors to Internet and data service providers, offering advice on regulatory matters and operational issues. So please join me in welcoming three awesome leaders on stage, and with that, the stage is yours.

Thanks, Rahul.

Rahul Matthan

And since I haven’t worked with you, I’m going to square that circle. Matthew, I just heard your keynote up in the big 3,000-seater hall, and you ended with a very powerful statement, which is that this wonderful AI technology should not be built by a handful of companies in the same postal code. And that, in many ways, seems to be the driving motivation for having this discussion. But it’s easier said than done, in that AI is a very big and complicated stack. And a lot of that stack actually involves complex hardware. And it’s hard, really, to move that hardware around the Internet. So if we are to democratize AI and if we are to come up with the infrastructure construct that would democratize AI, what would that look like?

And what is your idea, your vision for how this would be, if not now, but sometime soon?

Matthew Prince

Yeah, so let’s talk first about why AI is hard and expensive today. So the first thing is AI requires lots and lots and lots of chips, largely produced today by one manufacturer, NVIDIA, that use a ton of power and are very, very expensive. They were never built to do this. If we’re totally honest, the NVIDIA chips were built to power gaming consoles, right? And then for a while to mine Bitcoin and then magically to create a superintelligence. But if you had started with, let’s create the superintelligence, you would have designed those chips somewhat differently today.

That’s challenge one that keeps AI very, very hard. Challenge two is that it requires a real specialized set of knowledge. There’s a very small set of people in the world that know how to build these models and how to run these systems. And so you have to ask, why is that not something where everyone knows it? If you had known that you could specialize in this in school and literally make $100 million a year, we would all have studied AI, right? And yet if you go back just five years ago, the people who were studying AI were kind of the weirdos. Why was that the case? Well, because AI was one of these fields that kind of had promise in the 70s and had promise in the 80s and had promise in the 90s.

And then everyone was kind of like, you know what? We’re tired of this. And so the AI professor was kind of shunted off to the side. And so if those are the things that today make AI extremely expensive, the question is, are those things permanent states or are they going to change? Well, we can measure one of them already. Already, if you look at enrollment in computer science programs across the world, it is up dramatically, even though supposedly there’s no future for computer scientists, in just the last two years. And then secondly, the enrollment in specifically AI theory courses is off the charts. Every university that used to sort of shutter their course is now standing it up and building it like crazy.

And so I think that over time, we’re gonna have more and more people who are able to do this. And so having to pay enormous salaries for those people, that’s probably not going to be the future. On the chip side, you know, if you have literally a company going from being an obscure gaming company to the most valuable company in the world, obviously a whole bunch of people are going to chase after that. And if you look at the history of silicon, anytime there has been a silicon shortage, it turns into a silicon glut over time. And with GPUs, it’s kind of had hit after hit after hit. I think that what we’re seeing, at least, is that startups, as well as incumbent players, as well as the hyperscalers and other players, are getting involved.

There are so many people who are making this silicon that, no matter what, the price per unit of work done is going to come down. The other thing that I think is encouraging is that if we look at the actual AI models themselves, it doesn’t appear that this is necessarily a case of one company running away with it. It’s sort of like Google gets a lead, and then Anthropic passes them, and then OpenAI passes them, and then someone else passes them, and then Google does it again, and it keeps leapfrogging itself. That, to me, suggests that the actual model making is more likely, in a steady state in the future, to be something like a commodity.

And if that’s the case, if the cost of creating the models is going to come down, if the models themselves are more commodities, I think that we can’t assume that the literally hundreds of billions, if not trillions, of dollars that are going into building the leading AI companies today are safe; that might come crashing down. And my prediction would be that you’ll be able to build models that will be on the frontier. They’ll be more specialized, but they’ll be on the frontier, for tens of millions of dollars in the not-so-distant future. I’ll put a date out there. In five years, you’ll be able to build a frontier-like model within a specialty for $10 million or less.

Rahul Matthan

Rajan, about a year ago, one of these companies from one of these postal codes was here, and you asked a question, you know, what will it take for India to compete? And you were told it’s hopeless, don’t compete. And yet at the summit, we’ve come out with a model that’s, by all accounts – I haven’t yet played with it – competitive. So what Matthew is saying seems to be working out, but he’s putting a five-year timeline. I would argue that perhaps we could be more aggressive with that timeline. So what is your view on this as someone who is actually, you know, in India, working with some of these really smart people who are working under constraints?

but are yet putting out some fairly impressive models, interesting use cases. What’s life like at the other end of the absolute frontier? I mean, what people call the absolute frontier of these models. What is life at the other end of it, where there are many different applications, many different use cases, and many different types of models? Yeah.

Rajan Anandan

Firstly, hi, everyone. Great to have all of you here. So I think the first thing is, look, Matthew, I don’t know whether you’ve been following a company called Sarvam. I think, firstly, it’s important that India is not trying to get to AGI. With 1.4 billion humans and a million Indians turning 18 every month, AGI is not the thing that we need. Our focus really is to uplift 1.4 billion Indians. And I think our ecosystem, our innovators, our government, our investors, our technologists, our engineers are all of the view that that’s what we have to do. But to really do that, you don’t need trillion or five-trillion-parameter models. What you need are highly performant, extremely low-cost models that are a billion parameters to maybe 100 or 200 billion parameters.

And actually, for the amount that you mentioned, we have launched 30 and 100 billion parameter models that are SOTA in Indic languages. In fact, I don’t know whether you know this, Matthew, but if you look at voice AI in Indic languages, Sarvam today is both SOTA in speech-to-text and text-to-speech and is a fraction of the cost of the global models, including the global leader in voice AI. So I think what you’re going to see is the government’s actually – and by the way, the reason Sarvam is able to do that is because of tremendous support from the government, but it’s not just Sarvam. There are 12 large and small language models. By the way, just

a clarification, because when Sarvam launched, I think somebody said India is really good at small language models. The last time I checked ChatGPT and Gemini, anything above 30 billion is actually a large language model. So we are actually in the large language model race. Just a clarification there. So basically, we have 12 companies, actually 11 companies and BharatGen, which is part of IIT Bombay, that are building these models. I think this number goes to 15, 20 very, very quickly. And I would say actually well within this year, Matthew, in many, many things that India needs, right? We need to uplift 100 million farmers, and for that, we need to build basically models that work on feature phones, right, in local languages.

That’s done, right? That was actually launched on Wednesday and so on. So I think that’s the first thing. And now when you ask this other question of, look, true frontier, which is, you know, the frontier today, let’s call it a few trillion, maybe it’ll go to 5 trillion, 10 trillion. Part of it is also the definition of frontier and, more importantly, what’s the objective, right? If you define frontier that way, Indians are not going to be able to do it with this set of architectures. But what we are going to do, Matthew, to the point that you made, is this: look, LLMs are the most inefficient compute machines ever.

I mean, you know, these are not efficient architectures, right? We believe that this is the beginning, not the end. We believe there’s going to be many more to come after transformers. And I think that is where the bets are going to be made. In fact, I mean, you know, Yann LeCun is here and he sort of said that, look, you know, this is not going to lead us to AGI. We don’t really want to get to AGI, but even if you just look at where AI is going to go. So I’d say at the model layer this week, India entered the race, but we are going to play this race differently, which is:

We’re not going to try to build trillion-parameter models, and we’ll do it at super, super low cost. Now, coming to the chip layer, that’s harder for India, but I don’t know whether you know this, Matthew: 20% of the world’s semiconductor designers are in India. Four years ago, we had no semiconductor startups. Today, we have about 35 to 40. They span the spectrum from low-power, call it 20-nanometer chips – these are all SoCs – all the way up through – actually, two weeks ago, we announced an investment in a GPU company. It’s a very seasoned team, Intel, AMD, a company called Agrani. Monday, this week, we announced an investment in a company called C2I, which is going to make memory, focus on memory.

So we’ll see even at the chip layer, because what is very clear to us, and I think to many in India, is we have to have the sovereign stack. Our friends are no longer friends, or sometimes they’re friends, or sometimes they’re not. And as India, we just need to have the sovereign stack. And we can’t – we, of course, are going to have alliances, and today, I think, a very important alliance was announced with Paxilica and so on. But we’ve got to actually have a sovereign stack. So whether it’s the chip layer, whether it’s the compute layer – I think it’s great that both Adani and Reliance announced $100 billion investments into AI infra this week – or at the model layer.

Where we have excelled is on the application layer. And what I can tell you is, at the application layer – I joined Google in 2011. At the time, India had 10 million connected smartphone users, no venture capital, and no unicorns. Today, we have a lot of venture capital. We have 900 million smartphone users. We don’t have enough capital, but we have enough venture capital, and we have 125 unicorns. At the application layer, I can confidently say, whether it’s consumer, whether it’s enterprise, Indian companies will win. Because the traditional formats of consumer consumption, which is, call it search, or now Gemini, ChatGPT, et cetera, will, in my view, probably scale to 200 or 300 million. They’re not going to scale to a billion Indians.

To scale to a billion Indians, you’ve got to have image, you’ve got to have video, you’ve got to have highly local language, and it’s got to be ultra, ultra low cost. So I do think we have a shot. I think what you’ve just described, we shipped this week. I think by the end of the year, we’re going to ship 100x more because, Matthew, what’s happened in India is we don’t, we can’t – we’re trying to do things differently. Payments is an example, which I think the whole world knows about. And you’ll see that playing out in many other things. And I’ll end with this. Matthew, 2015, India had two space tech

Matthew Prince

And I would just say, don’t sell yourself short. Like, you may not – India may not need AGI, but India may still build AGI. Right. And I think that the thing that actually might end up holding back the biggest AI companies is that they are so unconstrained on resources. If you look at what was the biggest innovation over the last two years that really drove AI forward, it wasn’t anything that Google or OpenAI or anyone else did. It was actually DeepSeek, and DeepSeek’s ability to say that within the constraints of the chips that they had access to, they had two incredible innovations. They would prune the tree more efficiently, and they’d be able to process that pruned tree much more quickly.

I wish DeepSeek had been an Indian company, not a Chinese company. It would have been a little bit more constrained. But it is – I actually think those places with constraints – and I would not be scared away, if you’re an Indian AI company, by hearing about the hundreds of billions of dollars that the big U.S. AI companies are pouring in. That seems like an asset. That seems like an advantage that they have. But in some ways, it’s blinding them to what will be the real innovations that cause AI to become more efficient, that cause AI to become more scalable, and there is no way that the long-term solution to this is you have to turn up a mothballed nuclear power plant.

So we’re going to get more efficient, and I would bet that that efficiency comes from places just like this.

Rahul Matthan

So if I can push back to both of you, I think there’s – not that I disagree with any of this, but to say that – so one of the things that DeepSeek did was it came up with this reasoning model. I guess other people were working on it, but they did a really good job of doing reasoning really well and really powerfully. I mean the real – the DeepSeek innovation was being able to say that you’ve got to build this giant tree if you’re building an AI model. And they were able to say probabilistically there’s a whole bunch of branches on the tree that we can ignore. Like there’s a bunch of things in your life that have happened to you that your brain is just really good at forgetting about, whereas there are a few just salient moments that have formed who you are as a person.

What DeepSeek did was do a better job of pruning that tree. The big US AI models don’t have to do that because they can just say, well, let’s just buy another H200, right? Let’s just keep throwing more money at the problem. Having the constraints, in this case the memory constraints, forced DeepSeek to come up with a better pruning algorithm, which allowed them to deliver AI much, much more efficiently. And I suspect Sarvam did something similar because, I mean, I spoke to Pratyush, and he said that one of the things the big players kept asking when they came around was, how do you do this with 15 people?

And it’s certainly some of those constraints that are at work there. I wanted to talk along similar lines about this idea of open source and open weights. Perhaps let’s stick with open weights, because open source is a contested definition. A lot of the early models were open weight; at that time, there was a lot more open-weight material coming out. Of late, that’s gone down. And the power of open-weight models, and perhaps open source is different, is that you can actually take the weights, tinker around with the model, and customize it to your use case. But increasingly we’ve seen a sort of drop-off, other than the Chinese models, Kimi and Qwen, which are still open weight.

I wanted to just discuss among the two of you, perhaps from different perspectives, maybe the use-case perspective and the internet-infrastructure perspective, how important open weight is. And some of the backroom chatter I’ve been getting is that as these models get more performant, it becomes increasingly dangerous to put out highly performant models as open weights, because of something that OpenAI called malicious fine-tuning: as these models become better, it’s easier to undo the fine-tuning guardrails that have been established so these models don’t do bad things. And so that’s why they won’t be released. So I know that the ecosystem needs open weights, because not everyone has the time and resources to do the training.

And someone perhaps just wants to take the pre-trained model and build on it. But I’m also hearing from the other side that open weights has this fundamental security challenge. And I know we’ll get to security separately, but just on open weights, what’s the way we thread this needle between these two things?

Matthew Prince

Well, okay, I’m going to tell a story. I don’t know that 100% of the story is right, but I think it adds up to something that approximates what’s right. Let’s imagine that over time, you are one of these major model makers: you’re OpenAI, you’re Anthropic, you’re Google. And you look at this and you say, huh, if we keep playing this out, then this is a commodity. And the only way that we win is if we restrict as many people from getting into the game as we possibly can. So how do you do that? I mean, one of the best ways to do that is just to scare everyone.

Scare them into believing that if everybody has this technology, the world is going to end. And so the next time you come across an AI doomer and they say, if everyone has this, the world is going to end, just keep pushing them. Just be like, okay, and then what happens? And then what happens? And then what happens? And basically, the scariest scenario is that these things can design very bad pathogens or other malicious biological vectors that could then get synthesized and spread around society. To which I say, well, then shouldn’t we be regulating the synthesizers, not the technology that’s out there? But, again, it gets to be very hand-wavy.

But if you think about it as a strategy, if you believe that these ultimately are commodities, then what you want to do is actually regulate them in order to make it so that yours is the only company that can be safely trusted to handle this. And I think that that’s, again, somewhat cynically, a lot of the explanation for why the people that are building these horribly dangerous, scary things keep telling you how horribly dangerous and scary they are. I’ve never seen another industry that has done that. You don’t see the automobile industry say, you know, this could plow through a crowd of people and be used in a mass-murder event, right?

You don’t. That just doesn’t make any sense. And so the only way that I can make sense of that is if, from a business perspective, it’s actually an attempt at some sort of regulatory capture. And so I pretty heavily discount what the risks are here. I tend to think that more open is going to win, and I tend to think that the Chinese approach right now is the smartest approach to take against what looks like this enormous money machine which the U.S. is creating. And so, as India thinks about how it’s going to regulate AI, I would be careful about listening to the AI doomers.

I would be especially careful about trying to regulate the output of what is fundamentally at least a pseudo-non-deterministic system. We have built machines that act like humans, and yet we think we can regulate them like machines. The better way to regulate them is actually more like humans. Look to the criminal code, not the engineering code, to figure out what that regulation should look like. And so I am very much pro-open. I think we should think about what these risks and dangers are. We should definitely be testing and looking for them. But I tend to think that they are somewhat overblown. And if you want to understand why they are somewhat overblown, I would argue it is because it’s a strategy to keep the people who are currently in the lead in the lead going forward.

Rahul Matthan

I think you may be absolutely right, because on that big stage that you were on just a short while ago, yesterday, there was a call by one of these companies for an IAEA for AI, that it should be regulated like nuclear technology. And the other example I keep giving is, you know, at the turn of the last century, people were walking around the streets and getting electrocuted, because electricity is highly dangerous. And yet we sit in this room, where literally the walls are buzzing with electricity, and we’re completely safe. And this is the nature of all technologies. But on the positive side, Rajan, a large number of the AI deployers that we have in India are relying on open source.

And if the open-source pipeline starts to diminish, where are they going to go? I mean, Sarvam can certainly deliver these models, but how important is this actually to that community? AI and ChatGPT and all this is all well and good, but it’s really those applications, the voice applications, that people need. How dependent are they on open source? What can we do to continue to keep this open?

Rajan Anandan

Look, I mean, firstly, as Matthew said, if you invest a trillion dollars, okay, you can’t give it away for free. It’s as simple as that. It’s just economics. So you can position it any which way you want, but fundamentally it’s about economics, right, and how you build a business, especially if you have to invest so much. No, look, open source is absolutely critical. I mean, Llama is the most recent example, right, where the reality is that if you’re going to launch the next state-of-the-art version of Llama, it’ll be closed, because otherwise how are they going to monetize it, right? Especially when they’re spending $80 billion, $100 billion.

They have other ways to make money, but even for them, $100 billion a year is kind of a lot, you know. So anyway, coming back: look, it’s super important to the ecosystem. I don’t have the answer to that. I think in March you’ll see one of the big companies make a massive, massive announcement on their commitment to open source. But, you know, I think the only way you do this is there has to be a different path, okay, because if you’re going to have to invest hundreds of millions of dollars to build a new model, or billions of dollars, or tens of billions, that’s not really open.

You can’t keep those models open, right? So to the first question that you asked, and Matthew’s response: you really need to have a different way of doing this, right? Actually, by the way, if you look at voice AI, for instance, I’ll give you some data. You know, India has very low labor costs. If you look at the human cost of voice today in India, it’s five rupees a minute to about 20 rupees a minute. Five rupees a minute is the lowest you can get; Amex would probably be 40 or 50 rupees a minute. Today’s voice AI costs about three rupees a minute. So already, and that’s why, you’re beginning to see voice AI really take off in India, right?

But even with today’s SOTA model, you can get to about maybe a rupee. Now the question is, even at a rupee, you’re one-fifth the cost of humans. So it’s going to really take off. But if you want to make voice the primary medium through which 1.4 billion Indians will access AI, that’s still too expensive, right? You’ve got to get it down to maybe five paisa or ten paisa. And that’s actually not about open source. It’s about compute, and it’s about the cost of inference. So if you ask me, open source is really, really important. But we have to find a way to get the cost of inference down.

Obviously, model size, all of these things matter, and we can talk about that as well. But the short answer is: look, it’s really important, but it is not clear to me how you do this, especially in the current game that we’re in. Because anybody that wants to be at the frontier, the way the frontier is defined today, actually has to go out and invest, right? And honestly, I don’t know how the Chinese are doing it, because it’s a bit opaque exactly how much they are investing. You’re right, it’s kind of a hedge fund.

Matthew Prince

Which is basically what DeepSeek is; they run this on the side. META is the fascinating question here, because it took me a really long time to understand META’s strategy. Like, why are they doing all this VR? Why are they doing all of this AI? What they learned was the lesson that if you are caught on the wrong side of a platform shift, you then become beholden to some other platform. In the past, they were on the web, and that was fine: social worked, and no one controlled the underlying platform. Then the platform shifted, and all of a sudden it was mobile, and they were beholden to both Apple and Google. That put them on a back foot, and it really limited their business.

So they are desperate, whatever the next platform shift is, to stay in front of that platform shift. And for a while that looked like it might be VR. That is less likely today, although never count these technologies out. The real next platform shift is almost certainly going to be AI. And so, if you control the social graph, which is an unreplicable kind of asset that they have, they need to make sure that whatever the next platform is, they control it, or at least have an equal seat at the table with everyone else. So they continue to invest in open source, and you’re like, why are they spending so much money to do this?

It’s to make sure that as the next platform shift happens, they aren’t in the same back-footed position that they were in with Apple and Google. That would be my analysis of META.

Rahul Matthan

I don’t want to comment either way, but the claim is that AI is going to accelerate cyber attacks, because agentic swarms, et cetera, can do much more dangerous things. So what’s the evidence that you have for this?

Matthew Prince

Yeah, so I think this is sort of a long-term good news, short-term scary headline story. Let’s start with the short-term scary headlines. There are going to be a whole bunch of scary headlines about bad things that AI does. There will be a story about an Indian family who lost all their money because they wired it to criminals who made it seem like their daughter had been kidnapped. I mean, we’re already seeing the level and sophistication of phishing scams go through the roof in terms of what is being done. And so the bad guys are going to use that to attack. The other thing that we’re seeing: there was an example.

There was a company called Salesloft. It had a program called Drift, a piece of software that was connected into hundreds of thousands of Salesforce instances. Salesloft got breached by a Russian hacker. The Russian hacker didn’t understand how Salesforce worked, so they kind of fumbled around for a really long time. Had they just used AI, which is what we’re now seeing a lot of North Korean and Chinese hackers do, they would have been instantly knowledgeable about how to get as much information out of Salesforce as quickly as possible, and the breach could have been orders of magnitude worse. So those are the bad stories, and there’s going to be real hardship and real pain caused by them.

The counter to that is that folks like us, and I was just with Nikesh from Palo Alto Networks, Jay from Zscaler is here, we’re all using AI. We’re all using AI in our own systems to make them smart. In fact, at Cloudflare, we would have never described ourselves this way, but the whole theory of the company was: let’s get as much Internet traffic flowing through a machine learning system to be able to predict where security threats are. In the same way that three years ago we all looked at ChatGPT and were like, whoa, that’s amazing, internally, about three years ago was the first time the system said, bloop, here’s a new threat that no human has ever identified before.

And that went from being something that happened once in the first 15 years of Cloudflare’s history to now happening on an incredibly regular basis, where the machine learning is able to win. And so I think the good news is that the good guys will always have more data than the bad guys do. Again, with the caveat of regulation preventing us from using it for cybersecurity in various ways. But largely we’re able to do that, and I think that we will actually use AI to stay ahead of these threats. That’s what we’re seeing. It’s going to require some change in any part of your life where you are today relying on what someone looks like or what they sound like in order to verify who they are and give them access to anything secure or confidential.

That’s got to change. And so the simple thing that you should all do with your immediate family at your next holiday meal is decide on a family password. And that seems silly, but I guarantee you at some point some hacker is going to call up and say, hey, your son or your dad or your grandmother or whoever needs money. And if you say, hey, what’s the family password? And they say, I don’t know, Aardvark, you’ll know that it’s a scam, right? So it’s a simple thing that you can do. And it’s going to be these simple things, which I think are going to get translated into the business world. Businesses have got to go away from, oh, the person looked right, so we let them in the door.

Like that can’t happen in the cyber world. And so we’re going to have to lock systems down. There are going to be some scary stories, but I would predict again that in 10 years, we are more secure online than we are today.

Rahul Matthan

Rajan, I wanted to talk about data, because a lot of the conversation is around how the models that we have, Sarvam excepted, are models that are largely built in the West and therefore are Western systems. And I think that’s a very important part of the question.

And I know there’s something of a land grab going on for the data. As far as the data companies in India are concerned, companies are actually hoovering up the data, annotating it, making it ready. What’s the business model for them? Are they feeding this all back to that one pin code in the US, or what’s the negotiating position? And I ask this question because we have a lot of data in this country, but I know that there are countries in Africa where the deal is already done and the data is out the door. There are deals I know of for 25 years’ worth of medical data out of Africa in exchange for setting up an EHR system, because that’s a deal they’ve done.

And I was wondering whether we are thinking about this in a nuanced way. And then, Matthew, I know you’ve got some ideas on crawling; I’ll come to you on that. What is actually happening on the ground with this?

Rajan Anandan

I think the first thing is we don’t have as many. I mean, there are, you know, initiatives or NGOs like AI4Bharat that are collecting data, but if you look at the leading global data companies, they’re not Indian, right? India probably has a handful of startups that are actually in the, quote unquote, business of data for AI. So first and foremost, because these companies are global, you’re absolutely right: all the data that Indians are generating is actually going to that handful of companies. Now, that being said, look, firstly, for Indian companies to actually keep the data here, we have to have model companies, right? Otherwise you have to sell it, because if you’re in the data business, you have to sell it to somebody. But I think the benefit we have is that, honestly, we’ve only collected probably less than one percent of the data we actually need if you really want to get to AGI, right? Like, if you look at physical intelligence and things like that.

And India really has a competitive advantage there. In fact, we’ve been looking for startups we could find and fund that would basically do all kinds of data collection for robotics and things like that. So that’s number one. I think number two, we’re also beginning to see companies that are leveraging their proprietary data in a very, very interesting way. I’ll just give you one example. We have a company called Cloud Physician. It’s an Indian startup. They run remote ICUs in tier-two, tier-three towns in India, and they’ve been doing that for four or five years. They’ve got this extraordinary amount of proprietary data that they’ve now used to actually build about a dozen or so specialized models in healthcare.

And now they’re actually taking those models to market in the U.S. And the kind of data that they have, which they collected over four or five years for a healthcare delivery business, if you will, has been very valuable. So, you know, in our portfolio we have a handful of companies in different spaces that are using data as an advantage to actually build a proposition, which is usually tied to some sort of domain model or something like that. But I do think we need, firstly, a lot more innovation around this. I’m surprised we don’t have more companies that are actually trying to build businesses around India’s data advantage.

And second, I do think we need to have some smart regulation. I don’t know where the regulatory framework is on data; I think that’s going to be super, super important. I do know that AI4Bharat, et cetera, are being quite thoughtful about who they share data with, which is great. So, yeah, that’s sort of where it is. But it’s a huge opportunity for India. My real view is, look, basically all the data on the Internet is accessible to everybody; you just need literally large amounts of capital. Most of the data that we need to get to AGI we don’t have yet.

And we have 1.4 billion people well placed to create it.

Rahul Matthan

But, Matthew, you wanted to intervene in that whole thing. You had something called, maybe you didn’t call it this, maybe the media started to call it, pay-to-crawl, and you may have something more sophisticated like an AI audit or something like that. What’s the idea behind that? Because that’s also part of this democratization of AI as I see it.

Matthew Prince

So firstly, correcting a little bit of a misconception: even with all the money in the world, you still can’t crawl the whole Internet. How much less of the Internet does Microsoft see than Google? Microsoft Bing has thrown a ton of money at it, and for every six pages that Google sees, Microsoft sees one. OpenAI knows how much of an advantage that is. For every 3.5 pages that Google sees, OpenAI sees one. But that means that two-thirds of the Internet is hidden to the most sophisticated model. For Anthropic, it’s almost 10 to 1 in terms of what’s there. And so if you want to ask why Gemini just leapfrogged OpenAI, I don’t think it’s the chips. I don’t think it’s the researchers.

I actually think it’s the data. And I think getting access to data is important. And so if we want to have a level playing field, there’s a real risk that Google is going to leverage the monopoly position they had indexing the Internet yesterday in order to win in the AI market tomorrow. And that’s something that we’re really concerned about. And I think we have to do one of two things. We either have to bring Google down and say that they have to play by the same rules as all the other AI companies. That’s something that you could do from a regulatory perspective, and it’s something that the U.K. is looking into, Canada is looking into.

Australia is looking into. The alternative is: how do we give all the other AI companies the same access that Google has? And that’s, I think, an opportunity to also solve some of the democratization challenges out there. One of the things I really worry about is that AI is going to disrupt the fundamental internet business model. The fundamental internet business model was: create content, drive traffic, and then sell things, subscriptions, or ads. That was it. I don’t care if you’re B2B or B2C. I don’t care if you’re a media company. Create great content, drive traffic, sell things, subscriptions, or ads. AI doesn’t work that way. Just take a media company. If AI scrapes your articles and takes them, let’s say it’s the New York Times, or the Times of India, or whatever it is, you can now go to your AI and just say, summarize all the articles from the New York Times that would be of interest to me.

And you’re going to read it there. Now, that’s great for you as a user. It’s a better user experience, so it’s going to win. But now the Times of India isn’t selling a subscription or an ad. Now the New York Times isn’t getting anyone to click on an ad. And that’s going to make it harder. And to make clear how much harder it’s gotten: ten years ago, for every two pages that Google scraped on the Internet, they sent you one unique visitor. And then you could monetize that visitor by selling them things, subscriptions, or ads. Today, what is it? Thirty to one in Google’s case, fifty to one in Bing’s case. That’s the good news.

In OpenAI’s case, it’s 3,500 to 1. In Anthropic’s case, it’s half a million to 1. They take half a million pages for every one visitor they give back. So AI takes, but it doesn’t always give back. And if the currency of the Internet has been traffic, that traffic is gone. And it’s getting harder and harder to make money through the traditional business model of the Internet. So one of two things happens. One is, well, the Internet just dies. But that’s not going to happen, because the AI companies need the content. They need the information. They need the things that are out there. The alternative is that a new business model emerges. So what happens?

And that’s what’s going to happen over the course of the next five years: a new business model is going to emerge for the internet. And think how exciting that is. Think how rare new business models for something as grand and as large as the internet are, how often they emerge: almost never. And yet we’re all going to live through it, and that’s an incredible opportunity. I don’t know quite what it is, but it has to be some way that the people who are creating the content and creating the value get compensated for the things that they are creating. The encouraging version of this is to think about the music industry. The entire music industry 22 years ago was valued at 8 billion U.S. dollars, which is a lot of money, but it’s not that much money, because that was the Beatles and the Rolling Stones and everything, right? Why was that? Well, because Napster and Grokster and Kazaa and all these things had commoditized music. They were basically taking music, and musicians weren’t getting paid for it anymore. What changed? One day Steve Jobs walked on stage and said, it’s going to be 99 cents per song. iTunes launched almost 22 years ago to this day. And that wasn’t the business model that won, but at least it was a business model, and it started the conversation. That evolved into the business model that won, which is something closer to Spotify, which is now, I don’t know what it is in India, but in the U.S. it’s like ten dollars a month. And what’s incredible is that Spotify last year sent over 12 billion dollars to musicians, more than the entire music industry was worth 22 years ago. And that’s just Spotify. There’s Apple Music and Tidal and TikTok and YouTube and tons of others. There’s more money going into music creation today than at any other time in human history, by an order of magnitude. Now, there are different winners and losers, and we can debate whether or not the right people are winning and the right people are losing. But there is more money going into music creation today than at any time in human history. And so as we figure out what the next business model of the internet is going to be, let’s try not to make it one that’s worse.

Let’s try and learn the lessons, because traffic was always a terrible proxy for quality. So let’s actually find something that is a proxy for quality, and let’s reward the people who are creating that. And the good news is, I think that’s what everyone in this room wants, but it’s also what Sam wants. It’s what Dario wants. It’s even what Elon probably wants. And that’s the sort of thing that is actually going to drive a healthier internet ecosystem. I actually think that a lot of what’s wrong with the world today is that we have monetized traffic. And what that has meant is that we have monetized making people emotional or angry or whatever gets them to click on things, which is part of what’s driven society apart in a lot of ways.

I think if instead what we monetize and what we reward is the creation of human knowledge, that’s what the AI companies want, and that’s what we all want. And I think that’s what we can actually do to bring our society back together.

Rahul Matthan

I want to turn it over to the audience for questions. I don’t want to be the only one asking questions. Hands are going up. I’m going to take three questions at a time.

Matthew Prince

I like Indian audiences. They ask questions. Like you go to the UK and everyone just sits on their hands.

Rahul Matthan

No, no. Indian audiences are very, very engaged; now we’ll have to shut them up because we don’t have time. I’m going to take this one. I’m going to take that one. I’m going to take this one, right? So first up here, yeah? And I have a rule: a question, not a statement. So it has to end with your voice going up a little bit. Then I know it’s a question.

Audience

Sir, this is for you. You’ve touched upon a lot of interesting topics across domains. First of all, I remember you talking about the deterministic AI outcomes. Now AI having crossed the threshold –

Rahul Matthan

Give me the question.

Audience

Okay. So how – So what, in your view, would make AI trustworthy? Is it something to do with explainability, deterministic AI, and what would be the pathways?

Rahul Matthan

Let me get a couple more; otherwise, we won’t get through. The lady at the back there. So question one, and I’ll keep track of it: how do we make AI more trustworthy?

Audience

My question is for Matthew. You mentioned pay-to-crawl. We see robots.txt getting ignored. My question for you is: what makes you believe that AI companies would be equally invested in creator-based compensation when AI crawls the Internet and is not giving back attribution or compensation?

Rahul Matthan

Trustworthy, and how do creators get paid? And attribution. I think she also wants to do attribution. This gentleman here.

Audience

Hi. My question is for Rajan. So, Rajan, you were explaining about the consumer and vertical parts of the application layer. Where are we in terms of investment from a venture capital point of view? How can we match the Y Combinator and a16z level of investment?

Matthew Prince

Great. So AI is already more trustworthy than most humans. The simple fact is that AI is a better driver than 99.99 percent of the humans on the road today. Literally, since I started talking, within a kilometer of where we’re sitting there was an accident between two cars. I mean, we just know that’s happening; we’re sitting in Delhi, right? You will not be able to find any news about that anywhere, in any publication anywhere on Earth. And yet, if one of those two cars had been a self-driving car, it would have been front-page news around the world. The expectations for AI are too high. We have built a system that acts like humans, and we need to think of it as acting like humans.

The smartest CEO that I know in terms of doing this is Robin Vince at BNY Mellon. In their case, they actually have AI employees. The AI employees get an employee number. They get an email address. They get a quarterly review. They can get fired if they don’t do a good job. They can get promoted if they do a good job. I asked if there are any AIs that are supervising humans. He said, not yet, but it’s inevitable. That’s the way to think of it, right, is that they act like humans because they are like humans. And, again, we are all fallible, and we’re all going to make mistakes, but already we see in certain disciplines like driving, AI is better than human beings are.

In terms of getting paid, I think the empirical evidence is this. Forget robots.txt; that’s like a no-trespassing sign anyone can ignore. When you actually block the AI agents, which is what we have done, then they come to the table. And so with big publishers like Condé Nast, Dotdash Meredith, and others, where starting July 1st we said all of the AI companies are blocked, they actually came to the table, got deals done, and were able to get paid. In the case of Reddit, Reddit was willing to block everyone, including even Google. And as a result, the public number is that they got seven times as much for licensing the Reddit corpus as the New York Times did, even though the two corpuses are about the same size.

So again, I think that the first step in any market is having some level of scarcity. As long as you’re making it easy for anyone to take your data, then you’re not going to get paid for it.

Rajan Anandan

Yeah, I think on the question of consumer AI, very few people know this, but India today has more consumer AI startups than the US. In fact, on Tuesday this week at the Pitchfest, just our firm, one firm, announced five new seed investments in AI companies. Four out of the five are consumer AI companies, right? And the reason, and why we think this is going to explode, is that we have 900 million Indians on the internet, 850 million of them active every day, seven hours a day, and every space has potential for tremendous innovation, right? If you take education, online education hasn't been accessible to a large part of the population because it's just been too expensive, right?

But today with AI, you can have a 99-rupees-a-month plan with an AI tutor. In fact, the fastest-growing AI education company in the world is in India, and nobody's really heard of it because, fortunately, these guys are all just in stealth and just building, which is very good. So I think it's a great time to be building in consumer AI. Actually, it's a great time to be building AI companies generally, but especially in consumer AI we're going to see some breakouts. Look, the world's leading consumer AI companies in education, healthcare, entertainment, et cetera, will be either here or in China. They won't be in the Western world, because we just need it more.

Rahul Matthan

The one beautiful thing about this summit is there have been so many wonderful, rich, diverse conversations. This is one of them. Matthew, Rajan, thank you so much. Thank you all for being such a good audience. Thank you.

Related Resources: Knowledge base sources related to the discussion topics (22)
Factual Notes: Claims verified against the Diplo knowledge base (2)
Additional Context (confidence: high)

“Modern AI workloads require massive numbers of GPUs, and the GPU market is dominated by NVIDIA, whose chips are power‑hungry and costly.”

The knowledge base notes that AI requires many chips and that the market is largely supplied by a single manufacturer, NVIDIA, highlighting the dominance and scarcity of GPUs [S1] and discussing the NVIDIA monopoly [S86]; however it does not mention the chips’ origins in gaming consoles or Bitcoin mining, so that detail is not corroborated.

Confirmed (confidence: high)

“Only a tiny global pool of engineers can design, train and operate large AI models, driving up salaries and limiting broader participation.”

The knowledge base confirms a very limited pool of experts worldwide, citing a tiny pool of specialists and estimating roughly 1,000 engineers capable of training extremely large models, which aligns with the report’s statement [S89] and [S51].

External Sources (102)
S1
Open Internet Inclusive AI Unlocking Innovation for All — Hi. My question is to Rajan. So, Rajan, what do you think you were explaining about the consumer and vertical part in th…
S2
https://dig.watch/event/india-ai-impact-summit-2026/open-internet-inclusive-ai-unlocking-innovation-for-all — Very few individuals have done more to bring revolutionary and transformative technology into the hands of millions than…
S3
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Giordano Albertazzi — -Announcer: Role/Title: Event announcer/moderator; Area of expertise: Not mentioned
S4
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Takahito Tokita Fujitsu — -Announcer: Role as event announcer/host, expertise/title not mentioned
S5
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Cristiano Amon — -Announcer: Role/Title: Event announcer/moderator; Areas of expertise: Not mentioned
S6
https://dig.watch/event/india-ai-impact-summit-2026/open-internet-inclusive-ai-unlocking-innovation-for-all — Very few individuals have done more to bring revolutionary and transformative technology into the hands of millions than…
S7
Protecting Democracy against Bots and Plots — In summary, Cloudflare utilizes AI and machine learning to anticipate and address threats and vulnerabilities, while pro…
S8
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Matthew Prince Cloudflare — -Matthew Prince- CEO, Cloudflare (formerly a professor who taught history) -Moderator- Event moderator/host Thank you….
S9
Fireside Conversation: 01 — -Rahul Matthan: Role/Title: Partner at Tri Legal, conversation moderator; Areas of expertise: Legal matters (implied fro…
S10
Open Internet Inclusive AI Unlocking Innovation for All — Very few individuals have done more to bring revolutionary and transformative technology into the hands of millions than…
S11
Keynote-Rishad Premji — -Rahul Mattan: Role/Title: Discussion moderator; Area of expertise: Not specified
S12
WS #280 the DNS Trust Horizon Safeguarding Digital Identity — – **Audience** – Individual from Senegal named Yuv (role/title not specified)
S13
Building the Workforce_ AI for Viksit Bharat 2047 — -Audience- Role/Title: Professor Charu from Indian Institute of Public Administration (one identified audience member), …
S14
Nri Collaborative Session Navigating Global Cyber Threats Via Local Practices — – **Audience** – Dr. Nazar (specific role/title not clearly mentioned)
S15
Multi-stakeholder Discussion on issues about Generative AI — He said that their current hardware technology is too energy consuming and expensive. This signifies the significance o…
S16
Defending the Cyber Frontlines / Davos 2025 — – Matthew Prince: CEO of Cloudflare Matthew Prince: Absolutely. So Cloudflare’s, our mission is to help build a bette…
S17
Global Perspectives on Openness and Trust in AI — Corporate rhetoric has become sophisticated in adopting inclusion language while ultimately promoting closed platforms a…
S18
UK NCSC: AI will escalate the frequency and impact of cyberattacks — The UK’s National Cyber Security Centre (NCSC), a division of GCHQ, hasissuedan assessment focusing on the imminent infl…
S19
The intellectual property saga: The age of AI-generated content | Part 1 — The intellectual property saga: AI’s impact on trade secrets and trademarks | Part 2 The intellectual property saga: app…
S20
Certifying humanity: Labeling content amid AI flood — For much of the public debate around artificial intelligence, attention has been fixed on capability: how powerful model…
S21
Keynote interview with Geoffrey Hinton (remote) and Nicholas Thompson (in-person) — Machines could potentially outperform humans in cognitive tasks
S22
Enhancing rather than replacing humanity with AI — People’s judgment remains crucial, particularly for decisions that involve values, context, or individual circumstances.
S23
Smaller Footprint Bigger Impact Building Sustainable AI for the Future — India focuses on smaller models for specific use cases rather than chasing trillion-parameter models
S24
Sovereign AI for India – Building Indigenous Capabilities for National and Global Impact — Absolutely, Ankit, just trying to, this is something which I know two years back when we said that I’m putting 8000 GPUs…
S25
Semiconductors — Governments worldwide are increasingly recognizing the strategic importance of semiconductors. Policies are being develo…
S26
Inclusive AI For A Better World, Through Cross-Cultural And Multi-Generational Dialogue — Factors such as restricted access to computing resources and data further impede policy efficacy. Nevertheless, the cont…
S27
What policy levers can bridge the AI divide? — – **Affordability**: Internet costs exceeding recommended percentages of income Lacina Kone: H.E. Mr. Solly Malatsi to …
S28
Laying the foundations for AI governance — – The four fundamental obstacles identified by the moderator: time, uncertainty, geopolitics, and power concentration A…
S29
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — Anne Flanagan: Hello, apologies that I’m not there in person today. I’m in transit at the moment, hence my picture on yo…
S30
Building Trustworthy AI Foundations and Practical Pathways — “India has scale, India has linguistic diversity, but India also has a lot of different things.”[63]. “In many regions o…
S31
HETEROGENEOUS COMPUTE FOR DEMOCRATIZING ACCESS TO AI — This comment provides crucial context about India’s position in the global AI ecosystem, distinguishing between applicat…
S32
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Vivek Raghavan Sarvam AI — And these vision models are actually very good for document digitization. They’re very good at language layout understan…
S33
Need and Impact of Full Stack Sovereign AI by CoRover BharatGPT — “What raw material is needed for AI?”[9]. “sovereign AI comes to India, we’ll have the control”[56]. “Indian government …
S34
To share or not to share: the dilemma of open source vs. proprietary Large Language Models — Cost reduction in technology deployment In sum, this analysis illustrates that open source software serves not merely a…
S35
Open Forum #70 the Future of DPI Unpacking the Open Source AI Model — – **Satish** – Has long background in open source, presently part of ICANN and DotAsia organization Audience: My name i…
S36
Cybersecurity regulation in the age of AI | IGF 2023 Open Forum #81 — Gallia Daor:Sure. Thank you. So indeed, in 2019, the OECD was the first intergovernment organization to adopt principles…
S37
From summer disillusionment to autumn clarity: Ten lessons for AI — Additionally, the EU’s long-negotiated AI Act imposes strict rules on AI systems (e.g. high-risk systems must meet safet…
S38
Scaling Enterprise-Grade Responsible AI Across the Global South — “And those engineered systems might require, for example, yes, human in the loop or on the loop, for sure, but also agen…
S39
What is it about AI that we need to regulate? — What is it about AI that we need to regulate?The discussions across the Internet Governance Forum 2025 sessions revealed…
S40
Governance of the Domain Name System and the Future Internet — 1100    Discussion of presentation by Drake 1600Net neutrality, privacy and innovative business models Day 1 – Monday …
S41
Slow politics for fast digital developments — For example, in data policy, many governments are lobbying for wide access to Internet data in order to ensure the prote…
S42
Policy Meets Tech – Journey Diary — Convergence issues: How can the risks from this business model be mitigated in light of newer models linked to artificia…
S43
INCREASING ACCESS TO DATA ACROSS THE ECONOMY — –  Primary research to fill gaps in the existing evidence base on the issues that prevent data sharing, and in particul…
S44
The open-source gambit: How America plans to outpace AI rivals by democratising tech — The AI openness approach will spark a heated debate around the dual nature of open-source AI. The benefits are evident i…
S45
To share or not to share: the dilemma of open source vs. proprietary Large Language Models — Bilel Jamoussi:Great. Thank you. Google has a history of both open source contributions and proprietary developments. Ar…
S46
Open Internet Inclusive AI Unlocking Innovation for All — The discussion revealed sophisticated understanding of the tensions surrounding open-source AI development. Prince offer…
S47
WS #208 Democratising Access to AI with Open Source LLMs — Audience: Is it working now? Yes, perfect. Hi. Thank you very much for your panel and the interesting discussion th…
S48
High Level Session 3: AI & the Future of Work — A significant tension emerged around data ownership and worker compensation. Actor and entrepreneur Joseph Gordon-Levitt…
S49
Socially, Economically, Environmentally Responsible Campuses | IGF 2023 Open Forum #159 — Data ownership and governance concerns are major obstacles in today’s digital landscape. Microsoft recognizes the growin…
S50
Submission by the South Centre to the Draft Issues Paper on Intellectual Property Policy and Artificial Intelligence (WIPO/IP/AI/2/GE(20/1) — Among the possible reasons against new rights in data: data may be already sufficiently protected under existing …
S51
Discussion Report: Sovereign AI in Defence and National Security — Faisal advocates for a strategic approach where countries focus their limited sovereign resources on the most critical c…
S52
Smaller Footprint Bigger Impact Building Sustainable AI for the Future — India’s strategy focuses on deploying smaller, sector‑tailored models that consume less energy and cost, rather than pur…
S53
The State of the model: What frontier AI means for AI Governance — ## Presentation Interruption and Conclusion ## Technical Challenges and Limitations ### Current System Problems ### D…
S54
Evolving AI, evolving governance: from principles to action | IGF 2023 WS #196 — In addition to their internal initiatives, Microsoft recognises the need for active participation from private companies…
S55
Can we test for trust? The verification challenge in AI — Painter describes how frontier safety policies create a framework for companies to set conditional red lines based on sp…
S56
Open Forum #70 the Future of DPI Unpacking the Open Source AI Model — – **Satish** – Has long background in open source, presently part of ICANN and DotAsia organization Audience: My name i…
S57
The Global Power Shift India’s Rise in AI & Semiconductors — High level of consensus with complementary perspectives rather than conflicting views. The speakers come from different …
S58
India faces AI challenge as global race accelerates — China’sDeepSeekhas shaken the AI industry by dramatically reducing the cost of developing generative AI models. While gl…
S59
International multistakeholder cooperation for AI standards | IGF 2023 WS #465 — Florian Ostmann:Thank you, Matilda. So with that set out in terms of what kinds of standards we are focused on and why w…
S60
State of play of major global AI Governance processes — SHAN Zhongde:Thank you very much. It’s a very important initiative worldwide. And we are going to promote development. S…
S61
How to make AI governance fit for purpose? — – Jennifer Bachus- Chuen Hong Lew – Jennifer Bachus- Shan Zhongde- Chuen Hong Lew Innovation should be prioritized ove…
S62
Laying the foundations for AI governance — – **Industry perspective on regulation**: Companies, particularly startups, actually want regulation but need clarity an…
S63
Open Internet Inclusive AI Unlocking Innovation for All — “largely produced today by one manufacturer, NVIDIA, that use a ton of power and are very, very expensive”[2]. “And so i…
S64
The Dawn of Artificial General Intelligence? / DAVOS 2025 — Yoshua Bengio: Yeah, I have a comment about values and trying to make AI behave morally. This question has been studi…
S65
What policy levers can bridge the AI divide? — – **Affordability**: Internet costs exceeding recommended percentages of income Lacina Kone: H.E. Mr. Solly Malatsi to …
S66
Setting the Rules_ Global AI Standards for Growth and Governance — This comment helped explain the seemingly paradoxical situation of competitors collaborating on standards by revealing t…
S67
From Innovation to Impact_ Bringing AI to the Public — Yes. So do you think the whole banking system will become redundant? Because today if I have to make a transaction, I’ll…
S68
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Kiran Mazumdar-Shaw — Addressing potential concerns about technological nationalism, Mazumdar-Shaw emphasised that “sovereignty is not isolati…
S69
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Vivek Raghavan Sarvam AI — And these vision models are actually very good for document digitization. They’re very good at language layout understan…
S70
HETEROGENEOUS COMPUTE FOR DEMOCRATIZING ACCESS TO AI — This comment provides crucial context about India’s position in the global AI ecosystem, distinguishing between applicat…
S71
To share or not to share: the dilemma of open source vs. proprietary Large Language Models — Bilel Jamoussi:Great. Thank you. Bilel Jamoussi:Great. Thank you. Google has a history of both open source contribution…
S72
Democratizing AI: Open foundations and shared resources for global impact — El-Assady emphasised the crucial distinction between “open source” and “open weight” models. Unlike models that merely s…
S73
Driving Social Good with AI_ Evaluation and Open Source at Scale — And obviously just because you open source the software doesn’t mean that the data that’s produced with it is open data….
S74
Open Forum #70 the Future of DPI Unpacking the Open Source AI Model — Audience: My name is Satish and I have a long background in open source. I am presently part of ICANN and DotAsia organi…
S75
Scaling Enterprise-Grade Responsible AI Across the Global South — “And those engineered systems might require, for example, yes, human in the loop or on the loop, for sure, but also agen…
S76
From summer disillusionment to autumn clarity: Ten lessons for AI — Additionally, the EU’s long-negotiated AI Act imposes strict rules on AI systems (e.g. high-risk systems must meet safet…
S77
What is it about AI that we need to regulate? — What is it about AI that we need to regulate?The discussions across the Internet Governance Forum 2025 sessions revealed…
S78
WS #187 Bridging Internet AI Governance From Theory to Practice — Vint Cerf: First, I have to unmute. So thank you so much, Alex. I always enjoy your line of reasoning. Let me suggest a …
S79
WS #82 A Global South perspective on AI governance — AUDIENCE: Ends up. We cannot hear. Rely on ISO 31,000 is what they see as the kind of framework for risk assessments…
S80
A tipping point for the Internet: 10 predictions for 2018 — Figure 1. Current Internet business model In the current Internet business model (Figure 1) user data is collected, pro…
S81
Slow politics for fast digital developments — For example, in data policy, many governments are lobbying for wide access to Internet data in order to ensure the prote…
S82
AI and Digital in 2023: From a winter of excitement to an autumn of clarity — At theeconomic level, the internet business model is based on data. The role of tech companies which process user data, …
S83
Digital business models — In this business model, user data is the core economic resource. When searching for information and interacting on the i…
S84
The Future of the Internet: Navigating the Transition to an Agentic Web — – Aman Bhutani- Malte Kosub Competition and Market Structure Development | Economic | Sustainable development Leurent…
S85
9821st meeting — Ecuador:Mr. President, I thank the United States for convening this important meeting. I also thank the Secretary Genera…
S86
Revisiting 10 AI and digital forecasts for 2025: Predictions and Reality — Losing grip of the NVIDIA monopoly NVIDIA’s dominance is not as unshakeable as it appeared at the start of the year, du…
S87
The challenges of introducing Generative AI into the marketplace — I have been hearing a lot about the shortage of powerful GPUs for AI lately. It seems like the demand is much bigger tha…
S88
From KW to GW Scaling the Infrastructure of the Global AI Economy — Good morning to all of you. As Rakesh has already introduced, two companies are planning for a lot of things together. A…
S89
How Multilingual AI Bridges the Gap to Inclusive Access — Capacity development | Artificial intelligence Data, talent, and compute constraints in building multilingual models H…
S90
!” — To summarize, one would normally expect technological change to increase youth wage inequality – and to a lesser extent …
S91
Comprehensive Report: AI’s Impact on the Future of Work – Davos 2026 Panel Discussion — Examples of complex dialysis machines sitting idle due to lack of trained nurses, aircraft manufacturers lacking mainten…
S92
Agenda item 5: Day 2 Afternoon session — He pointed out that capacity building should be tailored to these specific gaps. The importance of civil society and reg…
S93
Ray Dalio warns of global breakdown behind market turmoil — Billionaire investorRay Daliohas warned that the recent market turbulence is part of a larger global crisis. The turmoil…
S94
Debating Education / DAVOS 2025 — The discussion revealed significant obstacles to reforming higher education institutions. Lawrence H. Summers provocativ…
S95
AI Meets Cybersecurity Trust Governance & Global Security — The discussion revealed tension between regulatory and market-based approaches to AI security. Tiwari argued that “polic…
S96
Keynote-Bejul Somaia — “When intelligence becomes abundant, when a founding team of five can do the work that previously required 50, when ever…
S97
https://dig.watch/event/india-ai-impact-summit-2026/keynote-rishad-premji — Government initiatives to train 10 million young people in AI, along with industry partnerships with universities, are e…
S98
National Strategy for Artificial Intelligence — Subjects that can be classified as artificial intelligence are part of several study programmes, but are most common in …
S99
AI adoption soars in the UK but skills gap looms — AI adoption in the UK hasgrown rapidly, rising by 33% over the past year. According to a new report from AWS, 52% of UK …
S100
Shaping Investment: Spurring Investment in Cyber Sector Start-Ups — Shoaib Yousuf:It’s absolutely an opportunity. It’s absolutely an opportunity. However, the challenge is the scalability …
S101
https://dig.watch/event/india-ai-impact-summit-2026/indias-roadmap-to-an-agi-enabled-future — It is same that in primary sector, there will be a lot of silicon.
S102
The reality behind AI hype — As governments and tech leaders gather at global forums such as the AI Impact Summit in New Delhi, one assumption domina…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Matthew Prince
8 arguments, 186 words per minute, 4453 words, 1431 seconds
Argument 1
AI hardware dependence on NVIDIA GPUs makes AI expensive and inefficient (Matthew Prince)
EXPLANATION
Matthew explains that current AI systems rely heavily on NVIDIA GPUs, which are costly, power‑hungry, and were originally designed for gaming and cryptocurrency mining rather than AI workloads. This hardware dependence drives up the expense and complexity of building AI models.
EVIDENCE
He notes that AI requires “lots and lots of chips” largely produced by NVIDIA, which consume a lot of power and are very expensive, and that these chips were never built for AI but for gaming consoles and Bitcoin mining before being repurposed for superintelligence [23-30].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Multiple sources note that current AI hardware is dominated by NVIDIA GPUs, which consume a lot of power and are very expensive, raising concerns about cost and efficiency [S1][S15].
MAJOR DISCUSSION POINT
Hardware dependence on NVIDIA GPUs inflates AI costs
AGREED WITH
Rajan Anandan
DISAGREED WITH
Rajan Anandan
Argument 2
Chip prices will fall and AI models will become commoditized, enabling frontier models for $10 M in five years (Matthew Prince)
EXPLANATION
Matthew predicts that as more companies enter the silicon market and supply constraints ease, the cost per unit of AI compute will drop, turning AI models into commodities. He forecasts that frontier‑level specialized models could be built for $10 million within five years.
EVIDENCE
He cites the historical pattern of silicon shortages turning into gluts, the entry of many startups and incumbents into GPU production, and the expectation that unit costs will decline [50-54]. He also points to the competitive dynamics among model builders (Google, Anthropic, OpenAI) suggesting a commodity market, and then states his prediction of building frontier models for $10 million in five years [60-62].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Prince himself asks whether the high-cost, high-power hardware situation is permanent or will change, hinting at future price reductions and commoditization [S1].
MAJOR DISCUSSION POINT
Future drop in chip prices will commoditize AI models
AGREED WITH
Rajan Anandan
DISAGREED WITH
Rajan Anandan
Argument 3
The “AI doom” narrative is used to capture regulation; greater openness will ultimately benefit competition (Matthew Prince)
EXPLANATION
Matthew argues that the alarmist “AI doom” story is a strategic tool for incumbent firms to push for regulatory capture that limits competition. He believes that more openness, including open‑weight models, will ultimately favor competition and reduce the power of dominant players.
EVIDENCE
He describes how companies scare the public about existential risks to restrict entry, likening it to regulatory capture, and asserts that being more open will win in the long run, noting the Chinese approach as smarter and warning against AI doomers influencing regulation [176-207].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Analyses of corporate rhetoric describe how firms adopt inclusive language while ultimately promoting closed platforms, providing a counter-perspective to Prince’s openness argument [S17].
MAJOR DISCUSSION POINT
AI doom narrative serves regulatory capture; openness benefits competition
AGREED WITH
Rajan Anandan
DISAGREED WITH
Rajan Anandan, Rahul Matthan
Argument 4
AI will amplify phishing, social‑engineering, and cyber‑attack capabilities, creating short‑term security headlines (Matthew Prince)
EXPLANATION
Matthew warns that AI will make phishing and social‑engineering attacks more sophisticated and widespread, leading to alarming headlines in the near term.
EVIDENCE
He gives examples of increasingly sophisticated phishing scams, a breach at Salesloft where a Russian hacker could have used AI to quickly understand Salesforce, and predicts a surge in such AI-enabled attacks [266-277].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The UK National Cyber Security Centre warns that AI will significantly increase the frequency and impact of cyber-attacks, supporting concerns about AI-enabled phishing and social engineering [S18].
MAJOR DISCUSSION POINT
AI will intensify cyber‑attack threats
Argument 5
AI‑driven threat detection can make networks more secure than human‑only defenses (Matthew Prince)
EXPLANATION
Matthew counters the previous point by highlighting that AI can also strengthen security, as machine‑learning systems can detect novel threats faster and at scale, giving defenders an advantage over attackers.
EVIDENCE
He describes Cloudflare’s ML system that processes massive internet traffic to predict security threats, noting that such detections have become regular and more effective than earlier human-only methods [278-283].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Cloudflare reports stopping over 220 billion attacks daily using AI-driven machine-learning systems, illustrating how AI improves cyber-defense [S16].
MAJOR DISCUSSION POINT
AI improves cyber‑defense capabilities
Argument 6
AI will erode traffic‑based monetization; a new model that directly compensates content creators is needed (Matthew Prince)
EXPLANATION
Matthew explains that AI’s ability to scrape and summarize content reduces the value of web traffic, undermining traditional ad‑based revenue models. He calls for a new business model that rewards creators directly for the knowledge they generate.
EVIDENCE
He outlines how AI can ingest entire news sites and deliver summaries, eliminating the need for users to visit the original site, which collapses traffic-based monetization; he then draws a parallel with the music industry’s shift from piracy to streaming, arguing a similar transformation is required for the internet [368-410].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Prince argues that AI’s ability to ingest and summarize content will collapse traffic-based revenue, calling for new creator-centric business models; this transformation is discussed in the broader context of internet economics [S1].
MAJOR DISCUSSION POINT
Need for creator‑centric internet revenue model
Argument 7
Creating scarcity (e.g., licensing agreements) can force AI firms to pay for copyrighted corpora (Matthew Prince)
EXPLANATION
Matthew suggests that by making data scarce—through licensing blocks or scarcity mechanisms—content owners can compel AI companies to pay for access to copyrighted material, thereby ensuring compensation for creators.
EVIDENCE
He recounts how blocking AI agents forced companies like Reddit to negotiate licensing deals that paid seven times more than the New York Times, illustrating that scarcity can drive payments for data use [474-477].
MAJOR DISCUSSION POINT
Scarcity can be leveraged for creator compensation
DISAGREED WITH
Rajan Anandan, Audience
Argument 8
AI systems can be more trustworthy than most humans in safety‑critical tasks, suggesting that AI may outperform human judgment in certain domains.
EXPLANATION
Prince argues that AI already demonstrates higher reliability than the vast majority of human operators in areas such as autonomous driving, indicating that AI can be a more dependable agent in specific contexts.
EVIDENCE
He states that “AI is already more trustworthy than most humans” and gives the example that a self-driving car would be safer than 99.99% of human drivers, illustrating AI’s superior safety performance [450-452].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Expert commentary notes that machines can outperform humans in many cognitive tasks and that AI can be safer than the vast majority of human operators, especially in autonomous driving [S21][S22].
MAJOR DISCUSSION POINT
AI reliability versus human performance
Rajan Anandan
11 arguments, 195 words per minute, 2620 words, 802 seconds
Argument 1
India should prioritize low‑cost, smaller language models for local use rather than pursuing AGI (Rajan Anandan)
EXPLANATION
Rajan argues that India’s priority should be building affordable, high‑performing models of up to a few hundred billion parameters tailored to Indic languages, rather than chasing trillion‑parameter AGI systems.
EVIDENCE
He notes that India does not need AGI, cites SARVAM’s 30-100 billion-parameter models that are state-of-the-art in Indic languages and cost-effective compared to global models, and emphasizes the need for low-cost models for 1.4 billion people [75-82].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Anandan emphasizes that India does not need AGI and should focus on affordable, high-performing Indic models; this view is echoed in discussions about India’s strategy to build smaller models [S1][S23].
MAJOR DISCUSSION POINT
Focus on affordable, local language models over AGI
AGREED WITH
Matthew Prince
DISAGREED WITH
Matthew Prince, Rahul Matthan
Argument 2
Building a sovereign stack—including domestic chip design and GPU investments—will reduce reliance on foreign hardware (Rajan Anandan)
EXPLANATION
Rajan proposes that India develop its own semiconductor and GPU ecosystem, investing in domestic startups and partnerships, to achieve a sovereign AI stack less dependent on external suppliers.
EVIDENCE
He mentions that 20 % of global semiconductor designers are Indian, the growth from zero to 35-40 semiconductor startups, recent investments in GPU company Agrani and memory firm C2I, and the need for a sovereign stack despite alliances, also noting large Indian corporate AI infra investments [107-119].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
India’s push to scale domestic GPU capacity to 50-60 k units and policy initiatives to develop a local semiconductor ecosystem provide concrete backing for a sovereign AI stack [S24][S25].
MAJOR DISCUSSION POINT
Sovereign AI hardware stack for India
AGREED WITH
Matthew Prince
DISAGREED WITH
Matthew Prince
Argument 3
Open‑source is essential for the ecosystem, but the massive investment required to train models makes fully open models economically unsustainable (Rajan Anandan)
EXPLANATION
Rajan acknowledges the critical role of open‑source for AI development but warns that the billions of dollars needed to train large models make it financially impossible to keep them fully open without a new economic model.
EVIDENCE
He states that open-source is “absolutely critical” yet cites the $80-100 billion spending on models, arguing that such scale of investment cannot be sustained with fully open models and that a different path is needed [217-232].
MAJOR DISCUSSION POINT
Economic limits to fully open AI models
DISAGREED WITH
Matthew Prince, Rahul Matthan
Argument 4
Most Indian data currently flows to a few global data firms; India needs home‑grown data‑collection startups to retain value locally (Rajan Anandan)
EXPLANATION
Rajan points out that the majority of Indian‑generated data is captured by a handful of foreign data companies, and stresses the need for domestic startups that collect and retain data within India to preserve economic value.
EVIDENCE
He observes that only a few Indian startups are in the AI-data business, that global firms currently own most Indian data, and argues for building more local data-collection companies, citing initiatives like AI for Bharat and the need for model companies to keep data in-country [313-319].
MAJOR DISCUSSION POINT
Need for domestic data‑collection ecosystem
DISAGREED WITH
Matthew Prince, Audience
Argument 5
Proprietary domain data (e.g., remote‑ICU telemetry) can be leveraged to build specialized, exportable AI models (Rajan Anandan)
EXPLANATION
Rajan gives an example of an Indian health‑tech startup that uses its own proprietary ICU data to create specialized AI models, which are then commercialized internationally, demonstrating the value of domain‑specific data assets.
EVIDENCE
He describes Cloud Physician, an Indian startup that runs remote ICUs, has amassed extensive proprietary data over several years, built about a dozen specialized healthcare models, and is now selling those models in the U.S. market [319-327].
MAJOR DISCUSSION POINT
Domain data can fuel exportable AI models
Argument 6
Smart regulation is required to govern data sharing and protect national interests (Rajan Anandan)
EXPLANATION
Rajan calls for thoughtful regulatory frameworks around data to ensure that data sharing benefits the nation while safeguarding privacy and strategic interests.
EVIDENCE
He mentions the need for “smart regulation,” references AI for Bharat’s careful data-sharing policies, and stresses that a regulatory framework will be crucial for leveraging data responsibly [329-333].
MAJOR DISCUSSION POINT
Need for smart data regulation
AGREED WITH
Matthew Prince, Rahul Matthan
DISAGREED WITH
Matthew Prince, Rahul Matthan
Argument 7
India’s AI ecosystem now hosts more consumer AI startups than the US, backed by growing venture capital activity (Rajan Anandan)
EXPLANATION
Rajan asserts that India currently has a larger number of consumer‑focused AI startups than the United States, with strong venture‑capital backing, indicating a vibrant domestic AI scene.
EVIDENCE
He notes that India has “more consumer AI startups than the US,” cites a recent Pitchfest where his firm announced five seed AI investments (four consumer), and highlights 900 million internet users with high daily engagement as a market driver [479-488].
MAJOR DISCUSSION POINT
India leads in consumer AI startup activity
Argument 8
Major Indian conglomerates are committing billions to AI infrastructure, signaling strong domestic investment (Rajan Anandan)
EXPLANATION
Rajan highlights recent multi‑billion‑dollar commitments from Indian giants like Adani and Reliance to build AI infrastructure, underscoring significant domestic financial commitment to AI.
EVIDENCE
He references the announcement that Adani and Reliance each pledged $100 billion into AI infrastructure at the model layer, indicating substantial domestic investment [120-122].
MAJOR DISCUSSION POINT
Large Indian corporate AI infrastructure investments
Argument 9
India’s domestic large‑language‑model ecosystem is expanding rapidly, with a growing number of companies building Indic models, indicating a fast‑moving local AI landscape (Rajan Anandan)
EXPLANATION
Rajan points out that dozens of Indian firms, including academic institutions, are already developing large language models in Indic languages, and the number of participants is expected to rise quickly.
EVIDENCE
He mentions that there are 12-15 companies (including IIT Bombay) building large language models and predicts the count will reach 15-20 very quickly [89-91].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Reports highlight dozens of Indian firms and academic institutions actively developing large language models in Indic languages, confirming rapid ecosystem growth [S23].
MAJOR DISCUSSION POINT
Rapid growth of indigenous LLM development
Argument 10
Achieving mass adoption of voice AI in India requires driving inference cost down to a few paisa per minute, far below current rates, highlighting the need for ultra‑low‑cost compute (Rajan Anandan)
EXPLANATION
Rajan explains that while current voice AI costs about three rupees per minute, reaching a price of five to ten paisa per minute is essential to serve a billion‑plus users, and this challenge is rooted in compute and inference efficiency rather than open‑source availability.
EVIDENCE
He provides current cost figures (three rupees per minute) and the target cost (five to ten paisa), emphasizing that lowering inference cost is the key to scalability [239-245].
MAJOR DISCUSSION POINT
Cost reduction for scalable voice AI deployment
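The gap Rajan describes can be sanity-checked with simple arithmetic (assuming the standard conversion of 100 paise to the rupee): moving from roughly three rupees per minute today to a five-to-ten-paisa target implies a 30-60x reduction in inference cost.

```python
# Back-of-the-envelope check of the voice-AI cost gap Rajan describes.
# Assumes 1 rupee = 100 paise; figures are the panel's round numbers.
current_cost = 3 * 100          # ~3 rupees/minute today, expressed in paise
targets = [5, 10]               # stated target of 5-10 paise/minute

for target in targets:
    factor = current_cost / target
    print(f"{target} paise/min target -> {factor:.0f}x cost reduction needed")
```

At the five-paisa target the required reduction is 60x; at ten paise, 30x — which is why Rajan frames the problem as compute and inference efficiency rather than model licensing.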
Argument 11
Future AI breakthroughs will likely come from new architectures beyond transformers, so investing in research on post‑transformer models is essential for staying competitive (Rajan Anandan)
EXPLANATION
Rajan asserts that transformer‑based large language models are highly inefficient and represent only the beginning of AI development, predicting that the next wave of breakthroughs will involve alternative architectures.
EVIDENCE
He describes LLMs as “the most inefficient compute machines ever” and states that “we believe there will be many more [architectures] to come after transformers” [99-102].
MAJOR DISCUSSION POINT
Strategic focus on next‑generation AI architectures
Rahul Matthan
1 argument · 158 words per minute · 1775 words · 673 seconds
Argument 1
Open‑weight models raise security concerns such as malicious fine‑tuning and misuse (Rahul Matthan)
EXPLANATION
Rahul raises the issue that releasing models with open weights can enable malicious actors to fine‑tune them for harmful purposes, creating security risks that need to be addressed.
EVIDENCE
He notes that as models become more performant, open-weight releases increase the danger of “malicious fine-tuning,” making it easier to bypass guardrails, and that this is a fundamental security challenge for the ecosystem [170-175].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Security assessments warn that open-weight releases can be repurposed for malicious fine-tuning, increasing the risk of AI-enabled attacks [S18].
MAJOR DISCUSSION POINT
Security risks of open‑weight AI models
AGREED WITH
Matthew Prince, Rajan Anandan
DISAGREED WITH
Matthew Prince, Rajan Anandan
Audience
3 arguments · 196 words per minute · 177 words · 54 seconds
Argument 1
Trustworthiness may depend on explainability and deterministic behavior, prompting calls for clearer standards (Audience)
EXPLANATION
An audience member asks how AI can become trustworthy, questioning whether explainability and deterministic outcomes are necessary and calling for clearer standards.
EVIDENCE
The audience explicitly asks, “what would make AI trustworthy? Is it something to do with explainability, deterministic AI, and what would be the pathways?” [432-433].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Debates on AI trust emphasize the need for explainability, labeling, and deterministic outcomes as part of emerging standards for trustworthy AI [S20][S21][S22].
MAJOR DISCUSSION POINT
Defining AI trustworthiness standards
Argument 2
Concerns about how creators will receive attribution and payment when AI repurposes their work (Audience)
EXPLANATION
An audience member questions how AI companies will compensate and attribute content creators when AI systems use their work without direct payment.
EVIDENCE
The audience asks, “what makes you believe that AI companies would be equally invested in a creator-based compensation when AI creates the Internet and is not giving back attribution or compensation?” [440-442].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The intellectual-property discourse around AI-generated content underscores challenges of attribution and compensation for creators, aligning with calls for new creator-centric models [S19][S1].
MAJOR DISCUSSION POINT
Creator attribution and compensation in AI
DISAGREED WITH
Matthew Prince, Rajan Anandan
Argument 3
Questions arise on how India can match US‑level VC funding (e.g., Y Combinator, a16z) to scale these startups (Audience)
EXPLANATION
An audience member seeks insight into how India can achieve venture‑capital funding comparable to leading US accelerators and funds to support its AI startups.
EVIDENCE
The audience asks, “Where are we in terms of investment from a venture capital side point of view in terms of how can we match the Y Combinator and a16z level in terms of investments?” [447-449].
MAJOR DISCUSSION POINT
Scaling Indian AI funding to US levels
Announcer
3 arguments · 147 words per minute · 266 words · 108 seconds
Argument 1
Matthew Prince and Rajan Anandan have been instrumental in delivering transformative technology to millions worldwide.
EXPLANATION
The announcer emphasizes that very few people have done as much as Matthew and Rajan to bring revolutionary and transformative technology into the hands of a massive global audience.
EVIDENCE
He explicitly states this claim in the opening line of the session, noting their outsized impact on technology diffusion [1].
MAJOR DISCUSSION POINT
Impact of individual leaders on technology democratization
Argument 2
Matthew Prince’s background as Cloudflare CEO and his extensive academic and entrepreneurial credentials position him as a key architect of a better internet.
EXPLANATION
The announcer lists Prince’s role as co‑founder and CEO of Cloudflare, his degrees from top universities, and his work on Project Honeypot, framing him as a leader in building a more secure and accessible internet.
EVIDENCE
These details appear in sentences describing his positions, education, and founding mission to help build a better Internet [2-4].
MAJOR DISCUSSION POINT
Leadership and expertise driving internet infrastructure
Argument 3
Rajan Anandan’s experience as a founder of Sequoia Capital India and his leadership in the Indian startup ecosystem make him a pivotal figure in shaping India’s digital future.
EXPLANATION
The announcer highlights Anandan’s decades of entrepreneurship, investing, and technology leadership, noting his role in founding Sequoia Capital India and influencing the country’s startup and digital landscape.
EVIDENCE
His influence is described through statements about his background, the founding of Sequoia Capital India, and his pivotal role in India’s startup ecosystem [5-7].
MAJOR DISCUSSION POINT
Influence of venture capital leadership on national digital development
Agreements
Agreement Points
Future reduction in AI hardware costs and commoditization of models will enable broader AI democratization
Speakers: Matthew Prince, Rajan Anandan
AI hardware dependence on NVIDIA GPUs makes AI expensive and inefficient (Matthew Prince)
Chip prices will fall and AI models will become commoditized, enabling frontier models for $10 M in five years (Matthew Prince)
Building a sovereign stack—including domestic chip design and GPU investments—will reduce reliance on foreign hardware (Rajan Anandan)
Achieving mass adoption of voice AI in India requires driving inference cost down to a few paisa per minute, far below current rates (Rajan Anandan)
Both speakers acknowledge that current AI hardware is a cost barrier but expect that chip prices will decline and domestic semiconductor efforts will create a sovereign stack, making AI models cheaper and more accessible (Matthew: [23-30][50-54][60-62]; Rajan: [107-119][239-245]).
POLICY CONTEXT (KNOWLEDGE BASE)
The trend of decreasing hardware costs and the push for smaller, sector-tailored models is highlighted in India’s AI strategy and global cost-reduction discussions, indicating a path toward broader democratization [S52][S58].
Open‑source / open‑weight models are essential for ecosystem health, yet their economic sustainability is uncertain
Speakers: Matthew Prince, Rajan Anandan
The “AI doom” narrative is used to capture regulation; greater openness will ultimately benefit competition (Matthew Prince)
I tend to think that more open is going to win (Matthew Prince)
Open‑source is absolutely critical (Rajan Anandan)
The massive investment required to train large models makes fully open models economically unsustainable (Rajan Anandan)
Both agree that openness is critical for AI progress, but recognize that the huge training costs pose a challenge to keeping models fully open (Matthew: [176-207][198-204]; Rajan: [217-232]).
POLICY CONTEXT (KNOWLEDGE BASE)
Debates on the benefits versus economic challenges of open-source AI are documented in multiple analyses, noting both ecosystem value and sustainability concerns [S44][S45][S46][S56].
India should prioritize low‑cost, locally‑relevant AI models over pursuing massive AGI systems
Speakers: Rajan Anandan, Matthew Prince
India should prioritize low‑cost, smaller language models for local use rather than pursuing AGI (Rajan Anandan)
Chip prices will fall and AI models will become commoditized, enabling frontier models for $10 M in five years (Matthew Prince)
Rajan stresses focusing on affordable, high-performing Indic models, while Matthew predicts that model costs will drop dramatically, making such low-cost models feasible (Rajan: [75-82]; Matthew: [60-62]).
POLICY CONTEXT (KNOWLEDGE BASE)
Policy briefs emphasize India’s focus on smaller, energy-efficient models rather than trillion-parameter AGI, aligning with strategic recommendations for a cost-effective AI roadmap [S52][S57].
Effective regulation of data and AI is needed to ensure fair competition and protect national interests
Speakers: Matthew Prince, Rajan Anandan, Rahul Matthan
AI models need data; regulation may be used to level the playing field (Matthew Prince)
Smart regulation is required to govern data sharing and protect national interests (Rajan Anandan)
Open‑weight models raise security concerns such as malicious fine‑tuning and misuse (Rahul Matthan)
All three highlight the necessity of regulatory frameworks, whether to address data monopolies, ensure sovereign control, or mitigate security risks from open models (Matthew: [358-366]; Rajan: [329-333]; Rahul: [170-175]).
POLICY CONTEXT (KNOWLEDGE BASE)
Calls for clear data ownership, multi-stakeholder governance, and balanced regulation to safeguard competition are reflected in industry and policy discussions [S49][S61][S62].
Similar Viewpoints
Both see openness as a strategic lever for competition and ecosystem health, while warning that commercial pressures may limit pure openness (Matthew: [176-207]; Rajan: [217-232]).
Speakers: Matthew Prince, Rajan Anandan
The “AI doom” narrative is used to capture regulation; greater openness will ultimately benefit competition (Matthew Prince)
Open‑source is absolutely critical (Rajan Anandan)
Both anticipate a future where hardware costs decline, either through market dynamics or domestic chip development, facilitating cheaper AI deployment (Matthew: [50-54][60-62]; Rajan: [107-119]).
Speakers: Matthew Prince, Rajan Anandan
Chip prices will fall and AI models will become commoditized, enabling frontier models for $10 M in five years (Matthew Prince)
Building a sovereign stack—including domestic chip design and GPU investments—will reduce reliance on foreign hardware (Rajan Anandan)
Both acknowledge security risks associated with advanced AI, whether through malicious use or open‑weight exploitation (Matthew: [266-277]; Rahul: [170-175]).
Speakers: Matthew Prince, Rahul Matthan
AI will amplify phishing, social‑engineering, and cyber‑attack capabilities, creating short‑term security headlines (Matthew Prince)
Open‑weight models raise security concerns such as malicious fine‑tuning and misuse (Rahul Matthan)
Unexpected Consensus
Both speakers see a viable path for India to compete in AI despite current hardware constraints
Speakers: Matthew Prince, Rajan Anandan
Chip prices will fall and AI models will become commoditized, enabling frontier models for $10 M in five years (Matthew Prince)
Building a sovereign stack—including domestic chip design and GPU investments—will reduce reliance on foreign hardware (Rajan Anandan)
While Rajan emphasizes building a sovereign hardware stack to overcome dependence, Matthew predicts market-driven price drops will make frontier AI affordable for India, indicating an unexpected alignment that India can achieve competitiveness through both domestic policy and global market trends (Matthew: [60-62]; Rajan: [107-119]).
POLICY CONTEXT (KNOWLEDGE BASE)
Analyses suggest India can remain competitive by leveraging cost-effective models and emerging semiconductor capabilities, despite hardware limitations [S52][S57][S58].
Overall Assessment

The discussion reveals substantial convergence on three fronts: (1) the expectation that AI hardware costs will fall, enabling cheaper, locally‑relevant models; (2) the shared belief that open‑source is vital but faces economic limits; (3) the consensus that regulation—both of data and AI safety—is essential. These agreements suggest a common strategic direction toward democratizing AI through cost reductions, open ecosystems, and thoughtful policy, especially for emerging markets like India.

High consensus on the need for cheaper hardware, open‑source importance, and regulatory frameworks, implying coordinated efforts among industry leaders, investors, and policymakers could accelerate inclusive AI deployment.

Differences
Different Viewpoints
Timeline and cost to achieve frontier AI models
Speakers: Matthew Prince, Rajan Anandan, Rahul Matthan
Chip prices will fall and AI models will become commoditized, enabling frontier models for $10 M in five years (Matthew Prince)
India should prioritize low‑cost, smaller language models for local use rather than pursuing AGI (Rajan Anandan)
Matthew predicts that within five years frontier-level specialized models can be built for $10 million as chip prices fall and AI becomes a commodity [60-62][50-54]; Rajan claims India can launch high-performing, low-cost models within the year to meet local needs, emphasizing affordable Indic models [91-92][75-82]; Rahul pushes for an even more aggressive schedule, suggesting the timeline could be shortened further [66-68].
POLICY CONTEXT (KNOWLEDGE BASE)
Uncertainties around the timeline and expense of frontier AI are highlighted in governance reports that outline technical challenges and safety verification needs [S53][S55].
Openness of AI models versus economic feasibility of fully open‑source large models
Speakers: Matthew Prince, Rajan Anandan, Rahul Matthan
The “AI doom” narrative is used to capture regulation; greater openness will ultimately benefit competition (Matthew Prince)
Open‑source is essential for the ecosystem, but the massive investment required to train models makes fully open models economically unsustainable (Rajan Anandan)
Open‑weight models raise security concerns such as malicious fine‑tuning and misuse (Rahul Matthan)
Matthew argues that the AI-doom story is a strategic tool for regulatory capture and that increased openness will ultimately win competition [176-207]; Rajan acknowledges open-source importance but warns that billions of dollars needed for training make fully open models financially untenable, calling for a different economic path [217-232]; Rahul highlights that open-weight releases create security risks by enabling malicious fine-tuning, underscoring a trade-off between openness and safety [170-175].
POLICY CONTEXT (KNOWLEDGE BASE)
The tension between open-source benefits and commercial viability is explored in several sources discussing market shifts and security implications of open models [S44][S45][S46][S56].
Sovereign AI hardware stack versus reliance on market‑driven price reductions
Speakers: Rajan Anandan, Matthew Prince
Building a sovereign stack—including domestic chip design and GPU investments—will reduce reliance on foreign hardware (Rajan Anandan)
AI hardware dependence on NVIDIA GPUs makes AI expensive and inefficient (Matthew Prince)
Chip prices will fall and AI models will become commoditized, enabling frontier models for $10 M in five years (Matthew Prince)
Rajan pushes for a sovereign stack, investing in Indian semiconductor and GPU startups to lessen dependence on foreign suppliers [107-119]; Matthew points out current AI’s heavy reliance on NVIDIA GPUs, which are costly and power-hungry, but expects competition and silicon gluts to drive down prices, making models cheaper [23-30][50-54].
POLICY CONTEXT (KNOWLEDGE BASE)
Strategic guidance recommends focusing sovereign resources on critical control points while acknowledging the role of market-driven price reductions in hardware availability [S51][S52][S58].
Purpose and approach of regulation in AI
Speakers: Matthew Prince, Rajan Anandan, Rahul Matthan
The “AI doom” narrative is used to capture regulation; greater openness will ultimately benefit competition (Matthew Prince)
Smart regulation is required to govern data sharing and protect national interests (Rajan Anandan)
Open‑weight models raise security concerns such as malicious fine‑tuning and misuse (Rahul Matthan)
Matthew sees the AI-doom narrative as a strategy for regulatory capture that would limit competition, arguing that openness is the better path [176-207]; Rajan advocates for smart regulation to manage data sharing and safeguard national interests, emphasizing the need for clear frameworks [329-333]; Rahul stresses that open-weight models pose security threats, suggesting regulation may be needed to mitigate malicious use [170-175].
POLICY CONTEXT (KNOWLEDGE BASE)
Debates on the purpose of regulation, balancing innovation with safeguards, are reflected in multistakeholder standards work and industry calls for clear, proportionate rules [S59][S61][S62].
Data ownership, creator attribution and compensation
Speakers: Matthew Prince, Rajan Anandan, Audience
Creating scarcity (e.g., licensing agreements) can force AI firms to pay for copyrighted corpora (Matthew Prince)
Most Indian data currently flows to a few global data firms; India needs home‑grown data‑collection startups to retain value locally (Rajan Anandan)
Concerns about how creators will receive attribution and payment when AI repurposes their work (Audience)
Matthew proposes using scarcity, such as licensing blocks, to compel AI companies to pay for copyrighted corpora, citing the Reddit licensing deal that yielded higher payments than the New York Times [474-477]; Rajan notes that most Indian-generated data is captured by foreign firms and calls for domestic data-collection startups to keep value within the country [313-319]; an audience member questions how creators will be attributed and compensated when AI systems use their content without direct payment [440-442].
POLICY CONTEXT (KNOWLEDGE BASE)
Issues of data ownership and fair compensation for creators are highlighted in discussions on worker compensation and data governance frameworks [S48][S49][S50].
Unexpected Differences
Whether India needs to pursue AGI
Speakers: Rajan Anandan, Matthew Prince
India should prioritize low‑cost, smaller language models for local use rather than pursuing AGI (Rajan Anandan)
And I would just say, don’t sell yourself short. Like you may not, India may not need AGI, but India may still build AGI. (Matthew Prince)
Rajan explicitly states that India does not need AGI and should focus on affordable local models, while Matthew counters that India could still build AGI and that constraints can spur innovation, a contrast not anticipated given their shared interest in AI development [75-78][139-141].
POLICY CONTEXT (KNOWLEDGE BASE)
Policy analyses suggest India should prioritize applied, low-cost AI solutions rather than a direct pursuit of AGI, aligning with strategic recommendations [S52][S57].
Feasibility of fully open‑source large models
Speakers: Rajan Anandan, Matthew Prince
Open‑source is essential for the ecosystem, but the massive investment required to train models makes fully open models economically unsustainable (Rajan Anandan)
The “AI doom” narrative is used to capture regulation; greater openness will ultimately benefit competition (Matthew Prince)
Both champion openness, yet Rajan warns that the scale of investment makes fully open models financially impossible, whereas Matthew believes openness will ultimately win and sees the AI-doom narrative as a barrier, revealing an unexpected split on the practicality of open-source at scale [217-232][176-207].
POLICY CONTEXT (KNOWLEDGE BASE)
Feasibility concerns for fully open-source large models are raised in analyses of economic sustainability and security risks associated with open releases [S44][S45][S46][S56].
Overall Assessment

The discussion reveals several substantive disagreements: the timeline and economic path to affordable frontier AI, the role and sustainability of open‑source models, the need for a sovereign hardware stack versus reliance on market price declines, divergent views on the purpose and design of regulation, and contrasting positions on data ownership and creator compensation. While participants share the overarching goal of democratizing AI and enhancing security, they diverge sharply on how to achieve these outcomes.

High – The speakers often articulate opposing strategies (e.g., market‑driven commoditization vs sovereign stack, openness vs economic feasibility), indicating that consensus on policy and investment directions is limited. This fragmentation could slow coordinated action on AI democratization, regulation, and data governance, requiring further dialogue to align on shared objectives.

Partial Agreements
All three agree that AI should become more accessible; Matthew envisions market‑driven price drops making frontier models affordable, Rajan focuses on affordable Indic models to serve 1.4 billion people, and Rahul asks what infrastructure construct would democratize AI [21-22][60-62][75-82].
Speakers: Matthew Prince, Rajan Anandan, Rahul Matthan
Chip prices will fall and AI models will become commoditized, enabling frontier models for $10 M in five years (Matthew Prince)
India should prioritize low‑cost, smaller language models for local use rather than pursuing AGI (Rajan Anandan)
And what is your idea, your vision for how this would be, if not now, but sometime soon? (Rahul Matthan)
All agree that AI introduces new security challenges and that safeguards are needed; Matthew highlights AI‑driven threat detection improving defenses, Rajan calls for smart regulation of data, and Rahul points out security risks of open‑weight models [278-283][329-333][170-175].
Speakers: Matthew Prince, Rajan Anandan, Rahul Matthan
AI‑driven threat detection can make networks more secure than human‑only defenses (Matthew Prince)
Smart regulation is required to govern data sharing and protect national interests (Rajan Anandan)
Open‑weight models raise security concerns such as malicious fine‑tuning and misuse (Rahul Matthan)
Takeaways
Key takeaways
AI today is expensive because it relies on a narrow hardware base (NVIDIA GPUs) and a small pool of specialized talent; both hardware costs and talent scarcity are expected to improve over time.
Model development is moving toward commoditization; frontier‑level models could be built for $10 M within five years, making them accessible to more organizations.
India’s strategy should focus on low‑cost, smaller (1‑200 B parameter) models optimized for local languages and use‑cases rather than pursuing AGI.
Building a sovereign AI stack—including domestic chip design, GPU and memory investments, and strategic alliances—is seen as essential for reducing dependence on foreign hardware.
Open‑source / open‑weights are critical for ecosystem health, but the massive training costs make fully open models economically unsustainable; a balance is needed.
The “AI doom” narrative may be used to capture regulation; greater openness could foster competition, but security concerns (malicious fine‑tuning) remain.
Most Indian data currently flows to a few global firms; creating home‑grown data‑collection and domain‑specific data businesses is necessary to retain value locally.
AI will both amplify cyber‑threats (e.g., sophisticated phishing) and improve defensive capabilities through AI‑driven threat detection.
Traditional internet monetization (traffic‑based ads/subscriptions) will be disrupted by AI; a new model that directly compensates content creators is required.
India’s AI ecosystem is experiencing rapid VC activity and large corporate investments, positioning it to lead in consumer‑focused AI applications.
Resolutions and action items
Rajan announced investments in two Indian hardware startups: Agrani (GPU company) and C2I (memory company) to advance the sovereign stack.
Rajan indicated that within the year India will ship many more low‑cost, high‑performance language models for local applications (e.g., farmer tools, voice AI).
Matthew suggested that regulators should consider leveling the data‑crawling playing field (e.g., requiring Google to share index data) to promote fairness.
Both speakers agreed on the need for smart regulation of data and AI, though specific policies were not defined.
Unresolved issues
How to sustain open‑weight models financially while preventing malicious fine‑tuning and other security risks.
Concrete mechanisms for compensating and attributing content creators when AI repurposes their work.
Specific regulatory frameworks for data collection, sharing, and sovereign AI stacks in India.
Exact timeline and roadmap for achieving the $10 M frontier model target and for broader AI democratization.
How India can match US‑level venture‑capital funding (e.g., Y Combinator, a16z) to scale its AI startups.
Details on the future internet business model that will replace traffic‑based monetization.
Suggested compromises
Create a degree of scarcity (e.g., licensing agreements) to force AI firms to pay for copyrighted corpora while still allowing broader access to data.
Focus on low‑cost, domain‑specific models for Indian needs rather than competing directly on trillion‑parameter AGI models.
Encourage openness in AI research and tools while accepting that the most advanced models may remain partially closed due to investment recovery needs.
Pursue a sovereign hardware stack but maintain strategic alliances with global partners to avoid isolation.
Thought Provoking Comments
AI requires lots and lots of chips, largely produced today by one manufacturer, NVIDIA, which were never built for AI workloads. This hardware monopoly makes AI very expensive and hard to democratize.
He pinpoints the fundamental hardware bottleneck that underlies the cost and accessibility challenges of AI, moving the conversation from abstract policy to a concrete technical constraint.
His observation reframed the discussion to focus on supply‑side hardware issues, prompting both Rahul and Rajan to consider how chip diversification and sovereign stacks could address democratization.
Speaker: Matthew Prince
In five years, you’ll be able to build a frontier‑like model within a specialty for $10 million or less.
Provides a bold, data‑driven forecast that challenges the assumption that AI will remain prohibitively expensive, suggesting a rapid cost decline.
This prediction set a timeline that both Rahul and Rajan used to benchmark India’s progress, shifting the tone from pessimistic to optimistic about near‑term feasibility.
Speaker: Matthew Prince
India is not trying to get to AGI. With 1.4 billion people we need highly performant, ultra‑low‑cost models of a few hundred billion parameters, not trillion‑parameter AGI. We already have 30‑100 billion‑parameter models that are state‑of‑the‑art for Indic languages.
He reframes the AI race from a global AGI competition to a localized, purpose‑driven strategy, emphasizing scale, cost, and language relevance over raw parameter counts.
Rajan’s comment redirected the conversation toward practical, region‑specific solutions, prompting Matthew to discuss constraints‑driven innovation and leading Rahul to probe open‑source and data issues.
Speaker: Rajan Anandan
Constraints can be a catalyst for breakthrough innovation – DeepSeek’s efficient pruning algorithm shows that limited compute can produce superior models, something big, well‑funded companies might overlook.
He challenges the notion that more money and bigger models are the only path forward, highlighting how scarcity can drive creative technical solutions.
This insight encouraged the panel to view India’s resource constraints as potential advantages, influencing Rajan’s optimism about Indian startups and sparking discussion on efficiency versus scale.
Speaker: Matthew Prince
The AI‑doom narrative is a strategic move to capture regulation; companies scare everyone to keep competitors out and protect their lead. More openness will ultimately win.
He critically examines the motives behind AI risk rhetoric, suggesting it may serve corporate interests rather than public safety, and advocates for openness.
This comment shifted the debate from pure safety concerns to the politics of regulation, prompting Rajan to acknowledge the economic realities of open‑source and leading Rahul to explore the balance between openness and security.
Speaker: Matthew Prince
Google indexes far more of the web than any other AI company: for every ten pages Google sees, competitors such as Microsoft, OpenAI, and Anthropic each see roughly one. This data monopoly gives Google a huge advantage and must be addressed through regulation or equal access.
He uncovers a hidden competitive edge rooted in data access, expanding the conversation beyond hardware to the importance of web crawling dominance.
The point introduced a new topic about data equity, causing the panel to discuss potential regulatory interventions and the need for a level playing field, which Rajan linked to sovereign data strategies.
Speaker: Matthew Prince
AI will fundamentally disrupt the internet’s business model that relies on traffic and ads. We need a new model that compensates creators for knowledge, similar to how the music industry evolved from piracy to streaming royalties.
He connects AI’s impact to broader economic structures, using the music industry analogy to illustrate how new value capture mechanisms can emerge.
This macro‑level insight broadened the scope of the discussion, leading to audience questions about creator compensation and prompting Matthew to elaborate on scarcity‑driven licensing deals.
Speaker: Matthew Prince
AI is already more trustworthy than most humans – for example, self‑driving cars are statistically safer than 99.99% of human drivers. Trust should be measured against human performance, not an idealized perfection.
He reframes the trust debate by providing empirical evidence that AI can outperform humans, challenging the prevailing fear‑based narrative.
This comment shifted the tone from caution to confidence, influencing the audience’s follow‑up questions on trustworthiness and prompting Rajan to highlight the rapid growth of consumer AI startups in India.
Speaker: Matthew Prince
Overall Assessment

The discussion was steered by a series of pivotal remarks that moved it from abstract concerns about AI monopolies to concrete strategies for democratization. Matthew Prince's technical and strategic insights about hardware bottlenecks, cost trajectories, data monopolies, and the political use of AI risk reframed the conversation around tangible levers for change. Rajan Anandan's counter‑point, focusing on India's unique needs, low‑cost models, sovereign stacks, and a thriving consumer AI ecosystem, shifted the dialogue from a global, US‑centric view to a regional, application‑driven perspective. Together, these comments opened new sub‑topics: efficiency‑driven innovation, open source versus security, new internet business models, and trustworthiness. They prompted the participants and audience to explore regulatory, economic, and societal implications, ultimately shaping a nuanced, forward‑looking debate on how AI can be made accessible, responsible, and beneficial at scale.

Follow-up Questions
How can we keep AI models open‑weight while mitigating security risks such as malicious fine‑tuning?
Balancing openness for innovation with safety is critical for responsible AI deployment and for preventing misuse of powerful models.
Speaker: Rahul Matthan
What is the importance of open‑source/open‑weight models for the AI ecosystem, and how can we sustain openness given commercial pressures?
Open models drive community innovation, but large‑scale funding models tend to close them; understanding how to preserve openness is essential for a democratized AI future.
Speaker: Rahul Matthan
What evidence supports the claim that AI will accelerate cyber‑attacks?
Concrete evidence is needed to shape security policies, industry defenses, and regulatory responses to emerging AI‑enabled threats.
Speaker: Rahul Matthan (to Matthew Prince)
What is the business model for Indian data‑collection companies – are they feeding data back to US AI firms or negotiating different terms?
Clarifying data flows and monetisation models is vital for data sovereignty, economic benefit for India, and fair compensation for local data assets.
Speaker: Rahul Matthan (to Rajan Anandan)
What is the idea behind “pay‑to‑crawl” or an AI audit as a mechanism for democratising AI?
Access to web data underpins AI training; a transparent crawl‑payment or audit system could level the playing field between dominant search engines and other AI developers.
Speaker: Rahul Matthan (to Matthew Prince)
How can AI be made more trustworthy – through explainability, deterministic behaviour, or other pathways?
Trustworthiness is a prerequisite for widespread adoption, regulatory approval, and user confidence in AI systems.
Speaker: Audience (directed to Matthew Prince)
How can we ensure creator‑based compensation and attribution when AI consumes internet content without giving credit?
Fair remuneration for content creators addresses ethical, legal, and economic concerns as AI models increasingly rely on existing media.
Speaker: Audience (directed to Matthew Prince)
Where does India stand in venture‑capital investment for consumer AI compared with US benchmarks (e.g., Y Combinator, a16z), and how can we close the gap?
Adequate funding is essential for Indian startups to compete globally and to scale innovative consumer AI solutions.
Speaker: Audience (directed to Rajan Anandan)
How can inference costs for voice AI be reduced to a few paisa per minute so that it becomes affordable for billions of Indians?
High inference costs limit adoption; lowering them is key to achieving mass‑scale AI‑driven services in India.
Speaker: Rahul Matthan (to Rajan Anandan)
What steps are needed for India to build a sovereign AI stack (chips, compute, data) and reduce dependence on foreign technology?
A sovereign stack enhances national security, economic independence, and the ability to tailor AI solutions to local needs.
Speaker: Rajan Anandan
What new internet business model will emerge to compensate content creators in an AI‑driven world where traditional traffic‑based monetisation erodes?
Understanding the next revenue paradigm is crucial for sustaining media, journalism, and creative industries as AI changes content consumption.
Speaker: Matthew Prince
How does Google’s indexing advantage affect AI competition, and what regulatory or technical measures could level the field?
If a few firms control the majority of web data, they gain an outsized AI advantage; addressing this is important for fair competition.
Speaker: Matthew Prince
Should AI be regulated similarly to nuclear technology (e.g., an IAEA‑style body), and what would be the implications?
Exploring a high‑level regulatory framework could help manage existential risks while enabling safe development.
Speaker: Rahul Matthan (referencing earlier remarks)
What further innovation is needed around data collection and data‑as‑a‑service companies in India to leverage the country’s data advantage?
Developing a robust data‑industry can fuel domain‑specific AI models and reduce reliance on external data providers.
Speaker: Rajan Anandan
What smart regulatory approaches are required for data usage, sharing, and ownership in the Indian AI ecosystem?
Effective regulation can protect privacy, encourage innovation, and ensure that data benefits the Indian economy.
Speaker: Rajan Anandan
What research is needed into post‑transformer AI architectures that could be more efficient than current models?
Current large language models are compute‑inefficient; new architectures could lower costs and broaden accessibility.
Speaker: Rajan Anandan
What research is needed to lower compute and inference costs (e.g., memory, chip design) for AI workloads in India?
Cost reductions are essential for scaling AI services to a massive user base and for maintaining competitiveness.
Speaker: Rajan Anandan
What research is needed to understand the impact of AI on internet traffic patterns and the viability of existing ad‑based revenue models?
AI changes how content is accessed; studying these shifts will inform new sustainable business models for the web.
Speaker: Matthew Prince
What security implications arise from open‑weight models, and how can they be mitigated?
Open models can be repurposed for malicious ends; identifying safeguards is vital for safe open‑source AI development.
Speaker: Rahul Matthan
How can AI be leveraged defensively to stay ahead of cyber threats, and what research is required to optimise this?
Using AI for proactive security can counteract AI‑enabled attacks; research is needed to maximise effectiveness and reduce false positives.
Speaker: Matthew Prince

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.