Open Internet, Inclusive AI: Unlocking Innovation for All
20 Feb 2026 16:00h - 17:00h
Session at a glance
Summary
This discussion focused on democratizing artificial intelligence technology and reducing the dominance of a few major companies concentrated in Silicon Valley. Matthew Prince, CEO of CloudFlare, argued that AI’s current high costs stem from expensive NVIDIA chips and scarce specialized talent, but predicted these barriers will diminish as more people enter the field and chip competition increases. He forecasted that frontier AI models could be built for $10 million or less within five years, making the technology more accessible globally.
Rajan Anandan from Peak15 Partners highlighted India’s approach to AI development, emphasizing that the country doesn’t need to compete in building AGI but rather focus on creating efficient, low-cost models serving India’s 1.4 billion population. He pointed to successful Indian companies like Sarvam that have developed competitive models in local languages at significantly lower costs than global alternatives. Anandan noted that India has launched 12 large language model initiatives and is building a sovereign AI stack spanning chips, compute infrastructure, and applications.
The conversation addressed the importance of open-weight AI models for democratization, with Prince suggesting that claims about AI dangers may be strategically motivated to maintain competitive advantages through regulatory capture. Both speakers emphasized that innovation often comes from resource-constrained environments, citing DeepSeek’s breakthrough in efficient AI processing as an example. They discussed the need for new internet business models as AI disrupts traditional content monetization, drawing parallels to how the music industry transformed after digital disruption. The discussion concluded with optimism about India’s potential to lead in consumer AI applications and the broader democratization of AI technology globally.
Key points
Major Discussion Points:
– Democratization of AI and breaking the monopoly of a few companies: The conversation centers on Matthew Prince’s vision that AI technology shouldn’t be controlled by “a handful of companies in the same postal code.” Both speakers discuss how constraints and resource limitations can actually drive innovation, citing examples like DeepSeek’s efficient pruning algorithms and India’s approach to building specialized, cost-effective models rather than pursuing AGI.
– Open source vs. closed AI models and the economics behind the shift: The discussion explores how the massive investments (hundreds of billions to trillions of dollars) in AI development are driving companies away from open-source models toward closed, proprietary systems. The speakers debate whether AI safety concerns are genuine or strategic moves for regulatory capture to maintain competitive advantages.
– India’s unique AI strategy and competitive positioning: Rajan Anandan outlines India’s approach of focusing on highly performant, low-cost models (30-200 billion parameters) rather than competing in the trillion-parameter AGI race. He emphasizes India’s strengths in building sovereign AI capabilities across the entire stack, from chips to applications, with particular success in voice AI and local language models.
– The transformation of internet business models due to AI: Matthew Prince discusses how AI is disrupting the traditional internet economy of “create content, drive traffic, sell ads/subscriptions.” He proposes new compensation models for content creators, drawing parallels to how the music industry transformed from an $8 billion industry to one where Spotify alone pays $12 billion annually to musicians.
– AI security, trustworthiness, and data sovereignty: The conversation addresses cybersecurity challenges posed by AI-powered attacks while arguing that AI will ultimately make systems more secure. They also discuss the importance of data sovereignty for countries like India and the need for fair compensation models for content creators whose data trains AI systems.
Overall Purpose:
The discussion aims to explore pathways for democratizing AI technology and reducing dependence on a small number of dominant companies, with particular focus on how countries like India can build competitive AI capabilities through innovative approaches, resource constraints, and strategic positioning in the global AI landscape.
Overall Tone:
The tone is optimistic and forward-looking throughout, with both speakers expressing confidence in alternative approaches to AI development. While they acknowledge challenges and potential risks, the conversation maintains an entrepreneurial and solution-oriented perspective. The speakers demonstrate mutual respect and build upon each other’s points, creating a collaborative rather than confrontational dynamic. The tone becomes particularly enthusiastic when discussing specific examples of innovation and success stories from India’s AI ecosystem.
Speakers
Speakers from the provided list:
– Announcer: Event host/moderator introducing the speakers and session
– Rajan Anandan: Managing Director of Peak15 Partners (formerly Sequoia), founder of Sequoia Capital India in Southeast Asia, technology leader and investor focusing on transformative technology-led businesses in India’s startup and digital ecosystem
– Matthew Prince: Co-founder and CEO of CloudFlare, World Economic Forum Technology Pioneer, Council on Foreign Relations member, co-creator of Project Honeypot (largest community tracking online fraud and abuse), degrees from Harvard, Chicago, and Trinity College
– Rahul Matthan: Board member and partner in TriLegal’s Bangalore office, leads their technology, media, and telecom practice, extensive experience in high-value TMT transactions and regulatory matters for telecom, internet and data service providers
– Audience: Multiple audience members asking questions during the Q&A session
Additional speakers:
None – all speakers in the transcript were included in the provided speaker list.
Full session report
This comprehensive discussion between Matthew Prince (CEO of CloudFlare), Rajan Anandan (Managing Director of Peak15 Partners), and moderator Rahul Matthan explored the critical challenge of democratising artificial intelligence technology and breaking the current concentration of AI capabilities amongst a handful of companies in Silicon Valley. The conversation provided both strategic insights and practical examples of how alternative approaches to AI development are already succeeding, particularly in India.
The Current State of AI Concentration and Barriers
Matthew Prince opened by diagnosing why AI remains expensive and concentrated today. He identified two primary barriers: the dominance of NVIDIA chips that were originally designed for gaming rather than AI applications, and the scarcity of specialised AI talent. Prince noted the irony that NVIDIA chips evolved from powering gaming consoles to mining Bitcoin and then to AI applications, arguing that purpose-built AI chips would be designed quite differently. The talent shortage stems from AI’s historical reputation as a field of unfulfilled promises across previous decades, which led to reduced investment in AI education until the recent breakthrough.
However, Prince presented compelling evidence that these barriers are already eroding. Computer science programmes worldwide have experienced dramatic growth in just two years, with AI theory courses experiencing unprecedented demand. Universities that previously had limited AI programmes are now rapidly expanding them. This educational shift, combined with the inevitable competition that follows NVIDIA’s transformation from a gaming company to one of the world’s most valuable companies, suggests that both talent and chip costs will decrease significantly.
Prince made a bold prediction: “In five years, you’ll be able to build a frontier-like model within a specialty for $10 million or less.” This represents a dramatic reduction from current costs and would fundamentally alter the AI landscape by making advanced capabilities accessible to a much broader range of organisations and countries.
India’s Strategic Approach to AI Development
Rajan Anandan provided a compelling counter-narrative to the prevailing assumption that countries must compete directly with trillion-parameter models to remain relevant in AI. He argued that “AGI is not the thing that we need” for India, emphasising instead the goal of uplifting 1.4 billion Indians through highly performant, extremely low-cost models.
Anandan presented concrete evidence of India’s success with this approach, highlighting multiple companies achieving breakthrough results. The scope of India’s AI initiative is substantial: twelve large language model projects comprising eleven companies plus Bharat GPT from IIT Bombay, with expectations that this number will grow to fifteen or twenty very quickly. Importantly, he clarified that despite media characterisations, these are genuinely large language models—anything above 30 billion parameters qualifies as such, putting India firmly in the LLM race.
In voice AI specifically, Indian companies have achieved superior performance in both speech-to-text and text-to-speech whilst significantly undercutting global leaders on cost. Current human voice services in India cost 5-20 rupees per minute, whilst AI voice services have reached 3 rupees per minute and could potentially reach 1 rupee per minute with current technology. However, to serve India’s full population, costs must decrease further to 5-10 paisa per minute.
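The quoted figures imply a specific further cost reduction. As a rough, illustrative sketch using only the numbers cited in the session (the variable names are mine, not the speaker's):

```python
# Illustrative arithmetic from the session's quoted voice AI prices (INR).
# Human voice services: 5-20 rupees/min; AI today: ~3 rupees/min;
# stated target for full population scale: 5-10 paisa/min (0.05-0.10 INR).

human_cost_per_min = (5.0, 20.0)    # INR/min, quoted human-service range
ai_cost_per_min = 3.0               # INR/min, AI voice services today
target_cost_per_min = (0.05, 0.10)  # INR/min, i.e. 5-10 paisa

# Reduction already achieved versus the cheapest human service
achieved = human_cost_per_min[0] / ai_cost_per_min

# Further reduction needed to reach the stated target band
needed_low = ai_cost_per_min / target_cost_per_min[1]   # vs 10 paisa
needed_high = ai_cost_per_min / target_cost_per_min[0]  # vs 5 paisa

print(f"AI is already ~{achieved:.1f}x cheaper than the lowest human rate")
print(f"Hitting 5-10 paisa/min requires a further "
      f"{needed_low:.0f}x-{needed_high:.0f}x cost drop")
```

In other words, the remaining gap to mass-market pricing is roughly another thirty- to sixty-fold reduction, an order of magnitude beyond what has been achieved so far.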
India’s strategy encompasses the entire technology stack. At the semiconductor level, despite having no semiconductor startups four years ago, India now hosts 35-40 such companies spanning from low-power chips to GPUs and memory solutions. Anandan announced recent investments in Agrani (a GPU company) and C2I (focused on memory), illustrating the breadth of India’s sovereign technology development.
The Economics and Politics of Open Source AI
The discussion revealed sophisticated understanding of the tensions surrounding open-source AI development. Prince offered a provocative economic explanation for the shift away from open models, suggesting that companies investing hundreds of billions of dollars have strong incentives to restrict competition. He argued that AI safety rhetoric may serve as a form of regulatory capture, noting that he had never seen another industry advocate so strongly for its own regulation.
This analysis suggests that doomsday scenarios about AI risks may be strategically motivated to justify regulations that favour incumbent players. Prince advocated for treating AI systems more like humans than machines, recommending criminal codes rather than engineering standards for regulation.
Anandan acknowledged the economic reality that makes open-source challenging: “if you invest a trillion dollars, you can’t give it away for free. It’s as simple as that.” However, he hinted at significant developments, mentioning upcoming announcements from major companies regarding their commitment to open source.
The speakers agreed that open-source approaches remain critical for ecosystem development, but recognised that alternative pathways must emerge. The current investment levels make traditional open-source models economically challenging for frontier development, necessitating new approaches to maintaining accessibility whilst enabling continued innovation.
Innovation Through Constraint: Learning from DeepSeek
A central theme throughout the discussion was how resource constraints can drive superior innovation. Prince highlighted DeepSeek’s breakthrough as a perfect example, explaining that the Chinese company developed significant efficiency improvements precisely because they lacked access to unlimited computing resources. DeepSeek’s innovations in model efficiency allowed them to deliver AI capabilities much more cost-effectively than resource-rich competitors.
Prince argued that well-funded US AI companies may be “blinded” to efficiency innovations because they can simply purchase more computing power rather than optimising their approaches. This dynamic suggests that companies operating under constraints may ultimately develop more sustainable and scalable AI solutions. Prince expressed that he wished “DeepSeek had been an Indian company, not a Chinese company,” seeing similar potential for constraint-driven innovation in India’s AI ecosystem.
This analysis validates India’s approach of building specialised, efficient models rather than attempting to match the massive investments of US companies. The constraint-driven innovation thesis suggests that India’s resource limitations may actually prove advantageous in developing more efficient AI architectures and applications.
The Transformation of Internet Business Models
Prince provided detailed analysis of how AI is fundamentally disrupting the traditional internet economy. The historical model—create content, drive traffic, sell subscriptions or advertisements—is breaking down as AI systems consume content without driving traffic back to creators. He presented stark statistics illustrating this shift: ten years ago, Google sent one unique visitor for every two pages scraped; today, the ratios are dramatically different, with some AI companies extracting hundreds of thousands of pages for every visitor they send back.
This disruption threatens the fundamental economics of content creation, as creators lose the traffic necessary to monetise their work through traditional means. However, Prince drew an optimistic parallel to the music industry’s transformation. The music industry, once devastated by piracy, ultimately recovered through new business models, with platforms like Spotify now paying billions annually to musicians.
Prince argued that a similar transformation must occur for internet content, with new business models emerging that compensate creators based on quality rather than traffic. This shift could actually improve societal outcomes by moving away from engagement-driven content towards content that genuinely contributes to human knowledge.
The practical implementation requires creating scarcity around content access. Prince provided evidence that blocking AI crawlers works, citing successful negotiations between publishers and AI companies. Companies that have blocked AI access have secured more favourable licensing deals than those that allowed free access.
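One common mechanism behind the blocking Prince describes is the Robots Exclusion Protocol. As a minimal illustration (not taken from the session), a publisher's robots.txt can disallow documented AI crawler user agents such as GPTBot (OpenAI) and CCBot (Common Crawl) while leaving ordinary crawlers untouched:

```text
# Sketch of a robots.txt that opts out of common AI training crawlers
# while permitting everything else.
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: *
Allow: /
```

Compliance with robots.txt is voluntary on the crawler's part, which is why network-level enforcement, of the kind CDN providers offer, complements it in practice.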
AI Security: Balanced Perspectives on Risks and Benefits
The security implications of AI development received nuanced treatment, with Prince acknowledging both immediate risks and ultimate benefits. In the short term, AI will enable more sophisticated attacks, including highly convincing phishing scams and more effective exploitation of security vulnerabilities. Prince described scenarios where AI-generated content could be used for fraud and deception.
However, Prince argued that the long-term security outlook is positive because “the good guys will always have more data than the bad guys do.” Security companies are incorporating AI into their defence systems, with some already identifying novel threats that no human has previously recognised.
The key adaptation required is moving away from appearance-based authentication. Prince recommended practical measures like establishing family passwords to protect against AI-generated impersonation attacks. More broadly, businesses must abandon verification methods that rely on how someone looks or sounds.
Prince’s prediction that “in 10 years, we are more secure online than we are today” reflects confidence that defensive AI applications will ultimately outpace offensive uses, provided that regulatory frameworks don’t prevent security companies from effectively deploying AI technologies.
Consumer AI Applications and Market Opportunities
Anandan provided an optimistic assessment of India’s position in consumer AI applications, revealing that “India today has more consumer AI startups than the US.” This advantage stems from India’s massive user base—900 million internet users—combined with cost constraints that drive innovation.
The consumer AI opportunity spans multiple sectors where AI can dramatically reduce costs and increase accessibility. In education, AI services can offer comprehensive support at extremely low costs, making quality education accessible to populations previously excluded by price. Voice AI represents a particularly promising application area, with the potential to serve populations that may not be comfortable with text-based interfaces.
The consumer AI opportunity extends beyond cost reduction to fundamental accessibility improvements. Achieving scale in India requires image and video interfaces, highly localised language support, and ultra-low costs—areas where Indian companies have natural advantages due to their deep understanding of local markets and constraints.
Data Sovereignty and Strategic Advantages
The discussion addressed critical questions about data ownership and competitive advantages in the AI era. Anandan noted that while India generates vast amounts of data, the country must be more strategic about how this data is collected, processed, and monetised.
However, Anandan highlighted positive examples of Indian companies leveraging proprietary data effectively. Companies with domain-specific data advantages are building specialised AI models that compete effectively in both domestic and international markets, demonstrating how data sovereignty can create competitive moats.
The broader challenge involves establishing frameworks that prevent exploitative data extraction whilst supporting legitimate AI development and international collaboration. This requires both technical measures and regulatory frameworks that ensure fair compensation for data usage.
Regulatory Approaches and Future Outlook
The speakers advocated for pragmatic regulatory approaches that avoid stifling innovation whilst addressing legitimate concerns. Prince argued for treating AI systems based on their outputs and impacts rather than their internal mechanisms, recognising that AI systems are inherently non-deterministic.
The discussion emphasised the importance of avoiding regulatory capture, where incumbent companies use safety concerns to justify regulations that prevent competition. The goal should be enabling innovation and competition whilst addressing genuine risks.
Strategic Implications and Conclusions
The conversation concluded with optimism about AI democratisation and India’s competitive position. Both speakers agreed that current barriers to AI development are temporary and that alternative approaches to frontier model development are not only viable but potentially superior. The combination of decreasing costs, increasing talent availability, and constraint-driven innovation suggests a more distributed and competitive AI landscape.
For India specifically, the strategy of building specialised, efficient models for local needs whilst developing sovereign capabilities across the technology stack appears to be succeeding. The country’s advantages in consumer applications, combined with its growing semiconductor and infrastructure capabilities, position it well for the next phase of AI development.
The broader implications extend beyond individual countries to the global AI ecosystem. The success of constraint-driven innovation demonstrates that breakthrough efficiency improvements can emerge from resource limitations, suggesting a future where AI capabilities are more widely distributed and where different regions develop AI solutions optimised for their specific needs.
The transformation of internet business models presents both challenges and opportunities. While current disruption threatens traditional content monetisation, the potential emergence of quality-based compensation systems could create a healthier information ecosystem that better rewards valuable content creation.
Overall, the discussion presented a compelling alternative narrative to AI concentration scenarios, suggesting that democratisation is not only possible but already underway through innovative approaches, strategic specialisation, and the natural dynamics of technological competition and diffusion. The key insight is that success in AI may not require matching the massive investments of Silicon Valley companies, but rather developing more efficient, targeted solutions that serve specific markets and needs effectively.
Session transcript
Very few individuals have done more to bring revolutionary and transformative technology into the hands of millions than Matthew and Rajan. Matthew Prince is the co-founder and CEO of CloudFlare, a World Economic Forum Technology Pioneer, and a Council on Foreign Relations member. He has degrees from Harvard, Chicago, and Trinity College, and co-created Project Honeypot, the largest community tracking online fraud and abuse. Matthew’s founding mission for CloudFlare was to help build a better Internet, a goal that has become increasingly critical in the age of artificial intelligence. Rajan Anandan is one of India’s most influential technology leaders and investors, currently serving as Managing Director of Peak15 Partners, formerly Sequoia. He is the founder of Sequoia Capital India in Southeast Asia, where he focuses on backing founders building transformative, technology-led businesses.
With decades of experience across entrepreneurship, investing, and global technology leadership, Rajan has played a pivotal role in shaping India’s startup and digital ecosystem. Orchestrating this conversation is Rahul Matthan, who brings the perfect blend of legal insight, policy depth, and the ability to ask the questions everyone else is thinking. Rahul is a board member and partner in TriLegal’s Bangalore office and leads their technology, media, and telecom practice. He has extensive experience advising on high-value TMT transactions in the country. He has worked with companies across sectors, from telecom majors to Internet and data service providers, offering advice on regulatory matters and operational issues. So please join me in welcoming three awesome leaders on stage, and with that, the stage is yours.
Thanks, Rahul.
And since I haven’t worked with you, I’m going to square that circle. Matthew, I just heard your keynote up in the big 3,000-seater hall, and you ended with a very powerful statement, which is that this wonderful AI technology should not be built by a handful of companies in the same postal code. And that, in many ways, seems to be the driving motivation for having this discussion. But it’s easier said than done in that AI is a very big and complicated stack. And a lot of that stack actually involves complex hardware. And it’s hard, really, to move that hardware around the Internet. So if we are to democratize AI and if we are to come up with the infrastructure construct that would democratize AI, what would that look like?
And what is your idea, your vision for how this would be, if not now, but sometime soon?
Yeah, so let’s talk first about why AI is hard and expensive today. So the first thing is AI requires lots and lots and lots of chips, largely produced today by one manufacturer, NVIDIA, that use a ton of power and are very, very expensive. They were never built to do this. If we’re totally honest, the NVIDIA chips were built to power gaming consoles, right? And then for a while to mine Bitcoin and then magically to create a superintelligence. But if you had started with, let’s create the superintelligence, you would have designed those chips somewhat differently today.
That’s challenge one that keeps AI very, very hard. Challenge two is that it requires a real specialized set of knowledge. There’s a very small set of people in the world that know how to build these models and how to run these systems. And so you have to ask, why is that not something where everyone knows it? If you had known that you could specialize in this in school and literally make $100 million a year, we would all have studied AI, right? And yet if you go back just five years ago, the people who were studying AI were kind of the weirdos. Why was that the case? Well, because AI was one of these fields that kind of had promise in the 70s and had promise in the 80s and had promise in the 90s.
And then everyone was kind of like, you know what? We’re tired of this. And so the AI professor was kind of shunted off to the side. And so if those are the things that today make AI extremely expensive, the question is, are those things permanent states or are they going to change? Well, we can measure one of them already. Already, if you look at enrollment in computer science programs across the world, it is up dramatically, even though supposedly there’s no future for computer scientists, in just the last two years. And then secondly, the enrollment in specifically AI theory courses is off the charts. Every university that used to sort of shutter their course is now standing it up and building it like crazy. And so I think that over time, we’re gonna have more and more people who are able to do this. And so having to pay enormous salaries for those people, that’s probably not going to be the future. On the chip side, you know, if you have literally a company going from being an obscure gaming company to the most valuable company in the world, obviously, a whole bunch of people are going to chase after that. And if you look at the history of silicon, anytime there has been a silicon shortage, it turns into a silicon glut over time. And with GPUs, it’s kind of been hit after hit after hit. I think what we’re seeing, at least, is that startups, as well as incumbent players, as well as the hyperscalers and other players, are getting involved.
There are so many people who are making this this silicon, that no matter what the price per unit of work done is going to come down. The other thing that I think is encouraging is that if we look at the actual AI models themselves, it doesn’t appear that this is necessarily a one company is running away with it. It’s sort of like Google gets a lead, and then Anthropic passes them, and then OpenAI passes them, and then someone else passes them, and then Google does it again, and it keeps leapfrogging itself. That, to me, suggests that the actual model making is more likely in a steady state in the future to be something like a commodity.
And if that’s the case, if the cost of creating the models is going to come down, if the models themselves are more commodities, I think that we can’t assume that the literally hundreds of billions, if not trillions, of dollars that are going into building the leading AI companies today won’t come crashing down. And my prediction would be that you’ll be able to build models that will be on the frontier. They’ll be more specialized, but they’ll be on the frontier, for tens of millions of dollars in the not-so-distant future. I’ll put a date out there. In five years, you’ll be able to build a frontier-like model within a specialty for $10 million or less.
Rajan, about a year ago, one of these companies from one of these postal codes was here, and you asked a question, you know, what will it take for India to compete? And you were told it’s hopeless, don’t compete. And yet at the summit, we’ve come out with a model that’s, by all accounts (I haven’t yet played with it), competitive. So what Matthew is saying seems to be working out, but he’s putting a five-year timeline. I would argue that perhaps we could be more aggressive with that timeline. So what is your view on this as someone who is actually, you know, in India, working with some of these really smart people who are working under constraints?
but are yet putting out some fairly impressive models, interesting use cases. What’s life like at the other end of the absolute front? I mean, what people call the absolute frontier of these models. What is life at the other end of it where there are many different applications, many different use cases, and many different types of models? Yeah.
Firstly, hi, everyone. Great to have all of you here. So I think the first thing is, look, Matthew, I don’t know whether you’ve been following a company called Sarvam. I think, firstly, it’s important that India is not trying to get to AGI. With 1.4 billion humans and a million Indians turning 18 every month, AGI is not the thing that we need. Our focus really is to uplift 1.4 billion Indians. And I think our ecosystem, our innovators, our government, our investors, our technologists, our engineers are all of that view. But to really do that, you don’t need trillion or five trillion parameter models.
What you need are highly performant, extremely low cost models that are a billion parameters to maybe 100 or 200 billion parameters. And actually, for the amount that you mentioned, we have launched 30 and 100 billion parameter models that are SOTA in Indic languages. In fact, I don’t know whether you know this, Matthew, but if you look at voice AI in Indic languages, Sarvam today is both SOTA in speech-to-text and text-to-speech and is a fraction of the cost of the global models, including the global leader in voice AI. So I think what you’re going to see is – and by the way, the reason Sarvam is able to do that is because of tremendous support from the government, but it’s not just Sarvam.
There are 12 large and small language models. By the way, just a clarification, because when Sarvam launched, I think somebody said India is really good at small language models. The last time I checked ChatGPT and Gemini, anything above 30 billion is actually a large language model. So we are actually in the large language model race. Just a clarification there. So basically, we have 12 companies, actually 11 companies and BharatGPT, which is part of IIT Bombay, that are building these models. I think this number goes to 15, 20 very, very quickly. And I would say actually well within this year, Matthew, in many, many things that India needs, right?
We need to uplift 100 million farmers, and for that, we need to build basically models that work on feature phones, right, in local languages. That’s done, right? That was actually launched on Wednesday and so on. So I think that’s the first thing. And now when you ask this other question of, look, true frontier, which is, you know, the frontier today, let’s call it a few trillion, maybe it’ll go to 5 trillion, 10 trillion. Part of it is also the definition of frontier and, more importantly, what’s the objective, right? If you define the frontier that way, Indians are not going to be able to do it with this set of architectures.
But what we are going to do, Matthew, to the point that you made, is, look, LLMs are the most inefficient compute machines ever. I mean, these are not efficient architectures, right? We believe that this is the beginning, not the end. We believe there are going to be many more to come after transformers. And I think that is where the bets are going to be made. In fact, you know, Yann LeCun is here, and he sort of said that, look, this is not going to lead us to AGI. We don’t really want to get to AGI, but even if you just look at where AI is going to go.
So I’d say at the model layer this week, India entered the race, but we are going to play this race differently – i.e., we’re not going to try to build trillion-parameter models, and we’ll do it at super, super low cost. Now, coming to the chip layer, that’s harder for India, but I don’t know whether you know this, Matthew: 20% of the world’s semiconductor designers are in India. Four years ago, we had no semiconductor startups. Today, we have about 35 to 40. They span the spectrum from low-power, call it 20-nanometer chips – these are all SoCs – all the way up through – actually, two weeks ago, we announced an investment in a GPU company.
It’s a very seasoned team from Intel and AMD, a company called Agrani. Monday this week, we announced an investment in a company called C2I, which is going to focus on memory. So we’ll see it even at the chip layer, because what is very clear to us, and I think to many in India, is we have to have the sovereign stack. Our friends are no longer friends, or sometimes they’re friends, and sometimes they’re not. And as India, we just need to have the sovereign stack. We, of course, are going to have alliances – and today, I think a very important alliance was announced with Paxilica and so on – but we’ve got to actually have a sovereign stack.
So whether it’s the chip layer or the compute layer – I think it’s great that both Adani and Reliance announced $100 billion investments into AI infra this week – or the model layer. Where we have excelled is the application layer. And what I can tell you is, at the application layer – I joined Google in 2011. At the time, India had 10 million connected smartphone users, no venture capital, and no unicorns. Today, we have 900 million smartphone users. We don’t have enough capital, but we have enough venture capital, and we have 125 unicorns. At the application layer, I can confidently say, whether it’s consumer or enterprise, Indian companies will win.
Because the traditional formats of consumer consumption – call it search, or now Gemini, ChatGPT, et cetera – will, in my view, probably scale to 200 or 300 million. They’re not going to scale to a billion Indians. To scale to a billion Indians, you’ve got to have image, you’ve got to have video, you’ve got to have highly local language, and it’s got to be ultra, ultra low cost. So I do think we have a shot. I think what you’ve just described, we shipped this week. I think by the end of the year, we’re going to ship 100x more, because, Matthew, what’s happened in India is we can’t do things the same way – we’re trying to do things differently.
Payments is an example, which I think the whole world knows about. And you’ll see that playing out in many other things. And I’ll end with this, Matthew: in 2015, India had two space tech…
And I would just say, don’t sell yourself short. India may not need AGI, but India may still build AGI. And I think the thing that might actually end up holding back the biggest AI companies is that they are so unconstrained by resources. If you look at the biggest innovation over the last two years that really drove AI forward, it wasn’t anything that Google or OpenAI or anyone else did. It was actually DeepSeek, and DeepSeek’s ability to say that within the constraints of the chips they had access to, they had two incredible innovations: they would prune the tree more efficiently, and they’d be able to process that pruned tree much more quickly.
I wish DeepSeek had been an Indian company, not a Chinese company. But I actually think those places with constraints – I would not be scared, if you’re an Indian AI company, by hearing the hundreds of billions of dollars that the big U.S. AI companies are pouring in. That seems like an asset, like an advantage that they have. But in some ways, it’s blinding them to what will be the real innovations that cause AI to become more efficient, that cause AI to become more scalable. And there is no way that the long-term solution to this is that you have to turn up a mothballed nuclear power plant.
So we’re going to get more efficient, and I would bet that that efficiency comes from places just like this.
So if I can push back on both of you – not that I disagree with any of this, but one of the things that DeepSeek did was come up with this reasoning model. I guess other people were working on it, but they did a really good job of doing reasoning really well and really powerfully. I mean, the real DeepSeek innovation was being able to say – you’ve got to build this giant tree if you’re building an AI model, and they were able to say, probabilistically, there’s a whole bunch of branches on the tree that we can ignore. Like there’s a bunch of things in your life that have happened to you that your brain is just really good at forgetting about, whereas there are a few salient moments that have formed who you are as a person.
What DeepSeek did was do a better job of pruning that. The big US AI models don’t have to do that, because they can just say, well, let’s just buy another H200, right? Let’s just keep throwing more money at the problem. Having the constraints and the specialization – in this case, the memory constraints – forced DeepSeek to come up with a better pruning algorithm, which allowed them to deliver AI at a much, much more efficient level. And I suspect Sarvam did something similar, because I spoke to Pratyush, and he said that one of the things the big guys kept asking when he told them was: how do you do this with 15 people?
And it’s certainly some of those constraints that are at work. I wanted to talk along similar lines about this idea of open source, open weights – perhaps let’s stick with open weights, because open source is a contested definition. In the early days, there was a lot more open-weight stuff coming out. Of late, that’s gone down. And the power of open-weight models – and perhaps open source is different – is that developers can actually tinker with the model and customize it to their use case. But increasingly, we’ve seen a sort of drop-off, other than the Chinese models, Kimi and Qwen, which are still open-weight.
I wanted to just discuss among the two of you, perhaps from different perspectives – maybe the use-case perspective and the whole internet-infrastructure perspective – how important open weights is. And some of the backroom chatter I’ve been getting is that as these models get more performant, it becomes increasingly dangerous to put out highly performant models as open weights, because of something that OpenAI called malicious fine-tuning: the fact that as these models become better, it’s easier to undo the fine-tuning guardrails that have been established so these models don’t do bad things. And so that’s why they won’t be released. Now, I know that the ecosystem needs open weights, because not everyone has the time and resources to do the training.
Someone perhaps just wants to take the pre-trained model and get it out. But I’m also hearing from the other side that open weights has this fundamental security challenge. And I know we’ll get to security separately, but just on open weights: what’s the way we thread the needle between these two things?
Well, okay, I’m going to tell a story. I don’t know that 100% of the story is right, but I think it adds up to something that approximates what’s right. Let’s imagine that over time, you are one of these major model makers – you’re OpenAI, you’re Anthropic, you’re Google. And you look at this and you say, huh, if we keep playing this out, then this is a commodity. And the only way that we win is if we restrict as many people from getting into the game as we possibly can. So how do you do that? Well, one of the best ways to do that is just to scare everyone.
Scare everyone that if everybody has this technology, the world is going to end. And so the next time you come across an AI doomer and they say, if everyone has this, the world is going to end, just keep pushing them. Just be like: okay, and then what happens? And then what happens? And then what happens? And basically, the scariest scenario is that these things can design very bad pathogens or other malicious biological vectors that could then get synthesized and spread around society. To which I say: well, then shouldn’t we be regulating the synthesizers, not regulating the technology that’s out there? But, again, it gets to be very hand-wavy.
But if you think about it as a strategy: if you believe that these ultimately are commodities, then what you want to do is actually regulate them, in order to make it so that yours is the only company that can be safely trusted to handle this. And I think that that is, again somewhat cynically, a lot of the explanation for why the people who are building these horribly dangerous, scary things keep telling you how horribly dangerous and scary they are. I’ve never seen another industry do that. You don’t see the automobile industry say, you know, this could plow through a crowd of people and be used in a mass murder event, right?
That just doesn’t make any sense. And so the only way that I can make sense of that world is if, from a business perspective, it’s actually trying to do some sort of regulatory capture. And so I pretty heavily discount what the risks are here. I tend to think that more open is going to win, and I tend to think that the Chinese approach right now is the smartest approach to take on what looks like this enormous money machine which the U.S. is creating. And so, as India thinks about how it’s going to regulate AI, I would be careful about listening to the AI doomers.
I would be especially careful about trying to regulate the output of what is fundamentally at least a pseudo-non-deterministic system. We have built machines that act like humans, and yet we think we can regulate them like machines. The better way to regulate them is actually more like humans. Look to the criminal code, not the engineering code, to figure out what that regulation should look like. And so I am very much pro-open. I think we should think about what these risks and dangers are; we should definitely be testing and looking for them. But I tend to think they are somewhat overblown. And if you want to understand why, I would argue it is because it’s a strategy to keep the people who are currently in the lead, in the lead going forward.
I think you may be absolutely right, because on that big stage that you were at just a short while ago – yesterday – there was a call by one of these companies for an IAEA for AI, that it should be regulated like nuclear technology. And the other example I keep giving is that at the turn of the last century, people were just walking around the streets and getting electrocuted, because electricity is highly dangerous. And yet we sit in this room, where literally the walls are buzzing with electricity, and we’re completely safe. And this is the nature of all technologies. But on the positive side, Rajan, of all the AI deployers that we have in India, a large number of them are relying on open source.
And if the open-source pipeline starts to diminish, where are they going to go? I mean, Sarvam can certainly deliver these models, but how important is it actually to the community of people that are building? AI and ChatGPT and all this is all well and good, but it’s really those applications – the voice applications – that people need. How dependent are they on open source? What can we do to continue to keep this open?
Look, firstly, as Matthew said, if you invest a trillion dollars, okay, you can’t give it away for free. It’s as simple as that. It’s just economics. So you can position it any which way you want, but fundamentally it’s about economics, right, and how you build a business, especially if you have to invest so much. Now look, open source is absolutely critical. I mean, Llama is the most recent example, right, where the reality is, if you’re going to launch the next state-of-the-art version of Llama, it’ll be closed, because otherwise how are they going to monetize this? Especially if they’re spending $80 billion, $100 billion.
Unless there are other ways to make money – and they don’t make money on this. Even for them, $100 billion a year is kind of a lot, you know. So anyway, coming back: look, it’s super important to the ecosystem. I don’t have the answer to that. I think in March, you’ll see one of the big companies make a big announcement – a massive, massive announcement – on their commitment to open source. But, you know, I think the only way you do this is there has to be a different path, okay, because if you have to invest hundreds of millions of dollars to build a new model, or billions, or tens of billions, that’s not really open.
You can’t keep those models open, right? So, to the first question that you asked and Matthew’s response: you really need to have a different way of doing this, right? Actually, by the way, if you look at voice AI, for instance, I’ll give you some data. India has very low labor costs. If you look at the human cost of voice today in India, it’s five rupees a minute to about 20 rupees a minute – five rupees a minute is the lowest you can get; Amex would probably be 40 or 50 rupees a minute. Today’s voice AI costs about three rupees a minute. So already – and that’s why you’re beginning to see voice AI really begin to take off in India, right?
But even with today’s SOTA models, you can get to about maybe a rupee. Now, at a rupee, you’re one-fifth to one-twentieth the cost of humans. So it’s going to really take off. But if you want to make voice the primary medium through which 1.4 billion Indians will access AI, that’s still too expensive, right? You’ve got to get it down to maybe five paisa or ten paisa. And that’s actually not about open source – it’s about compute, and it’s about the cost of inference. So if you ask me, open source is really, really important, but we have to find a way to get the cost of inference down.
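The cost arithmetic above can be made concrete. A minimal sketch, using only the speaker's rough per-minute figures from the talk (these are his estimates, not independently verified; 1 rupee = 100 paisa):

```python
# All figures in Indian rupees per minute, as quoted in the discussion.
human_cost_low = 5.0       # cheapest human voice agent
human_cost_high = 20.0     # typical upper end (premium support like Amex runs 40-50)
voice_ai_today = 3.0       # current voice AI cost
voice_ai_sota = 1.0        # achievable with today's SOTA models
target = 0.05              # 5 paisa: the stated target for a billion-user medium

# How much cheaper than humans is each stage?
ratio_today = human_cost_low / voice_ai_today       # vs. cheapest human, today
ratio_sota_low = human_cost_low / voice_ai_sota     # 5x at the low end
ratio_sota_high = human_cost_high / voice_ai_sota   # 20x at the high end
ratio_needed = voice_ai_sota / target               # further ~20x inference-cost drop still needed

print(f"Today: {ratio_today:.1f}x cheaper than the cheapest human agent")
print(f"With SOTA models: {ratio_sota_low:.0f}x to {ratio_sota_high:.0f}x cheaper")
print(f"Inference cost must still fall {ratio_needed:.0f}x to hit the 5-paisa target")
```

The point of the arithmetic: even after beating human cost by 5-20x, voice AI needs roughly another 20x reduction in inference cost before it can serve 1.4 billion users.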
Obviously, model size – all of these things matter, and we can talk about that as well. But the short answer is: look, it’s really important, but it is not clear to me how you do this, especially in the current game that we’re in. Because anybody that wants to be at the frontier, the way the frontier is defined today, actually has to go out and invest, right? And honestly, I don’t know how the Chinese are doing it, because it’s a bit opaque exactly how much they are investing. You’re right, it’s kind of a hedge fund.
Which is basically what DeepSeek is, and they have this on the side. Meta is the fascinating question here, because it took me a really long time to understand Meta’s strategy. Why are they doing all this VR? Why are they doing all of this AI? What they learned was the lesson that if you are caught on the wrong side of a platform shift, you then become beholden to some other platform. In the past they were on the web, and that was fine: social worked, and no one controlled the underlying platform. Then the platform shifted, and all of a sudden it was on mobile, and they were beholden to both Apple and Google. That put them on a back foot and really limited their business.
So they are desperate, whatever the next platform shift is, to stay in front of that platform shift. And for a while that looked like it might be VR – that’s less likely today, although never count these technologies out. The real next platform shift is almost certainly going to be AI. And so if you control the social graph, which is an unreplicable kind of asset that they have, they need to make sure that whatever the next platform is, they control it, or at least have an equal seat at the table with everyone else. And so they continue to invest in open source – and you’re like, why are they spending so much money to do this?
It’s to make sure that as the next platform shift happens, they aren’t in the same back-footed position that they were in with Apple and Google. That would be my analysis of Meta.
I’d like you to comment either way: is AI going to accelerate cyber attacks, because agentic swarms, et cetera, can do much more dangerous things? What’s the evidence that you have for this?
Yeah, so I think this is a long-term good news, short-term scary headline story. Let’s start with the short-term scary headlines. There are going to be a whole bunch of scary headlines of bad things that AI does. There will be a story about an Indian family who lost all their money because they wired it to some criminals who made it seem like their daughter had been kidnapped. I mean, we’re already seeing the level and sophistication of phishing scams go through the roof. And so the bad guys are going to use that to attack. The other thing that we’re seeing – so there was an example.
There was a company called Salesloft. It had a product called Drift, a piece of software that was connected into hundreds of thousands of Salesforce instances. Salesloft got breached by a Russian hacker. The Russian hacker didn’t understand how Salesforce worked, so they kind of fumbled around for a really long time. Had they just used AI – which is what we’re now seeing a lot of North Korean and Chinese hackers do – they would have been instantly knowledgeable on how to get as much information out of Salesforce as quickly as possible, and the breach could have been orders of magnitude worse. So those are the bad stories, and there’s going to be real hardship and real pain caused by them.
The counter to that is that folks like us – I was just with Nikesh from Palo Alto Networks; Jay from Zscaler is here – are all using AI in our own systems to make them smart. In fact, at Cloudflare – we would never have described ourselves this way, but the whole theory of the company was: let’s get as much Internet traffic flowing through a machine learning system as possible, to be able to predict where security threats are. In the same way that three years ago we all looked at ChatGPT and were like, whoa, that’s amazing – internally, about three years ago was the first time the system said, bloop, here’s a new threat that no human has ever identified before.
And that went from being something that happened once in the first 15 years of Cloudflare’s history to now happening on an incredibly regular basis, where the machine learning is able to win. And so I think the good news is that the good guys will always have more data than the bad guys do – with a caveat about regulation preventing us from using it for cybersecurity in various ways. But largely, we’re able to do that, and I think we will actually use AI to stay ahead of these threats. That’s what we’re seeing. It’s going to require some change in any part of your life where you are today relying on what someone looks like or what they sound like to verify who they are and give them access to anything secure or confidential.
That’s got to change. And so the simple thing that you should all do with your immediate family at your next holiday meal is decide on a family password. And that seems silly, but I guarantee you at some point, some hacker is going to call up and say, hey, your son or your dad or your grandmother or whatever needs money. And if you say, hey, what’s the family password? And they say, I don’t know, Aardvark – you’ll know that it’s a scam, right? So it’s a simple thing that you can do. And it’s going to be these simple things, which I think are going to get translated. And I think businesses have got to go away from: oh, the person looked right, so we let them in the door.
Like that can’t happen in the cyber world. And so we’re going to have to lock systems down. There are going to be some scary stories, but I would predict again that in 10 years, we are more secure online than we are today.
Rajan, I wanted to talk about data, because a lot of the conversation is around how the models that we have – Sarvam excepted – are models that are largely built in the West and therefore are Western systems.
And I know there’s something of a land grab going on for the data. So, as far as the data companies in India are concerned – the companies that are actually hoovering up the data, annotating it, making it ready – what’s the business model for them? Are they feeding this all back to that one pin code in the US, or what’s the negotiating part? And I ask this question because we have a lot of data in this country, but I know that there are countries in Africa where the deal is already done and the data is out the door. There are deals, I know, for 25 years’ worth of medical data out of Africa in exchange for setting up an EHR system, because that’s a deal they’ve done.
And I was wondering whether we are thinking about this in a nuanced way. And then, Matthew, I know you’ve got some ideas on crawling – I’ll come to you on that. What is actually happening on the ground with this?
I think the first thing is, we don’t have as many. I mean, there are initiatives or NGOs like AI4Bharat that are collecting data, but if you look at the leading global data companies, they’re not Indian, right? India probably has a handful of startups that are actually in the, quote unquote, business of data for AI. So first and foremost, because these companies are global, you’re absolutely right: all the data that Indians are generating is actually going to those few handful of companies. Now, that being said, look, firstly, for Indian companies to actually keep the data here, we have to have model companies, right? Otherwise you have to sell it, because if you’re in the data business, you have to sell it to somebody. But I think the benefit we have is, honestly, we’ve only collected probably less than one percent of the data we actually need if you really want to get to AGI – like if you look at physical intelligence and things like that.
And India really has a competitive advantage there. In fact, we’ve been looking for startups we could find and fund that would basically do all kinds of data collection for robotics and things like that. So that’s number one. Number two, we’re also beginning to see companies that are leveraging their proprietary data in a very, very interesting way. I’ll just give you one example. We have a company called Cloud Physician. It’s an Indian startup. They run these remote ICUs in tier-two, tier-three towns in India; they’ve been doing that for four or five years. They’ve got this extraordinary amount of proprietary data that they’ve now used to actually build about a dozen or so specialized models in healthcare.
And now they’re actually taking those models to market in the U.S. And the kind of data that they have, which they’ve collected over four or five years for what was a healthcare delivery business, if you will, has been very valuable. So in our portfolio, we have a handful of companies in different spaces that are using data as an advantage to actually build a final proposition, which is usually tied to some sort of domain model or something like that. But I do think we probably need, firstly, a lot more innovation around this – I’m surprised we don’t have more companies that are actually trying to build businesses around India’s data advantage.
And second, I do think we need to have some smart regulation. I don’t know where the regulatory framework is on data; I think that’s going to be super, super important. I do know that AI4Bharat, et cetera, are being quite thoughtful about who they share data with, which is great. So, yeah, that’s sort of where it is. But it’s a huge opportunity for India. My real view is, look, basically all the data on the Internet is accessible to everybody – you just need literally large amounts of capital. Most of the data that we need to get to AGI we don’t have yet.
And we have 1.4 billion people to create it.
But, Matthew, you wanted to intervene in that whole thing. You have something called – maybe you didn’t call it this, maybe the media started to call it – pay-to-crawl, and you may have something more sophisticated, like an AI audit or something like that. What’s the idea behind that? Because that’s also part of this democratization of AI, as I see it.
So firstly, to correct a little bit of a misconception: you can have all the money in the world and still not be able to crawl the Internet. How much less of the Internet does Microsoft see than Google? Microsoft Bing has thrown a ton of money at it. For every six pages that Google sees, Microsoft sees one. OpenAI knows how much of an advantage that is. For every 3.5 pages that Google sees, OpenAI sees one. But that means that two-thirds of the Internet is hidden to the most sophisticated model. For Anthropic, it’s almost 10 to 1 in terms of what’s there. And so if you want to ask why Gemini just leapfrogged OpenAI, I don’t think it’s the chips.
I don’t think it’s the researchers. I actually think it’s the data. And I think getting access to data is important. And so if we want to have a level playing field, there’s a real risk that Google is going to leverage the monopoly position they had indexing the Internet yesterday in order to win in the AI market tomorrow. And that’s something that we’re really concerned about. And I think we have to do one of two things. We either have to bring Google down and say that they have to play by the same rules as all the other AI companies. That’s something that you could do from a regulatory perspective. And that’s something that the U.K. is looking into.
Canada is looking into it. Australia is looking into it. The alternative is: how do we give all the other AI companies the same access that Google has? And that’s, I think, an opportunity to also solve some of the democratization challenges out there. One of the things I really worry about is that AI is going to disrupt the fundamental internet business model. The fundamental internet business model was: create content, drive traffic, and then sell things, subscriptions, or ads. That was it. I don’t care if you’re B2B or B2C, I don’t care if you’re a media company – that was it. Create great content, drive traffic, sell things, subscriptions, or ads. AI doesn’t work that way. So just take a media company.
If AI scrapes your content and takes it – let’s say you’re the New York Times, or the Times of India, or whatever it is – you can now go to your AI and just say: summarize all the articles from the New York Times that would be of interest to me. And you’re going to read it there. Now, that’s great for you as a user – it’s a better user experience, so it’s going to win. But now the Times of India isn’t selling a subscription or an ad. Now the New York Times isn’t getting anyone to click on an ad. And that’s going to make it harder. And to make clear how much harder it’s gotten:
Ten years ago, for every two pages that Google scraped on the Internet, they sent you one unique visitor. And then you could monetize that visitor by selling them things, subscriptions, or ads. Today, what is it? 30 to 1 in Google’s case, 50 to 1 in Bing’s case – and that’s the good news. In OpenAI’s case, it’s 3,500 to 1. In Anthropic’s case, it’s half a million to 1. They take half a million pages for every one visitor they give back. So AI takes, but it doesn’t always give back. And if the currency of the Internet has been traffic, that traffic is gone. And it’s getting harder and harder to make money through the traditional business model of the Internet.
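Those crawl-to-referral ratios can be lined up side by side. A quick sketch using only the figures as quoted in the talk (the speaker's numbers, not independently verified):

```python
# Pages crawled per unique visitor referred back, as quoted in the discussion.
crawl_ratios = {
    "Google (10 years ago)": 2,
    "Google (today)": 30,
    "Bing (today)": 50,
    "OpenAI": 3_500,
    "Anthropic": 500_000,
}

# Compare each ratio against the old "two pages per visitor" bargain.
baseline = crawl_ratios["Google (10 years ago)"]
for source, ratio in crawl_ratios.items():
    decline = ratio / baseline  # how many times less traffic per crawled page
    print(f"{source}: {ratio:,} pages per visitor ({decline:,.0f}x the old rate)")
```

The spread is the argument: by these numbers, Anthropic's ratio is 250,000 times the old Google bargain, which is why traffic stops working as the Internet's currency.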
So one of two things happens. One is, well, the Internet just dies. But that’s not going to happen, because the AI companies need the content. They need the information. They need the things that are out there. So the alternative is that a new business model emerges. And that’s what’s going to happen over the course of the next five years: a new business model is going to emerge for the internet. Think how exciting that is. Think how rare new business models for something as grand and as large as the internet are – how often do they emerge? Almost never. And yet we’re all going to live through it, and that’s an incredible opportunity. I don’t know quite what it is, but it has to be some way that the people who are creating the content and creating the value get compensated for the things they are creating. The encouraging version of this is to think about the music industry. The entire music industry 22 years ago was valued at 8 billion US dollars, which is a lot of money, but it’s not that much money, because that was the Beatles and the Rolling Stones and everything, right? Why was that? Well, because Napster and Grokster and Kazaa and all these things had commoditized music, and musicians weren’t getting paid for it anymore. What changed? One day Steve Jobs walked on stage and said: it’s going to be 99 cents per song. iTunes launched almost 22 years ago to this day. And that wasn’t the business model that won, but at least it was a business model, and it started the conversation. And that evolved into the business model that won, which is something closer to Spotify – which is now, I don’t know what it is in India, but in the U.S. it’s like ten dollars a month. And what’s incredible is that Spotify last year sent over 12 billion dollars to musicians – more than the entire music industry was worth
22 years ago. And that’s just Spotify. There’s Apple Music and Tidal and TikTok and YouTube and tons of others. There’s more money going into music creation today than at any other time in human history, by an order of magnitude. Now, there are different winners and losers, and we can debate whether or not the right people are winning and the right people are losing, but there is more money going into music creation today than at any time in human history. And so as we figure out what the next business model of the internet is going to be, let’s try not to make it one that’s worse.
Let’s try and learn the lessons, because traffic was always a terrible proxy for quality. So let’s actually find something that is a proxy for quality, and let’s reward the people who are creating that. And the good news is, I think that’s what everyone in this room wants, and it’s what Sam wants. It’s what Dario wants. It’s even what Elon probably wants. And that’s the sort of thing that is actually going to drive a healthier internet ecosystem. I actually think that a lot of what’s wrong with the world today is that we have monetized traffic. And what that has meant is that we have monetized making people emotional or angry or whatever gets them to click on things, which is part of what’s driven society apart in a lot of ways.
I think if instead what we monetize and what we reward is the creation of human knowledge, that’s what the AI companies want, and that’s what we all want. And I think that’s what can actually bring our society back together again.
I want to turn it over to the audience for questions. I don’t want to be the only one asking questions. I’ll take – hands are going up. I’m going to take three questions at a time.
I like Indian audiences. They ask questions. Like you go to the UK and everyone just sits on their hands.
No, no. Indian audiences are very, very – now we’ll have to shut them up because we don’t have time. I’m going to take this one. I’m going to take that one. I’m going to take this one, right? So first up here, yeah? And I have a rule: a question, not a statement. So it has to end with your voice going up a little bit. Then I know it’s a question.
Sir, this is for you. You’ve touched upon a lot of interesting topics across domains. First of all, I remember you talking about the deterministic AI outcomes. Now AI having crossed the threshold –
Give me the question.
Okay. So what, in your view, would make AI trustworthy? Is it something to do with explainability or deterministic AI, and what would be the pathways?
Let me get a couple more. Otherwise, we won’t get through. The lady at the back there. So one is, I’ll keep track of it. How do we make AI more trustworthy?
My question is for Matthew. You mentioned pay-per-crawl, and we see robots.txt getting ignored. My question for you is, what makes you believe that AI companies would be equally invested in creator-based compensation when AI crawls the Internet without giving back attribution or compensation?
Trustworthy, and how do creators get paid? And attribution. I think she also wants to do attribution. This gentleman here.
Hi. My question is for Rajan. Rajan, you were explaining about the consumer and vertical parts of the application layer. Where are we in terms of investment from a venture capital point of view, and how can we match Y Combinator and a16z levels of investment?
Great. So AI is already more trustworthy than most humans. The simple fact is that AI is a better driver than 99.99 percent of the humans on the road today. Literally, since I started talking, within a kilometer of where we’re sitting, there was an accident between two cars. We just know that’s happening; we’re sitting in Delhi. Right? You will not be able to find any news about that anywhere, in any publication, anywhere on Earth. And yet, if one of those two cars had been a self-driving car, it would have been front-page news around the world. The expectations for AI are too high. We have built a system that acts like humans, and we need to think of it as acting like humans.
The smartest CEO that I know in terms of doing this is Robin Vince at BNY Mellon. In their case, they actually have AI employees. The AI employees get an employee number. They get an email address. They get a quarterly review. They can get fired if they don’t do a good job. They can get promoted if they do a good job. I asked if there are any AIs that are supervising humans. He said, not yet, but it’s inevitable. That’s the way to think of it, right, is that they act like humans because they are like humans. And, again, we are all fallible, and we’re all going to make mistakes, but already we see in certain disciplines like driving, AI is better than human beings are.
In terms of getting paid, I think the empirical evidence is that – forget robots.txt; that’s like a no-trespassing sign, anyone can ignore it. When you actually block the AI agents, which is what we have done, then they come to the table. And so with big publishers like Condé Nast, Dotdash Meredith, and others, where starting July 1st we said all of the AI companies are blocked, they actually came to the table and were able to get paid and get deals done. In the case of Reddit, Reddit was willing to block everyone, including even Google. And as a result – the public number is that they got seven times as much for licensing the Reddit corpus as the New York Times did, even though the two corpora are about the same size.
So again, I think that the first step in any market is having some level of scarcity. As long as you’re making it easy for anyone to take your data, then you’re not going to get paid for it.
Yeah, on the question on consumer AI: very few people know this, but India today has more consumer AI startups than the US. In fact, on Tuesday this week at the Pitchfest, just our firm, one firm, announced five new seed investments in AI companies. Four out of the five are consumer AI companies, right? And we think this is going to explode, because we have 900 million Indians on the internet, 850 million of them active every day, seven hours a day on the internet, and every space has potential for tremendous innovation, right? If you take education, online education hasn’t been accessible to a large part of the population because it’s just been too expensive, right?
But today with AI, you can have a 99-rupees-a-month plan with an AI tutor. In fact, the fastest growing AI education company in the world is in India, and nobody’s really heard of it because, fortunately, these guys are all just in stealth and just building, which is very good. So I think it’s a great time to be building in consumer AI. Actually, it’s a great time to be building AI companies generally, but especially in consumer AI we’re going to see some breakouts. Look, the world’s leading consumer AI companies in education, healthcare, entertainment, et cetera, will be either here or in China. They won’t be in the Western world, because the need is here.
The one beautiful thing about this summit is there have been so many wonderful, rich, diverse conversations. This is one of them. Matthew, Rajan, thank you so much. Thank you all for being such a good audience. Thank you.
Matthew Prince
Speech speed
186 words per minute
Speech length
4453 words
Speech time
1431 seconds
AI hardware cost barrier
Explanation
Prince explains that the current AI ecosystem is dominated by NVIDIA GPUs, which consume a lot of power and are very expensive, making AI development costly. He questions whether this reliance on costly chips is a permanent state and notes that costs could drop dramatically within five years.
Evidence
“largely produced today by one manufacturer, NVIDIA, that use a ton of power and are very, very expensive” [2]. “And so if those are the things that today make AI extremely expensive, the question is, are those things permanent states or are they going to change?” [5]. “In five years, you’ll be able to build a frontier-like model within a specialty for $10 million or less” [16].
Major discussion point
Democratizing AI Infrastructure & Cost
Topics
Artificial intelligence | The enabling environment for digital development
Open model strategy wins
Explanation
Prince argues that a more open approach to AI development will prevail, citing the Chinese model as a smart strategy that leverages financial scale, and notes that OpenAI recognizes the advantage of openness.
Evidence
“I tend to think that more open is going to win, and I tend to think that the Chinese approach right now is the smartest approach to take on what looks like this enormous kind of just money machine which the U.S. is creating” [66]. “OpenAI knows how much of an advantage that is” [67].
Major discussion point
Open‑Source / Open‑Weights Models and Economics
Topics
Artificial intelligence | Data governance | The enabling environment for digital development
AI‑enabled cyber threats and defense
Explanation
Prince acknowledges that AI could accelerate sophisticated cyber attacks but asserts that defenders can also use AI and have more data, leading to improved security over time.
Evidence
“I don’t want to comment either way, that AI is going to accelerate cyber attacks because agentic swarms, et cetera, can do things much more dangerous” [84]. “And I think that we will actually use AI in order to stay ahead of these threats” [85].
Major discussion point
AI Safety, Security, Trustworthiness & Regulation
Topics
Building confidence and security in the use of ICTs | Artificial intelligence
Regulation should target criminal behavior
Explanation
Prince suggests that AI regulation should focus on the criminal code rather than trying to control the nondeterministic output of AI systems, warning against over‑engineering regulatory approaches.
Evidence
“Look to the criminal code, not the engineering code, in order to figure out what that regulation should look like” [92]. “I would be especially careful about trying to regulate the output of what is fundamentally at least a pseudo non-deterministic system” [94].
Major discussion point
AI Safety, Security, Trustworthiness & Regulation
Topics
Artificial intelligence | The enabling environment for digital development
Web‑crawling monopoly risk
Explanation
Prince warns that dominant search engine companies could leverage their indexing monopoly to gain an unfair advantage in the AI market, calling for a level playing field.
Evidence
“And so if we want to have a level playing field, there’s a real risk that Google is going to leverage the monopoly position they had indexing the Internet yesterday in order to win in the AI market tomorrow” [90].
Major discussion point
Data Ownership, Web Crawling, and Creator Compensation
Topics
Data governance | Artificial intelligence | The digital economy
Rajan Anandan
Speech speed
195 words per minute
Speech length
2620 words
Speech time
802 seconds
Low‑cost, modest‑parameter models for India
Explanation
Anandan emphasizes the need for highly performant, ultra‑low‑cost models with a few hundred million to a few hundred billion parameters, noting that India has been developing such models for several years.
Evidence
“What you need are highly performant, extremely low cost models that are a billion parameters to maybe 100 or 200 billion parameters” [17]. “We’re not going to try to build trillion parameter models and we’ll do it at super, super low cost” [18]. “In India, they’ve been doing that for four or five years” [20].
Major discussion point
Democratizing AI Infrastructure & Cost
Topics
Artificial intelligence | The enabling environment for digital development
Focus on language‑specific models over AGI
Explanation
Anandan states that India’s priority is to build affordable models for local languages and agricultural use cases rather than pursuing AGI, highlighting existing state‑of‑the‑art Indic language models.
Evidence
“I think, firstly, it’s important that India is not trying to get to AGI” [30]. “We need to uplift 100 million farmers, and for that, we need to build basically models that work on feature phones, right, in local languages” [31]. “With 1.4 billion humans and a million Indians turning 18 every month, AGI is not the thing that we need” [33]. “And actually, for the amount that you mentioned, we have launched 30 and 100 billion parameter models that are SOTA in Indic languages” [34].
Major discussion point
India’s AI Strategy & Sovereign Stack
Topics
Artificial intelligence | Data governance | Closing all digital divides
Building a sovereign AI chip stack
Explanation
Anandan argues that India must develop its own semiconductor design capabilities and foster chip startups to create a sovereign AI stack, noting that 20 % of global designers are already in India.
Evidence
“So we’ll see even at the chip player, because what is very clear to us and I think to many in India is we have to have the sovereign stack” [40]. “Now, coming to the chip player, that’s harder for India, but I don’t know whether you know this, Matthew, but 20 % of the world’s semiconductor designers are in India” [43]. “Four years ago, we had no semiconductor startups” [44].
Major discussion point
India’s AI Strategy & Sovereign Stack
Topics
Artificial intelligence | The enabling environment for digital development
Open‑weights funding challenge
Explanation
Anandan points out that keeping models open‑weight is financially unsustainable unless new business models emerge, because building large models now requires hundreds of millions to billions of dollars.
Evidence
“You can’t keep those models open, right?” [64]. “But it is, you know, I think the only way you do this is there has to be a different path, okay, because if you’re going to have to invest hundreds of millions of dollars to build a new model or billions of dollars to build a new model or tens of billions of dollars, that doesn’t, that’s not really open” [65].
Major discussion point
Open‑Source / Open‑Weights Models and Economics
Topics
Artificial intelligence | Data governance | The enabling environment for digital development
Smart data regulation needed
Explanation
Anandan calls for thoughtful regulation of data and AI to protect national interests while fostering innovation, acknowledging uncertainty about the current regulatory framework.
Evidence
“And second, we need to have – I do think we need to have some smart regulation” [101]. “I don’t know where the regulatory framework is on data” [102].
Major discussion point
AI Safety, Security, Trustworthiness & Regulation
Topics
Artificial intelligence | Data governance | The enabling environment for digital development
Domestic data‑centric startups retain value
Explanation
Anandan notes that most Indian data currently flows to a few global AI firms and argues that building Indian data‑focused startups is essential to keep that value within the country.
Evidence
“all the data that Indians are generating is actually going to those sort of few handful of companies” [36]. “Most of the data that we need to get to AGI we don’t have yet” [37].
Major discussion point
Data Ownership, Web Crawling, and Creator Compensation
Topics
Data governance | Artificial intelligence | The digital economy
Booming consumer AI startup ecosystem
Explanation
Anandan highlights that India now has more consumer AI startups than the US, backed by abundant venture capital and a massive internet‑active population, positioning Indian firms to win in consumer AI.
Evidence
“India today has more consumer AI startups than the US” [45]. “Today, we have a lot of venture capital” [119]. “we have 900 million Indians on the internet, 850 million of them are active every day” [123].
Major discussion point
Investment Landscape & Consumer AI Applications
Topics
The digital economy | Artificial intelligence | The enabling environment for digital development
Rahul Matthan
Speech speed
158 words per minute
Speech length
1775 words
Speech time
673 seconds
Open‑weights importance and security challenges
Explanation
Matthan stresses that open‑weights are vital for the ecosystem but raise fundamental security concerns, especially as models become more capable and can be maliciously fine‑tuned.
Evidence
“But I’m also hearing from the other side that open weights has this fundamental security challenge” [53]. “as these models become better, it’s easier to undo the fine-tuning guardrails that have been established so these models don’t do bad things” [59].
Major discussion point
Open‑Source / Open‑Weights Models and Economics
Topics
Artificial intelligence | Data governance | Building confidence and security in the use of ICTs
Reliance on open‑source in Indian AI deployments
Explanation
Matthan observes that many Indian AI deployers depend heavily on open‑source software, underscoring its central role in the local AI stack.
Evidence
“all the AI deployers that we have in India, a large number of them are relying on open source” [48].
Major discussion point
Open‑Source / Open‑Weights Models and Economics
Topics
Artificial intelligence | The enabling environment for digital development
Need to sustain open‑source pipelines
Explanation
Matthan argues that maintaining open‑source pipelines is crucial for continued AI growth in India, noting that not everyone has the resources to train models from scratch.
Evidence
“So I know that the ecosystem needs open-weights because not everyone has the time and resources to do the training” [51]. “but increasingly, we’ve seen a sort of a drop-off other than the Chinese models, Kimi and Qwen, which are still open-weight” [62].
Major discussion point
Investment Landscape & Consumer AI Applications
Topics
Artificial intelligence | The enabling environment for digital development
Announcer
Speech speed
147 words per minute
Speech length
266 words
Speech time
108 seconds
Leadership legacy of Matthew and Rajan
Explanation
The announcer highlights the extensive experience of both Matthew Prince and Rajan Anandan, portraying them as pivotal figures who have brought transformative technology to millions.
Evidence
“Very few individuals have done more to bring revolutionary and transformative technology into the hands of millions than Matthew and Rajan” [125]. “With decades of experience across entrepreneurship, investing, and global technology leadership, Rajan has played a pivotal role” [124].
Major discussion point
Opening Framing / Significance of Leadership
Topics
The enabling environment for digital development | Artificial intelligence
Audience
Speech speed
196 words per minute
Speech length
177 words
Speech time
54 seconds
Question on AI trustworthiness
Explanation
An audience member asks what would make AI trustworthy, prompting discussion on safety, regulation, and confidence in AI systems.
Evidence
“Now AI having crossed the threshold –” [105].
Major discussion point
AI Safety, Security, Trustworthiness & Regulation
Topics
Artificial intelligence | Building confidence and security in the use of ICTs
Agreements
Agreement points
Resource constraints drive innovation and efficiency in AI development
Speakers
– Matthew Prince
– Rajan Anandan
Arguments
Constraints force innovation – companies with limited resources often develop more efficient solutions than well-funded competitors
India can compete by building specialized, cost-effective models for local needs rather than pursuing AGI
Summary
Both speakers agree that having limited resources can actually be advantageous for AI innovation. Prince cites DeepSeek’s breakthrough efficiency innovations due to chip constraints, while Anandan describes how Indian companies like Sarvam are achieving state-of-the-art results with smaller teams and budgets by focusing on specialized, cost-effective models rather than trying to match the massive investments of US companies.
Topics
Artificial intelligence | The enabling environment for digital development
AI costs and barriers will decrease over time making the technology more accessible
Speakers
– Matthew Prince
– Rajan Anandan
Arguments
AI costs will decrease dramatically due to increased competition in chips and talent availability – current barriers are temporary
India has entered the AI race with competitive models and is building a sovereign technology stack across chips, compute, and applications
Summary
Both speakers are optimistic about AI becoming more accessible and affordable. Prince predicts frontier-like models will cost $10 million or less within five years due to increased competition and talent availability. Anandan provides evidence that this is already happening, with Indian companies building competitive models and the country developing its own semiconductor capabilities.
Topics
Artificial intelligence | The enabling environment for digital development | Capacity development
Open source/open weights are important for the ecosystem but face economic and security challenges
Speakers
– Matthew Prince
– Rajan Anandan
– Rahul Matthan
Arguments
More open approaches will ultimately win, and regulation should focus on outputs rather than restricting access to models
Open source is critical for the ecosystem but economically challenging when companies invest hundreds of billions in model development
Open-weight models are essential for AI ecosystem development but face security challenges as models become more performant
Summary
All three speakers acknowledge the importance of open approaches for AI development while recognizing the practical challenges. Prince advocates for openness and warns against regulatory capture, Anandan explains the economic reality that massive investments make free distribution difficult, and Matthan highlights the security concerns with malicious fine-tuning as models become more capable.
Topics
Artificial intelligence | Human rights and the ethical dimensions of the information society | The enabling environment for digital development
AI will transform internet business models and require new compensation mechanisms for content creators
Speakers
– Matthew Prince
– Rahul Matthan
Arguments
AI is disrupting traditional internet monetization by taking content without driving traffic back to creators
Data sovereignty and fair compensation for data providers is critical, with concerning precedents from other regions
Summary
Both speakers recognize that AI is fundamentally disrupting how content creators are compensated online. Prince provides detailed analysis of how AI systems scrape content without sending traffic back, breaking the traditional create-content-drive-traffic-monetize model. Matthan raises concerns about data extraction patterns and the need for fair compensation, referencing problematic deals in other regions.
Topics
Data governance | Artificial intelligence | The digital economy
Similar viewpoints
Both speakers are skeptical of AI doom narratives and believe in distributed, competitive AI development rather than concentration in a few large companies. Prince argues that doom scenarios are strategically promoted for regulatory capture, while Anandan demonstrates through India’s success that alternative approaches to AI development are viable and competitive.
Speakers
– Matthew Prince
– Rajan Anandan
Arguments
AI companies promote doom scenarios to justify regulatory capture and maintain competitive advantages
India has more consumer AI startups than the US and is well-positioned for AI applications serving its large user base
Topics
Artificial intelligence | Human rights and the ethical dimensions of the information society
Both speakers believe that AI safety concerns are often exaggerated and that AI systems are already performing better than humans in many applications. They draw parallels to historical technology adoption challenges, suggesting that society will adapt and develop appropriate safety measures over time.
Speakers
– Matthew Prince
– Rahul Matthan
Arguments
AI is already more trustworthy than humans in many applications but faces unrealistic expectations
Technology safety concerns may be overblown, drawing parallels to electricity adoption challenges
Topics
Artificial intelligence | Human rights and the ethical dimensions of the information society
Both speakers see data as a valuable asset that should be properly monetized. Prince provides examples of successful negotiations when content is protected, while Anandan highlights India’s data advantages and the need for more companies to build businesses around this asset.
Speakers
– Matthew Prince
– Rajan Anandan
Arguments
Blocking AI crawlers creates scarcity that forces AI companies to negotiate fair compensation with content creators
India has competitive advantages in data collection but needs more companies building businesses around this data advantage
Topics
Data governance | Artificial intelligence | The digital economy
Unexpected consensus
AI regulation should be practical rather than restrictive
Speakers
– Matthew Prince
– Rajan Anandan
– Rahul Matthan
Arguments
AI should be regulated like humans using criminal codes rather than engineering standards for deterministic systems
India has entered the AI race with competitive models and is building a sovereign technology stack across chips, compute, and applications
Technology safety concerns may be overblown, drawing parallels to electricity adoption challenges
Explanation
Despite coming from different perspectives (infrastructure provider, investor, and legal expert), all three speakers converge on the view that AI regulation should be practical and not overly restrictive. This consensus is unexpected given the current global trend toward more stringent AI regulation and the different stakeholder interests they represent.
Topics
Artificial intelligence | Human rights and the ethical dimensions of the information society | The enabling environment for digital development
Democratization of AI is both necessary and achievable
Speakers
– Matthew Prince
– Rajan Anandan
Arguments
Constraints force innovation – companies with limited resources often develop more efficient solutions than well-funded competitors
India can compete by building specialized, cost-effective models for local needs rather than pursuing AGI
Explanation
Both speakers, despite representing different sectors (US tech infrastructure vs Indian venture capital), strongly agree that AI democratization is not only desirable but actively happening. This consensus is unexpected given the narrative of AI concentration in a few large US companies, and their agreement suggests a more optimistic future for global AI competition.
Topics
Artificial intelligence | The enabling environment for digital development | Closing all digital divides
Overall assessment
Summary
The speakers demonstrate remarkable consensus on key issues around AI democratization, the temporary nature of current barriers, the importance of open approaches, and the need for practical rather than restrictive regulation. They agree that resource constraints can drive innovation, that AI costs will decrease, and that new business models must emerge to fairly compensate content creators.
Consensus level
High level of consensus with significant implications for AI policy and development. The agreement between speakers from different backgrounds (US infrastructure provider, Indian investor, and legal expert) suggests these views may represent broader industry sentiment. Their shared optimism about AI democratization and skepticism of doom scenarios could influence policy discussions, while their agreement on the need for new internet business models highlights the urgency of addressing creator compensation in the AI era.
Differences
Different viewpoints
Timeline for AI cost reduction and democratization
Speakers
– Matthew Prince
– Rajan Anandan
Arguments
AI costs will decrease dramatically due to increased competition in chips and talent availability – current barriers are temporary
India can compete by building specialized, cost-effective models for local needs rather than pursuing AGI
Summary
Prince predicts frontier-like models will cost $10 million or less within five years, while Anandan argues India has already achieved competitive results this week and suggests the timeline could be more aggressive
Topics
Artificial intelligence | The enabling environment for digital development
Approach to AI safety and regulation concerns
Speakers
– Matthew Prince
– Rahul Matthan
Arguments
AI companies promote doom scenarios to justify regulatory capture and maintain competitive advantages
Technology safety concerns may be overblown, drawing parallels to electricity adoption challenges
Summary
Both speakers are skeptical of AI safety concerns but for different reasons – Prince sees it as strategic business manipulation while Matthan draws historical parallels to technology adoption
Topics
Artificial intelligence | Human rights and the ethical dimensions of the information society
Effectiveness of blocking AI crawlers for creator compensation
Speakers
– Matthew Prince
– Audience
Arguments
Blocking AI crawlers creates scarcity that forces AI companies to negotiate fair compensation with content creators
AI companies may not be genuinely committed to creator compensation despite ignoring existing protection mechanisms
Summary
Prince provides evidence that blocking works, citing successful negotiations, while an audience member questions AI companies’ genuine commitment given that they already ignore robots.txt.
Topics
Data governance | Artificial intelligence | The digital economy
Unexpected differences
AI trustworthiness standards and expectations
Speakers
– Matthew Prince
– Audience
Arguments
AI is already more trustworthy than humans in many applications but faces unrealistic expectations
AI trustworthiness requires focus on explainability and deterministic outcomes
Explanation
Unexpected because Prince argues AI is already trustworthy enough and faces unfair standards, while audience member seeks traditional technical solutions like explainability – represents fundamental disagreement about whether the problem is technical or perceptual
Topics
Artificial intelligence | Human rights and the ethical dimensions of the information society
Investment capacity requirements for AI competitiveness
Speakers
– Rajan Anandan
– Audience
Arguments
Consumer AI will see major breakthroughs in education, healthcare, and entertainment, particularly in India and China
India needs to match international venture capital investment levels to compete in AI development
Explanation
Unexpected because Anandan is optimistic about India’s AI prospects with current resources while audience member suggests India needs to match Y Combinator and a16z investment levels, representing disagreement about whether current funding is sufficient
Topics
Financial mechanisms | Artificial intelligence | The enabling environment for digital development
Overall assessment
Summary
The discussion revealed surprisingly few fundamental disagreements among speakers, with most differences centered on timelines, approaches, and emphasis rather than core principles. Main disagreements involved the pace of AI democratization, the effectiveness of current strategies for creator compensation, and whether existing AI systems meet trustworthiness standards.
Disagreement level
Low to moderate disagreement level with high convergence on goals but different perspectives on implementation strategies and timelines. This suggests a productive foundation for collaboration while highlighting areas needing further discussion around practical implementation details.
Partial agreements
Both agree open source is important for the ecosystem, but disagree on feasibility – Prince advocates for more openness while Anandan acknowledges economic realities make it difficult for companies investing massive amounts
Speakers
– Matthew Prince
– Rajan Anandan
Arguments
More open approaches will ultimately win, and regulation should focus on outputs rather than restricting access to models
Open source is critical for the ecosystem but economically challenging when companies invest hundreds of billions in model development
Topics
Artificial intelligence | The enabling environment for digital development | Financial mechanisms
Both agree that resource constraints can drive innovation and efficiency, but Prince focuses on this as a general principle while Anandan specifically applies it to India’s strategy of building specialized rather than general AI models
Speakers
– Matthew Prince
– Rajan Anandan
Arguments
Constraints force innovation – companies with limited resources often develop more efficient solutions than well-funded competitors
India can compete by building specialized, cost-effective models for local needs rather than pursuing AGI
Topics
Artificial intelligence | The enabling environment for digital development
Both recognize India’s data advantages and the need for better data governance, but Anandan focuses on business opportunities while Matthan emphasizes sovereignty and protection concerns
Speakers
– Rajan Anandan
– Rahul Matthan
Arguments
India has competitive advantages in data collection but needs more companies building businesses around this data advantage
Data sovereignty and fair compensation for data providers is critical, with concerning precedents from other regions
Topics
Data governance | Artificial intelligence | Social and economic development
Similar viewpoints
Both speakers are skeptical of AI doom narratives and believe in distributed, competitive AI development rather than concentration in a few large companies. Prince argues that doom scenarios are strategically promoted for regulatory capture, while Anandan demonstrates through India’s success that alternative approaches to AI development are viable and competitive.
Speakers
– Matthew Prince
– Rajan Anandan
Arguments
AI companies promote doom scenarios to justify regulatory capture and maintain competitive advantages
India has more consumer AI startups than the US and is well-positioned for AI applications serving its large user base
Topics
Artificial intelligence | Human rights and the ethical dimensions of the information society
Both speakers believe that AI safety concerns are often exaggerated and that AI systems are already performing better than humans in many applications. They draw parallels to historical technology adoption challenges, suggesting that society will adapt and develop appropriate safety measures over time.
Speakers
– Matthew Prince
– Rahul Matthan
Arguments
AI is already more trustworthy than humans in many applications but faces unrealistic expectations
Technology safety concerns may be overblown, drawing parallels to electricity adoption challenges
Topics
Artificial intelligence | Human rights and the ethical dimensions of the information society
Both speakers see data as a valuable asset that should be properly monetized. Prince provides examples of successful negotiations when content is protected, while Anandan highlights India’s data advantages and the need for more companies to build businesses around this asset.
Speakers
– Matthew Prince
– Rajan Anandan
Arguments
Blocking AI crawlers creates scarcity that forces AI companies to negotiate fair compensation with content creators
India has competitive advantages in data collection but needs more companies building businesses around this data advantage
Topics
Data governance | Artificial intelligence | The digital economy
Takeaways
Key takeaways
AI democratization is achievable as costs will decrease dramatically due to competition in chips and talent – current barriers such as expensive NVIDIA chips and scarce AI expertise are temporary and should resolve within five years
India can compete in AI by focusing on specialized, cost-effective models for local needs rather than pursuing AGI – constraints often drive more efficient innovation than unlimited resources
Open source AI models are critical for ecosystem development but face economic challenges as companies invest hundreds of billions in development – this creates tension between accessibility and business viability
AI will initially enable more sophisticated cyberattacks but ultimately make systems more secure through better defense capabilities – the key is evolving authentication beyond appearance-based verification
The traditional internet business model (create content, drive traffic, sell ads/subscriptions) is being disrupted by AI, necessitating a new compensation model for content creators similar to how the music industry evolved
AI should be regulated the way humans are, through criminal codes rather than engineering standards, and treated like employees with accountability measures rather than as deterministic machines
India has significant advantages in consumer AI applications due to its large user base (900 million internet users) and cost constraints that drive innovation
Resolutions and action items
Content creators should block AI crawlers to create scarcity and force AI companies to negotiate fair compensation agreements
Families should establish password systems to protect against AI-enabled social engineering attacks
India should continue building its sovereign technology stack across chips, compute, and applications rather than relying solely on foreign providers
Businesses must move away from appearance-based authentication systems to more secure verification methods
The industry needs to develop new internet business models that reward quality content creation rather than just traffic generation
Unresolved issues
How to maintain open source AI development when companies need to recoup massive investments in model development
What specific regulatory framework should govern AI development and deployment, particularly regarding data usage and model access
How to balance AI safety concerns with democratization goals – the tension between restricting access for safety versus enabling innovation
What the new internet business model will actually look like beyond general principles of compensating content creators
How to ensure fair attribution and compensation for content used in AI training when current systems largely ignore creator rights
Whether India can successfully build competitive semiconductor capabilities to achieve true technological sovereignty
How to scale AI applications to serve India’s full 1.4 billion population at affordable costs (reducing voice AI from the current 3 rupees per minute to 5-10 paisa per minute)
Suggested compromises
Focus on specialized AI models for specific use cases rather than competing directly with frontier AGI models – allows for innovation within resource constraints
Implement graduated access to AI models based on use case and safety considerations rather than blanket restrictions or complete openness
Develop hybrid approaches where basic models remain open source while advanced capabilities require licensing agreements
Create international cooperation frameworks for AI development while maintaining sovereign capabilities in critical areas
Establish industry standards for content creator compensation that balance AI company needs with fair payment for training data
Regulate AI outputs and applications rather than restricting access to underlying models and research
Thought provoking comments
Already, if you look in enrollment in computer science programs across the world, it is up dramatically… And then secondly, the enrollment in specifically AI theory courses is off the charts. Every university that used to sort of shutter their course is now standing it up and building it like crazy… In five years, you’ll be able to build a frontier-like model within a specialty for $10 million or less.
Speaker
Matthew Prince
Reason
This comment reframes the AI democratization debate by providing concrete evidence that the barriers to AI development are already eroding. Rather than accepting the narrative that AI will remain concentrated among a few companies, Prince presents data showing structural changes in education and predicts dramatic cost reductions. His specific $10 million prediction creates a measurable benchmark for the discussion.
Impact
This comment fundamentally shifted the conversation from whether AI can be democratized to how and when it will happen. It prompted Rajan to provide concrete examples of Indian companies already achieving competitive results with constrained resources, moving the discussion from theoretical to practical examples.
AGI is not the thing that we need. Our focus really is to uplift 1.4 billion Indians… What you need are highly performant, extremely low cost models that are a billion parameters to maybe 100 or 200 billion parameters.
Speaker
Rajan Anandan
Reason
This comment challenges the fundamental assumption that success in AI means competing directly with frontier models. Anandan redefines the problem space entirely, arguing that different regions need different AI solutions based on their specific challenges and constraints. This represents a strategic pivot from imitation to innovation.
Impact
This reframing allowed the discussion to explore alternative pathways to AI success, leading to detailed examples of Indian companies achieving state-of-the-art results in specific domains. It shifted the conversation from a zero-sum competition narrative to a more nuanced view of specialized AI applications.
I wish DeepSeek had been an Indian company, not a Chinese company… I actually think those places with constraints… it’s blinding them to what will be the real innovations that cause AI to become more efficient… there is no way that the long-term solution to this is you have to turn up a mothballed nuclear power plant.
Speaker
Matthew Prince
Reason
This comment inverts the conventional wisdom about resource constraints being disadvantages. Prince argues that unlimited resources can actually hinder innovation by preventing the development of more efficient solutions. This paradoxical insight challenges the assumption that more funding automatically leads to better outcomes.
Impact
This observation validated Rajan’s examples of Indian companies achieving impressive results with limited resources and introduced the concept that constraints can drive superior innovation. It led to a deeper discussion about efficiency versus brute force approaches in AI development.
Let’s imagine that over time, you are one of these major model makers… the only way that we win is if we restrict as many people from getting into the game as we possibly can. So how do you do that? I mean, one of the best ways to do that is just to scare everyone… I’ve never seen another industry that has done that.
Speaker
Matthew Prince
Reason
This comment provides a cynical but compelling economic explanation for AI safety rhetoric, suggesting that doomsday scenarios may be strategically motivated rather than purely safety-driven. Prince’s comparison to other industries (noting that car companies don’t emphasize their products’ potential for mass casualties) is particularly striking.
Impact
This comment dramatically shifted the tone of the open-source discussion, reframing safety concerns as potential regulatory capture attempts. It led to a more skeptical analysis of AI regulation and reinforced arguments for keeping AI development open and competitive.
The fundamental internet business model was create content, drive traffic, and then sell things, subscriptions, or ads… In OpenAI’s case, it’s 3,500 to 1. In Anthropic’s case, it’s half a million to 1. They take half a million pages for every one page they give back.
Speaker
Matthew Prince
Reason
This comment identifies a fundamental economic disruption that extends far beyond AI companies to the entire internet ecosystem. The specific ratios (3,500:1, 500,000:1) make the scale of value extraction viscerally clear and highlight an unsustainable dynamic that threatens content creation.
Impact
This observation expanded the discussion beyond technical AI development to broader economic implications. It introduced the music industry analogy and led to a forward-looking discussion about new business models, suggesting that current disruption could ultimately lead to better compensation for creators.
AI is already more trustworthy than most humans. The simple fact is that AI is a better driver than 99.99 percent of humans… The expectations for AI are too high. We have built a system that acts like humans and we need to think of it as acting like humans.
Speaker
Matthew Prince
Reason
This comment challenges the framing of AI trustworthiness by comparing AI performance to human performance rather than to perfect performance. Prince’s observation about media coverage bias (human accidents ignored, AI accidents front-page news) reveals how perception shapes reality in AI adoption.
Impact
This reframing shifted the trustworthiness discussion from abstract concerns about AI reliability to practical comparisons with human performance. It introduced the innovative example of BNY Mellon treating AIs as employees, providing a concrete model for AI integration in organizations.
Overall assessment
These key comments fundamentally reshaped the discussion from a conventional narrative about AI concentration and risks to a more nuanced exploration of alternative pathways and hidden dynamics. Prince’s economic analysis of AI development costs and safety rhetoric, combined with Anandan’s strategic reframing of India’s AI goals, created a counter-narrative to Silicon Valley dominance. The conversation evolved from defensive positioning (‘how can we compete?’) to offensive strategy (‘how can we innovate differently?’). The discussion’s progression through technical capabilities, economic models, and societal implications was driven by these provocative reframings that challenged basic assumptions about AI development, regulation, and deployment. The overall effect was to transform what could have been a standard panel about AI challenges into a strategic discussion about alternative approaches to AI development and deployment.
Follow-up questions
How can India develop a sovereign semiconductor stack given the constraints and competition from established players?
Speaker
Rajan Anandan
Explanation
This is critical for India’s technological independence and ability to compete in AI infrastructure without relying on foreign suppliers
What specific regulatory framework should India adopt for AI data governance and protection?
Speaker
Rajan Anandan
Explanation
India needs smart regulation around data usage for AI development to protect its competitive advantages while enabling innovation
How can the cost of AI inference be reduced to 5-10 paisa per minute to make voice AI accessible to 1.4 billion Indians?
Speaker
Rajan Anandan
Explanation
Current costs of 1 rupee per minute are still too expensive for mass adoption across India’s population
What new business models will emerge to replace the traditional internet traffic-based monetization as AI disrupts content consumption?
Speaker
Matthew Prince
Explanation
The fundamental shift from traffic-driven revenue to AI-mediated content consumption requires entirely new economic models for content creators
How can quality-based compensation systems be developed to reward content creators in an AI-driven internet ecosystem?
Speaker
Matthew Prince
Explanation
Moving beyond traffic as a proxy for value to actual quality metrics could create a healthier internet ecosystem and better societal outcomes
What mechanisms can ensure AI companies provide fair attribution and compensation to content creators whose work they use for training?
Speaker
Audience member
Explanation
Current AI systems often ignore robots.txt and don’t provide attribution, raising questions about fair compensation for creators
How can India scale venture capital investments to match the level of Y Combinator and Andreessen Horowitz for AI startups?
Speaker
Audience member
Explanation
India needs stronger funding ecosystems to support its growing AI startup community and compete globally
What technical approaches beyond current transformer architectures could lead to more efficient AI systems?
Speaker
Rajan Anandan
Explanation
Current LLMs are described as inefficient compute machines, suggesting need for research into alternative architectures
How can AI systems be made more explainable and deterministic to increase trustworthiness?
Speaker
Audience member
Explanation
As AI systems become more powerful and autonomous, understanding their decision-making processes becomes crucial for trust and adoption
What data collection strategies should India pursue for robotics and physical intelligence applications?
Speaker
Rajan Anandan
Explanation
India has competitive advantages in data collection that could be leveraged for next-generation AI applications beyond language models
Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.