Open Internet, Inclusive AI: Unlocking Innovation for All
20 Feb 2026 16:00h - 17:00h
Summary
The panel, moderated by Rahul Matthan, opened by stressing that AI should not be confined to a handful of companies in a single postal code and that democratizing the technology requires new infrastructure [15-21]. Matthew Prince explained that today’s AI is expensive because it relies on massive numbers of NVIDIA GPUs originally designed for gaming and cryptocurrency mining, which consume enormous amounts of power and are costly [22-30]. He added that only a tiny global talent pool knows how to build and run large models, further driving up salaries and limiting broader participation [31-33]. Prince argued that rising enrollment in computer-science and AI courses, expanding chip production, and competition among startups will drive down hardware costs, making AI models more of a commodity [44-53][54-57]. He predicted that within five years a frontier-level specialized model could be built for under $10 million, a dramatic drop from today’s multibillion-dollar investments [60-62].
Rajan Anandan countered that India does not need to chase AGI; instead it is deploying billion-parameter models optimized for Indic languages, such as Sarvam’s, which already outperform global voice-AI at a fraction of the cost [75-83][84-92]. He emphasized building a sovereign AI stack, citing recent investments in Indian GPU and memory startups and alliances with firms like Paxilica to reduce dependence on foreign chip suppliers [108-119][120-128]. When asked about open-weight models, Rahul noted security concerns, while Prince defended openness, arguing that AI-doom narratives may be a strategy for incumbent firms to capture regulation and that regulation should target behavior rather than model access [165-172][176-204]. Rajan agreed that open source is essential but argued that the economics of training large models make fully open releases difficult, and that lowering inference costs is the key to affordable AI for India’s billion users [217-227][240-248].
Prince also warned that AI could amplify cyber-attacks, but highlighted that his company uses machine learning to detect threats faster than humans and expects overall online security to improve over the next decade [264-282]. He warned that dominant search engines’ control over web indexing gives them a data advantage, urging regulators to ensure equal data access and anticipating a new internet business model that rewards content creators rather than traffic [345-368][395-404]. The discussion concluded that achieving AI democratization will depend on cheaper hardware, open-source ecosystems, sovereign data and chip strategies, and nuanced regulation that balances innovation with security [58-62][217-227][176-204][345-368].
Keypoints
– Why AI is currently hard and expensive, and how those barriers might fall.
Matthew explains that AI’s cost is driven by a reliance on a single chip supplier (NVIDIA) and massive power needs, plus a tiny pool of specialized talent [22-31]. He notes that enrollment in CS and AI courses is soaring, which should broaden expertise [44-46]. He also predicts that chip competition and economies of scale will drive down per-unit costs, making frontier-level models affordable (≈ $10 M) within five years [58-62].
– India’s distinct AI roadmap: low-cost, language-focused models and a sovereign tech stack.
Rajan stresses that India does not need trillion-parameter AGI; instead it needs “highly performant, extremely low-cost” models of 1 B to 200 B parameters for 1.4 billion users [75-81]. He cites home-grown models (Sarvam) that outperform global voice-AI at a fraction of the cost [82-84] and outlines rapid growth in Indian chip design and GPU startups, as well as the push for a sovereign hardware and compute stack [108-119]. He argues that India will win on the application layer, leveraging massive smartphone penetration and local-language demand [121-132].
– The open-source / open-weights dilemma.
Rahul raises the tension between the need for open models to enable rapid innovation and the security risks of releasing highly capable weights [165-174]. Matthew argues that restricting access is a business strategy masquerading as safety concerns and that a more open ecosystem will ultimately prevail [176-204]. Rajan adds that the economics of trillion-dollar model training make pure openness infeasible, yet open models remain “absolutely critical” for the ecosystem [217-252].
– Regulation, safety, and the narrative of AI risk.
Matthew suggests that “AI doomers” may be motivated by a desire to capture regulatory advantage, warning that over-regulation could stifle competition [180-197]. Rahul points out the parallel with nuclear-style regulation proposals and the broader societal fear of AI-driven cyber threats [208-212]. Both agree that behaviour-based legal frameworks, closer to a criminal code than to engineering-centric rules, may be more appropriate [199-202].
– Data access, web crawling, and the future internet business model.
Matthew highlights the asymmetry in web indexing (Google sees far more pages than competitors) and warns that AI could upend the traditional traffic-based monetisation model of the internet [345-403]. He calls for new compensation mechanisms for content creators and suggests that the next five years will see a “new business model” that rewards knowledge creation rather than mere traffic [395-404].
Overall purpose / goal
The panel’s aim was to explore how to democratise artificial intelligence, making the technology, infrastructure, models, and data accessible beyond the current concentration in a few “postal-code” companies, while addressing the technical, economic, regulatory, and societal challenges that this transition entails, with a particular focus on India’s role and opportunities.
Overall tone
The discussion begins with a technical, analytical tone, outlining AI’s cost drivers. It then shifts to a national-strategic, optimistic tone as Rajan outlines India’s emerging capabilities. Mid-conversation the tone becomes critical and cautionary, debating open-source risks and regulatory capture. Toward the end it moves to a forward-looking, hopeful tone, envisioning new internet business models and collaborative solutions. Throughout, the speakers remain collegial but the emphasis oscillates between optimism about rapid progress and concern over concentration of power and safety.
Speakers
– Announcer
– Role/Title: Event announcer / moderator
– Areas of expertise: (not specified)
– Rahul Matthan
– Role/Title: Moderator; Partner at TriLegal (board member, Bangalore office), leads technology, media & telecom practice
– Areas of expertise: Legal insight, policy, technology, media, telecom, high-value TMT transactions
– Matthew Prince
– Role/Title: Co-founder and CEO of Cloudflare
– Areas of expertise: Internet infrastructure, cloud security, AI, web performance, networking
– Rajan Anandan
– Role/Title: Managing Director of Peak XV Partners (formerly Sequoia Capital India and Southeast Asia)
– Areas of expertise: Technology investment, AI, semiconductor ecosystem, startup ecosystem, digital sovereignty, venture capital
– Sources: (information derived from transcript)
– Audience
– Role/Title: Audience members (questioners)
– Areas of expertise: Varied; examples include
– Yuv – individual from Senegal [S12]
– Professor Charu – public administration scholar [S13]
– Dr. Nazar – (role not clearly specified) [S14]
Additional speakers:
– (None identified beyond those listed above)
Moderator Rahul Matthan opened the session by recalling Matthew Prince’s closing remark from his keynote – that the transformative power of artificial intelligence should not be confined to “a handful of companies in the same postal code” – and asked the panel to discuss what infrastructure would be needed to democratise AI given today’s technical and economic barriers [15-21][14].
Matthew Prince began by explaining why AI is currently hard and expensive. He noted that modern AI workloads require massive numbers of GPUs, a market dominated by NVIDIA, whose chips were originally designed for gaming consoles and later repurposed for Bitcoin mining rather than for AI, making them power-hungry and costly [23-30]. Prince also highlighted the scarcity of specialised talent – only a tiny global pool of engineers can design, train and operate large models, driving up salaries and limiting broader participation [31-33].
He then pointed to several forces that could erode these barriers. Enrolment in computer-science and AI-theory programmes has surged worldwide, expanding the talent pipeline [44-46]. The silicon market, after successive shortages, is now experiencing a “glut”, and a growing number of startups, incumbents and hyperscalers are entering GPU production, which should drive down per-unit compute costs [50-54]. As models become more of a commodity, Prince argued that the cost of building frontier-level specialised models could fall to “$10 million or less” within five years [58-62].
Rajan Anandan shifted the focus to India’s distinct AI strategy. He stressed that India does not need to chase artificial general intelligence; instead the priority is to develop “highly performant, extremely low-cost models” of one to two hundred billion parameters that can serve 1.4 billion people [75-81]. He cited the home-grown Sarvam models, which already deliver state-of-the-art speech-to-text and text-to-speech in Indic languages at a fraction of the cost of global competitors [82-84].
To sustain this trajectory, Rajan outlined the need for a sovereign AI stack. India accounts for roughly 20% of the world’s semiconductor designers [108-109] and now hosts 35-40 semiconductor startups ranging from low-power 20 nm system-on-chips to GPU designers such as Agrani and memory firms like C2I, both of which have received fresh investment [110-113]. He argued that a “sovereign AI stack” – covering chips, compute and data – is essential because “our friends are no longer friends, or sometimes they are, sometimes they aren’t” [113-116]; strategic alliances such as the recent partnership with Paxilica are part of this approach [117-119]. He also noted that Indian conglomerates Adani and Reliance announced a combined $100 billion commitment to AI infrastructure this week [120-124].
Rajan further emphasized voice-AI economics, noting that current Indian voice-AI costs about 3 rupees per minute, while Sarvam can already deliver sub-rupee rates; to achieve mass adoption the cost must fall to roughly 5-10 paisa per minute [239-245].
He highlighted a startup, Cloud Physician, which has amassed proprietary ICU data from tier-2/3 towns and used it to build a dozen specialised healthcare models now being commercialised in the U.S. [250-255]. Rajan also pointed out that India has collected less than 1 % of the data needed for AGI, underscoring the opportunity for data-collection startups and the importance of smart data regulation [260-267].
The panel then debated the open-source/open-weights dilemma. Rahul warned that releasing highly capable models as open weights could enable “malicious fine-tuning” and other security threats [165-174]. Prince responded that attempts to restrict access are often a “business strategy” masquerading as safety concerns, noting that incumbents may deliberately amplify AI-doom narratives to capture regulation and preserve market dominance [176-197], and he maintained that a more open ecosystem will ultimately prevail [198-204]. Rajan agreed that openness is “absolutely critical” for the ecosystem [217-221] but cautioned that the economics of training trillion-dollar models make fully open releases untenable, suggesting new economic pathways are needed to reconcile openness with investment recovery [222-232][217-227].
Regulation and safety were further explored. Prince suggested that regulators should focus on the behaviour of systems – applying criminal-code principles to AI rather than trying to control deterministic outputs [199-202]. He also warned that the “AI-doom” narrative may be a tool for regulatory capture, urging caution against over-regulation that could stifle competition [180-197]. Rahul compared AI oversight to nuclear regulation, proposing an “IAEA for AI” [208-212], while Prince reiterated that regulation should target system behaviour [199-202]. The discussion also covered AI’s dual security role: AI can amplify phishing, social-engineering and sophisticated breaches (e.g., the SalesLoft incident) [266-276], yet Cloudflare’s machine-learning-driven threat detection shows how AI can make the internet more secure, with Prince predicting that “in ten years we are more secure online than we are today” [277-286].
Data-access inequality was identified as another source of asymmetry. Prince warned that Google indexes far more of the web than competitors – roughly six pages to Microsoft’s one, three-and-a-half pages to OpenAI’s one, and ten pages to Anthropic’s one – giving it a decisive training advantage [345-353]. He argued that either regulators must force Google to share its index on equal terms [358-366] or the industry must devise mechanisms such as “pay-to-crawl” to level the playing field. He also cited DeepSeek’s breakthrough pruning algorithm, which efficiently discards large portions of the computation tree, allowing models to run on far fewer chips and illustrating how constraints can drive efficiency [380-387].
When the audience posed questions, several themes resurfaced. On trustworthiness, Prince replied that AI is already “more trustworthy than most humans”, citing self-driving cars that are statistically safer than 99.99% of human drivers, and suggested that trust should be measured against human performance rather than an idealised perfection [450-466][467-470]. On creator compensation, he explained that scarcity (e.g., blocking AI crawlers) forces publishers to negotiate higher licensing fees, as demonstrated by Reddit’s 7× higher payout compared with the New York Times [474-477]. Rajan highlighted the rapid growth of consumer-AI startups in India, noting that “India today has more consumer AI startups than the US” and that recent seed investments are targeting education, healthcare and entertainment for the country’s 900 million internet users [479-486].
In sum, the panel converged on three pillars for AI democratisation: (1) falling hardware costs and model commoditisation – exemplified by Prince’s $10 million frontier-model prediction and India’s emerging chip ecosystem; (2) open-source/open-weights as essential for a healthy ecosystem, tempered by the economics of trillion-dollar training runs; and (3) proportionate regulation that addresses data monopolies, security risks and the shift away from traffic-based monetisation. Unresolved issues include how to fund fully open models while preventing malicious fine-tuning, designing robust creator-attribution and compensation frameworks, finalising India’s sovereign AI-stack roadmap, and defining the next internet business model that rewards knowledge creation rather than mere traffic.
Very few individuals have done more to bring revolutionary and transformative technology into the hands of millions than Matthew and Rajan. Matthew Prince is the co-founder and CEO of Cloudflare, a World Economic Forum Technology Pioneer, and a Council on Foreign Relations member. He has degrees from Harvard, Chicago, and Trinity College, and co-created Project Honey Pot, the largest community tracking online fraud and abuse. Matthew’s founding mission for Cloudflare was to help build a better Internet, a goal that has become increasingly critical in the age of artificial intelligence. Rajan Anandan is one of India’s most influential technology leaders and investors, currently serving as Managing Director of Peak XV Partners, formerly Sequoia. At Peak XV, formerly Sequoia Capital India and Southeast Asia, he focuses on backing founders building transformative, technology-led businesses.
With decades of experience across entrepreneurship, investing, and global technology leadership, Rajan has played a pivotal role in shaping India’s startup and digital ecosystem. Orchestrating this conversation is Rahul Matthan, who brings the perfect blend of legal insight, policy depth, and the ability to ask the questions everyone else is thinking. Rahul is a board member and partner in TriLegal’s Bangalore office and heads their technology, media, and telecom practice. He has extensive experience advising on high-value TMT transactions in the country. He has worked with companies across sectors, from telecom majors to Internet and data service providers, offering advice on regulatory matters and operational issues. So please join me in welcoming three awesome leaders on stage, and with that, the stage is yours.
Thanks, Rahul.
And since I haven’t worked with you, I’m going to square that circle. Matthew, I just heard your keynote up in the big 3,000-seater hall, and you ended with a very powerful statement, which is that this wonderful AI technology should not be built by a handful of companies in the same postal code. And that, in many ways, seems to be the driving motivation for having this discussion. But it’s easier said than done, in that AI is a very big and complicated stack. And a lot of that stack actually involves complex hardware. And it’s hard, really, to move that hardware around the Internet. So if we are to democratize AI, and if we are to come up with the infrastructure construct that would democratize AI, what would that look like?
And what is your idea, your vision for how this would be, if not now, but sometime soon?
Yeah, so let’s talk first about why AI is hard and expensive today. So the first thing is AI requires lots and lots and lots of chips, largely produced today by one manufacturer, NVIDIA, that use a ton of power and are very, very expensive. They were never built to do this. If we’re totally honest, the NVIDIA chips were built to power gaming consoles, right? And then for a while to mine Bitcoin, and then magically to create a superintelligence. But if you had started with, let’s create the superintelligence, you would have designed those chips somewhat differently today.
That’s challenge one that keeps AI very, very hard. Challenge two is that it requires a real specialized set of knowledge. There’s a very small set of people in the world who know how to build these models and how to run these systems. And so you have to ask, why is that not something where everyone knows it? If you had known that you could specialize in this in school and literally make $100 million a year, we would all have studied AI, right? And yet if you go back just five years ago, the people who were studying AI were kind of the weirdos. Why was that the case? Well, because AI was one of these fields that kind of had promise in the 70s and had promise in the 80s and had promise in the 90s.
And then everyone was kind of like, you know what? We’re tired of this. And so the AI professor was kind of shunted off to the side. And so if those are the things that today make AI extremely expensive, the question is, are those things permanent states, or are they going to change? Well, we can measure one of them already. If you look at enrollment in computer science programs across the world, it is up dramatically in just the last two years, even though supposedly there’s no future for computer scientists. And then secondly, the enrollment specifically in AI theory courses is off the charts. Every university that used to sort of shutter their course is now standing it up and building it like crazy. And so I think that over time, we’re going to have more and more people who are able to do this. And so having to pay enormous salaries for those people, that’s probably not going to be the future. On the chip side, you know, if you have literally a company going from being an obscure gaming company to the most valuable company in the world, obviously a whole bunch of people are going to chase after that. And if you look at the history of silicon, anytime there has been a silicon shortage, it turns into a silicon glut over time. And with GPUs, it’s kind of been hit after hit after hit. I think what we’re seeing, at least, is that startups, as well as incumbent players, as well as the hyperscalers and other players, are getting involved.
There are so many people who are making this silicon that, no matter what, the price per unit of work done is going to come down. The other thing that I think is encouraging is that if we look at the actual AI models themselves, it doesn’t appear that any one company is running away with it. It’s sort of like Google gets a lead, and then Anthropic passes them, and then OpenAI passes them, and then someone else passes them, and then Google does it again, and it keeps leapfrogging. That, to me, suggests that the actual model making is more likely, in a steady state in the future, to be something like a commodity.
And if that’s the case, if the cost of creating the models is going to come down, if the models themselves are more commodities, I think we can’t assume that the literally hundreds of billions, if not trillions, of dollars that are going into building the leading AI companies today won’t come crashing down. And my prediction would be that you’ll be able to build models that will be on the frontier. They’ll be more specialized, but they’ll be on the frontier, for tens of millions of dollars in the not-so-distant future. I’ll put a date out there. In five years, you’ll be able to build a frontier-like model within a specialty for $10 million or less.
Rajan, about a year ago, one of these companies from one of these postal codes was here, and you asked a question, you know, what will it take for India to compete? And you were told it’s hopeless, don’t compete. And yet at this summit, we’ve come out with a model that’s, by all accounts (I haven’t yet played with it), competitive. So what Matthew is saying seems to be working out, but he’s putting a five-year timeline. I would argue that perhaps we could be more aggressive with that timeline. So what is your view on this as someone who is actually, you know, in India, working with some of these really smart people who are working under constraints?
but are yet putting out some fairly impressive models and interesting use cases. What’s life like at the other end of what people call the absolute frontier of these models? What is life at the other end of it, where there are many different applications, many different use cases, and many different types of models? Yeah.
Firstly, hi, everyone. Great to have all of you here. So I think the first thing is, look, Matthew, I don’t know whether you’ve been following a company called Sarvam. I think, firstly, it’s important that India is not trying to get to AGI. With 1.4 billion humans and a million Indians turning 18 every month, AGI is not the thing that we need. Our focus really is to uplift 1.4 billion Indians. And I think our ecosystem, our innovators, our government, our investors, our technologists, our engineers, are all of the view that that is the goal we have. But to really do that, you don’t need trillion or five-trillion-parameter models. What you need are highly performant, extremely low-cost models that are a billion parameters to maybe 100 or 200 billion parameters.
And actually, for the amount that you mentioned, we have launched 30 and 100 billion parameter models that are SOTA in Indic languages. In fact, I don’t know whether you know this, Matthew, but if you look at voice AI in Indic languages, Sarvam today is SOTA in both speech-to-text and text-to-speech and is a fraction of the cost of the global models, including the global leader in voice AI. So I think what you’re going to see is – and by the way, the reason Sarvam is able to do that is because of tremendous support from the government, but it’s not just Sarvam. There are 12 large and small language models. By the way, just a clarification: when Sarvam launched, I think somebody said India is really good at small language models. The last time I checked ChatGPT and Gemini, anything above 30 billion is actually a large language model. So we are actually in the large language model race. Just a clarification there. So basically, we have 12 companies – actually, 11 companies and BharatGen, which is part of IIT Bombay – that are building these models. I think this number goes to 15, 20 very, very quickly, and I would say well within this year, Matthew, in many, many things that India needs, right? We need to uplift 100 million farmers, and for that, we need to build models that work on feature phones, right, in local languages.
That’s done, right? That was actually launched on Wednesday, and so on. So I think that’s the first thing. And now to your other question about the true frontier – which is, you know, the frontier today, let’s call it a few trillion parameters, maybe it’ll go to 5 trillion, 10 trillion. Part of it is also the definition of frontier and, more importantly, the objective, right? If you define frontier that way, Indians are not going to be able to do it with this set of architectures. But what we are going to do, Matthew, goes to the point that you made, which is, look, LLMs are the most inefficient compute machines ever.
I mean, these are not efficient architectures, right? We believe that this is the beginning, not the end. We believe there are going to be many more architectures to come after transformers. And I think that is where the bets are going to be made. In fact, you know, Yann LeCun is here, and he sort of said that, look, this is not going to lead us to AGI. We don’t really want to get to AGI anyway, but even if you just look at where AI is going to go – so I’d say at the model layer this week, India entered the race, but we are going to play this race differently.
We’re not going to try to build trillion-parameter models, and we’ll do it at super, super low cost. Now, coming to the chip layer, that’s harder for India, but I don’t know whether you know this, Matthew: 20% of the world’s semiconductor designers are in India. Four years ago, we had no semiconductor startups. Today, we have about 35 to 40. They span the spectrum from low-power, call it 20 nanometer chips – these are all SoCs – all the way up. Actually, two weeks ago, we announced an investment in a GPU company, a very seasoned team out of Intel and AMD, a company called Agrani. Monday this week, we announced an investment in a company called C2I, which is going to focus on memory.
So we’ll see it even at the chip layer, because what is very clear to us, and I think to many in India, is we have to have the sovereign stack. Our friends are no longer friends, or sometimes they’re friends, or sometimes they’re not. And as India, we just need to have the sovereign stack. We are, of course, going to have alliances – and today, I think, a very important alliance was announced with Paxilica and so on. But we’ve got to actually have a sovereign stack. So whether it’s the chip layer or the compute layer – I think it’s great that both Adani and Reliance announced $100 billion investments into AI infra this week.
Where we have excelled is on the application layer. And what I can tell you is, at the application layer – I joined Google in 2011. At the time, India had 10 million connected smartphone users, no venture capital, and no unicorns. Today, we have 900 million smartphone users, we have a lot of venture capital – not enough capital overall, but enough venture capital – and we have 125 unicorns. At the application layer, I can confidently say, whether it’s consumer or enterprise, Indian companies will win. Because the traditional formats of consumer consumption – call it search, or now Gemini, ChatGPT, et cetera – will, in my view, probably scale to 200 or 300 million. They’re not going to scale to a billion Indians.
To scale to a billion Indians, you’ve got to have image, you’ve got to have video, you’ve got to have highly local language, and it’s got to be ultra, ultra low cost. So I do think we have a shot. I think what you’ve just described, we shipped this week. I think by the end of the year, we’re going to ship 100x more, because, Matthew, what’s happened in India is we’re trying to do things differently. Payments is an example, which I think the whole world knows about. And you’ll see that playing out in many other things. And I’ll end with this. Matthew, in 2015, India had two space tech
And I would just say, don’t sell yourself short. India may not need AGI, but India may still build AGI, right? And I think that the thing that actually might end up holding back the biggest AI companies is that they are so unconstrained by resources. If you look at what was the biggest innovation over the last two years that really drove AI forward, it wasn’t anything that Google or OpenAI or anyone else did. It was actually DeepSeek, and DeepSeek’s ability to say that within the constraints of the chips that they had access to, they had two incredible innovations: they would prune the tree more efficiently, and they’d be able to process that pruned tree much more quickly.
I wish DeepSeek had been an Indian company, not a Chinese company. It would have been even a little bit more constrained. I think there’s a shot there. But I actually think it’s those places with constraints – and I would not be scared away, if you’re an Indian AI company, by hearing about the hundreds of billions of dollars that the big U.S. AI companies are pouring in. That seems like an asset. That seems like an advantage that they have. But in some ways, it’s blinding them to what will be the real innovations that cause AI to become more efficient, that cause AI to become more scalable. And there is no way that the long-term solution to this is that you have to turn a mothballed nuclear power plant back on.
So we’re going to get more efficient, and I would bet that that efficiency comes from places just like this.
So if I can push back on both of you – not that I disagree with any of this – one of the things that DeepSeek did was come up with this reasoning model. I guess other people were working on it, but they did a really good job of doing reasoning really well and really powerfully. I mean, the real DeepSeek innovation was being able to say, you’ve got to build this giant tree if you’re building an AI model, and they were able to say, probabilistically, there’s a whole bunch of branches on the tree that we can ignore. Like, there’s a bunch of things in your life that have happened to you that your brain is just really good at forgetting about, whereas there are a few salient moments that have formed who you are as a person.
What DeepSeek did is do a better job of pruning that. The big US AI companies don’t have to do that because they can just say, well, let’s just buy another H200, right? And let’s just keep throwing more money at the problem. By having the constraints and the specialization – in this case, the memory constraints – it forced DeepSeek to come up with a better pruning algorithm, which allowed them to deliver AI at a much, much more efficient level. And I suspect Sarvam did something similar, because, I mean, I spoke to Pratyush, and he said that one of the things the big guys kept asking when they came around was, how do you do this with 15 people?
And it's certainly some of those constraints that are at work. I wanted to talk along similar lines on this idea of open source, or open weights; perhaps let's stick with open weights, because open source is a contested definition. Early on, there was a lot more open-weight stuff coming out. Of late, that's gone down. And the power of open-weight models, which is perhaps different from open source, is that you can actually tinker with the model and customize it to your use case. But increasingly we've seen a sort of drop-off, other than the Chinese models, Kimi and Qwen, which are still open-weight.
I wanted to just discuss among the two of you, perhaps from different perspectives, maybe the use-case perspective and the internet-infrastructure perspective, how important open weights are. Some of the backroom chatter I've been getting is that as these models get more performant, it becomes increasingly dangerous to put out highly performant models as open weights, because of something that OpenAI called malicious fine-tuning: as these models become better, it's easier to undo the fine-tuning guardrails that have been established so these models don't do bad things. And so that's why they won't be released. Now, I know the ecosystem needs open weights, because not everyone has the time and resources to do the training.
And someone perhaps just wants to take the pre-trained model and get going. But I'm also hearing from the other side that open weights have this fundamental security challenge. And I know we'll get to security separately, but just on open weights, how do we thread this needle between these two things?
Well, okay, I'm going to tell a story. I don't know that 100% of the story is right, but I think it adds up to something that approximates what's right. Let's imagine that over time you are one of these major model makers: you're OpenAI, you're Anthropic, you're Google. And you look at this and you say, huh, if we keep playing this out, then this is a commodity. And the only way that we win is if we restrict as many people from getting into the game as we possibly can. So how do you do that? One of the best ways is just to scare everyone:
that if everybody has this technology, the world is going to end. And so the next time you come across an AI doomer and they say, if everyone has this, the world is going to end, just keep pushing them. Just be like, okay, and then what happens? And then what happens? And then what happens? Basically, the scariest scenario is that these things can design very bad pathogens or other malicious biological vectors that could then get synthesized and spread around society. To which I say, well, then shouldn't we be regulating the synthesizers, not the technology that's out there? But, again, it gets to be very hand-wavy.
But if you think about it as a strategy, if you believe that these ultimately are commodities, then what you want to do is actually regulate: regulate them in order to make it so that yours is the only company that can be safely trusted to handle this. And I think that that's, somewhat cynically, a lot of the explanation for why the people that are building these horribly dangerous, scary things keep telling you how horribly dangerous and scary they are. I've never seen another industry do that. You don't see the automobile industry say, you know, this could plow through a crowd of people and be used in a mass-murder event, right?
That just doesn't make any sense. And so the only way that I can make sense of it is if, from a business perspective, it's actually an attempt at some sort of regulatory capture. So I pretty heavily discount what the risks are here. I tend to think that more open is going to win, and I tend to think that the Chinese approach right now is the smartest approach to take against what looks like this enormous money machine which the U.S. is creating. And so as India thinks about how it's going to regulate AI, I would be careful about listening to the AI doomers.
I would be especially careful about trying to regulate the output of what is fundamentally at least a pseudo-non-deterministic system. We have built machines that act like humans, and yet we think we can regulate them like machines. The better way to regulate them is actually more like humans: look to the criminal code, not the engineering code, to figure out what that regulation should look like. And so I am very much pro-open. I think we should think about what these risks and dangers are; we should definitely be testing and looking for those. But I tend to think they are somewhat overblown. And if you want to understand why, I would argue it is because it's a strategy to keep the people who are currently in the lead in the lead going forward.
I think you may be absolutely right, because on that big stage that you were on just a short while ago, yesterday, there was a call by one of these companies for an IAEA for AI, that it should be regulated like nuclear technology. And the other example I keep giving is that at the turn of the last century, people were walking around the streets and getting electrocuted, because electricity is highly dangerous. And yet we sit in this room, where the walls are literally buzzing with electricity, and we're completely safe. This is the nature of all technologies. But on the positive side, Rajan, a large number of the AI deployers that we have in India are relying on open source.
And if the open-source pipeline starts to diminish, where are they going to go? I mean, Sarvam can certainly deliver these models, but how important is it actually to the community of builders? AI and ChatGPT and all this is well and good, but it's really those applications, the voice applications, that people need. How dependent are they on open source? What can we do to continue to keep this open?
Look, firstly, as Matthew said, if you invest a trillion dollars, you can't give it away for free. It's as simple as that. It's just economics. So you can position it any which way you want, but fundamentally it's about economics and how you build a business, especially if you have to invest so much. Now, open source is absolutely critical. Llama is the most recent example, where the reality is that if you're going to launch the next state-of-the-art version of Llama, it'll be closed, because otherwise how are they going to monetize it? Especially when they're spending $80 billion, $100 billion.
There are other ways to make money, but even for them, $100 billion a year is kind of a lot. So anyway, coming back: look, it's super important to the ecosystem. I don't have the answer to that. I think in March you'll see one of the big companies make a massive announcement on their commitment to open source. But I think the only way you do this is there has to be a different path, because if you're going to have to invest hundreds of millions of dollars to build a new model, or billions, or tens of billions, that's not really open.
You can't keep those models open, right? So to your first question, and to Matthew's response, you really need to have a different way of doing this. Actually, by the way, if you look at voice AI, for instance, I'll give you some data. India has very low labor costs. If you look at the human cost of voice today in India, it's five rupees a minute to about 20 rupees a minute. Five rupees a minute is the lowest you can get; Amex would probably be 40 or 50 rupees a minute. Today's voice AI costs about three rupees a minute. So already, and that's why you're beginning to see voice AI really take off in India, right?
But even with today's SOTA models, you can get to maybe a rupee. Now the question is, even at a rupee, you're at one-fifth the cost of humans, so it's going to really take off. But if you want to make voice the primary medium through which 1.4 billion Indians will access AI, that's still too expensive. You've got to get it down to maybe five or ten paisa. And that's actually not about open source. It's about compute and the cost of inference. So if you ask me, open source is really, really important, but we have to find a way to get the cost of inference down.
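As a quick sanity check on the per-minute figures quoted here, a minimal sketch of the cost ratios (the rupee numbers are taken from the conversation as stated; 1 rupee = 100 paisa, and the figures are illustrative, not independent data):

```python
# Voice cost per minute in INR, as quoted in the conversation above.
HUMAN_LOW = 5.00    # cheapest human voice agent
AI_TODAY = 3.00     # today's voice AI
AI_SOTA = 1.00      # roughly achievable with today's SOTA models
TARGET = 0.05       # 5 paisa, the affordability target mentioned

def times_cheaper_than_human(cost_per_min: float) -> float:
    """How many times cheaper a channel is than the cheapest human agent."""
    return HUMAN_LOW / cost_per_min

print(f"AI today : {times_cheaper_than_human(AI_TODAY):.1f}x cheaper than humans")
print(f"SOTA     : {times_cheaper_than_human(AI_SOTA):.0f}x cheaper")
print(f"Target   : {times_cheaper_than_human(TARGET):.0f}x cheaper")
```

At five paisa a minute, voice AI would be roughly a hundredth of the cheapest human cost, which is the gap the speaker argues inference costs still have to close.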
Obviously model size and all of these things matter, and we can talk about that as well. But the short answer is: it's really important, but it is not clear to me how you do this, especially in the current game that we're in. Because anybody that wants to be at the frontier, the way the frontier is defined today, actually has to go out and invest. And honestly, I don't know how the Chinese are doing it, because it's a bit opaque exactly how much they're investing. You're right, it's kind of a hedge fund.
Which is basically what DeepSeek is, and they have this on the side. Meta is the fascinating question here, because it took me a really long time to understand Meta's strategy. Why are they doing all this VR? Why are they doing all of this AI? The lesson they learned was that if you are caught on the wrong side of a platform shift, you become beholden to some other platform. In the past they were on the web, and that was fine; social worked, and no one controlled the underlying platform. Then the platform shifted, and all of a sudden it was on mobile, and they were beholden to both Apple and Google. That put them on a back foot and really limited their business.
So they are desperate, whatever the next platform shift is, to stay in front of it. For a while that looked like it might be VR; that's less likely today, although never count these technologies out. The real next platform shift is almost certainly going to be AI. And so if you control the social graph, which is an unreplicable asset that they have, they need to make sure that whatever the next platform is, they control it, or at least have an equal seat at the table. So if they continue to invest in open source and you ask why they are spending so much money to do this?
It's to make sure that when the next platform shift comes, they aren't in the same back-footed position that they were with Apple and Google. That would be my analysis of Meta.
I don't want to comment either way, but there is a claim that AI is going to accelerate cyber attacks, because agentic swarms, et cetera, can do things that are much more dangerous. So what's the evidence that you have for this?
Yeah, so I think this is a long-term good news, short-term scary headline story. Let's start with the short-term scary headlines. There are going to be a whole bunch of scary headlines about bad things that AI does. There will be a story about an Indian family who lost all their money because they wired it to criminals who made it seem like their daughter had been kidnapped. We're already seeing the level and sophistication of phishing scams go through the roof. And so the bad guys are going to use AI to attack. The other thing that we're seeing, there was an example.
There was a company called Salesloft. It had a product called Drift, a piece of software that was connected into hundreds of thousands of Salesforce instances. Salesloft got breached by a Russian hacker. The Russian hacker didn't understand how Salesforce worked, so they kind of fumbled around for a really long time. Had they just used AI, which is what we're now seeing a lot of North Korean and Chinese hackers do, they would have been instantly knowledgeable on how to get as much information out of Salesforce as quickly as possible, and the breach could have been orders of magnitude worse. So those are the bad stories, and there's going to be real hardship and real pain caused by them.
The counter to that is that folks like us are fighting back. I was just with Nikesh from Palo Alto Networks; Jay from Zscaler is here. We're all using AI in our own systems to make them smart. In fact, at Cloudflare, we would never have described ourselves this way, but the whole theory of the company was: let's get as much Internet traffic flowing through a machine learning system as possible, to be able to predict where security threats are. In the same way that three years ago we all looked at ChatGPT and went, whoa, that's amazing, internally about three years ago was the first time the system said, bloop, here's a new threat that no human has ever identified before.
And that went from being something that happened once in the first 15 years of Cloudflare's history to happening on an incredibly regular basis, where the machine learning is able to win. So I think the good news is that the good guys will always have more data than the bad guys do, with the caveat that regulation could prevent us from using it for cybersecurity in various ways. But largely we're able to do that, and I think we will actually use AI to stay ahead of these threats. That's what we're seeing. It is going to require change in any part of your life where you today rely on what someone looks like or what they sound like to verify who they are and give them access to anything secure or confidential.
That's got to change. And so the simple thing that you should all do with your immediate family at your next holiday meal is decide on a family password. That seems silly, but I guarantee you at some point some hacker is going to call up and say, hey, your son or your dad or your grandmother needs money. And if you ask, hey, what's the family password, and they can't say, I don't know, Aardvark, you'll know that it's a scam, right? It's a simple thing that you can do, and it's going to be these simple things that get us through. And businesses have got to move away from, oh, the person looked right, so we let them in the door.
Like that can’t happen in the cyber world. And so we’re going to have to lock systems down. There are going to be some scary stories, but I would predict again that in 10 years, we are more secure online than we are today.
Rajan, I wanted to talk about data, because a lot of the conversation is around how the models that we have, Sarvam excepted, are models that are largely built in the West and are therefore Western systems. And I think that's a very important part of the question.
And I know there's something of a land grab going on for the data. So as far as the data companies in India are concerned, the companies that are actually hoovering up the data, annotating it, making it ready, what's the business model for them? Are they feeding this all back to that one pin code in the U.S., or what's their negotiating position? I ask this question because we have a lot of data in this country, but I know that there are countries in Africa where the deal is already done and the data is out the door. There are deals I know of for 25 years' worth of medical data out of Africa in exchange for setting up an EHR system, because that's a deal they've done.
And I was wondering whether we are thinking about this in a nuanced way. And then, Matthew, I know you've got some ideas on crawling; I'll come to you on that. What is actually happening on the ground with this?
I think the first thing is we don't have as many. I mean, there are initiatives and NGOs like AI4Bharat that are collecting data, but if you look at the leading global data companies, they're not Indian. India probably has a handful of startups that are actually in the, quote unquote, business of data for AI. So first and foremost, because these companies are global, you're absolutely right: all the data that Indians are generating is actually going to those few companies. Now, that being said, for Indian companies to actually keep the data here, we have to have model companies, right? Otherwise you have to sell it, because if you're in the data business you have to sell it to somebody. But I think the benefit we have is that, honestly, we've probably collected less than one percent of the data we actually need if you really want to get to AGI, if you look at physical intelligence and things like that.
And India really has a competitive advantage there. In fact, we've been looking for startups we could find and fund that would basically do all kinds of data collection for robotics and things like that. So that's number one. Number two, we're also beginning to see companies that are leveraging their proprietary data in a very interesting way. I'll give you one example. We have a company called Cloud Physician, an Indian startup. They run remote ICUs in tier-two and tier-three towns in India, and they've been doing that for four or five years. They've got this extraordinary amount of proprietary data that they've now used to build about a dozen specialized models in healthcare.
And now they're actually taking those models to market in the U.S. The kind of data that they have, which they collected over four or five years for a healthcare delivery business, has been very valuable. So in our portfolio we have a handful of companies in different spaces that are using data as an advantage to build a differentiated proposition, usually tied to some sort of domain model. But I do think, firstly, we need a lot more innovation around this. I'm surprised we don't have more companies that are actually trying to build businesses around India's data advantage.
And second, I do think we need to have some smart regulation. I don't know where the regulatory framework on data is; I think that's going to be super, super important. I do know that AI4Bharat, et cetera, are being quite thoughtful about who they share data with, which is great. So that's sort of where it is. But it's a huge opportunity for India. My real view is, look, all the data on the Internet is accessible to everybody; you just need large amounts of capital. Most of the data that we need to get to AGI we don't have yet.
And we have 1.4 billion people to create it.
But, Matthew, you wanted to intervene in that whole thing. You have something called, maybe you didn't call it this, maybe the media started to call it, pay-to-crawl, or you may have something more sophisticated like an AI audit. What's the idea behind that? Because that's also part of this democratization of AI, as I see it.
So firstly, correcting a little bit of a misconception: you can have all the money in the world and you still can't crawl the whole Internet. How much less of the Internet does Microsoft see than Google? Microsoft Bing has thrown a ton of money at it, and for every six pages that Google sees, Microsoft sees one. OpenAI knows how much of an advantage that is; for every 3.5 pages that Google sees, OpenAI sees one. But that means that two-thirds of the Internet is hidden even to the most sophisticated models. For Anthropic, it's almost 10 to 1. And so if you want to ask why Gemini just leapfrogged OpenAI, I don't think it's the chips, and I don't think it's the researchers.
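Taking the crawl ratios quoted here at face value (six Google pages per Bing page, 3.5 per OpenAI page, roughly ten per Anthropic page), the implied index coverage relative to Google works out as follows; this is just arithmetic on the stated ratios, not independent data:

```python
# Pages Google sees for every one page each crawler sees, as quoted above.
google_pages_per_page = {"Bing": 6.0, "OpenAI": 3.5, "Anthropic": 10.0}

for crawler, ratio in google_pages_per_page.items():
    coverage = 1.0 / ratio   # fraction of Google's index this crawler sees
    hidden = 1.0 - coverage  # fraction effectively invisible to it
    print(f"{crawler}: sees ~{coverage:.0%} of what Google sees, ~{hidden:.0%} hidden")
```

On these figures OpenAI sees about 29% of what Google sees, which is where the "two-thirds of the Internet is hidden" round number comes from.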
I actually think it's the data, and getting access to data is important. So if we want a level playing field, there's a real risk that Google is going to leverage the monopoly position it had indexing the Internet yesterday to win in the AI market tomorrow. That's something we're really concerned about, and I think we have to do one of two things. We either bring Google down and say they have to play by the same rules as all the other AI companies; that's something you could do from a regulatory perspective, and it's something the U.K. is looking into, Canada is looking into.
Australia is looking into. The alternative is: how do we give all the other AI companies the same access that Google has? And that, I think, is an opportunity to also solve some of the democratization challenges out there. One of the things I really worry about is that AI is going to disrupt the fundamental Internet business model. The fundamental Internet business model was: create content, drive traffic, and then sell things, subscriptions, or ads. That was it. I don't care if you're B2B or B2C, a media company or anything else. Create great content, drive traffic, sell things, subscriptions, or ads. AI doesn't work that way. Just take a media company. If AI scrapes your articles and takes them, let's say it's the New York Times, or the Times of India, or whatever it is, you can now go to your AI and say, summarize all the articles from the New York Times that would be of interest to me.
And you're going to read it there. Now, that's great for you as a user; it's a better user experience, so it's going to win. But now the Times of India isn't selling a subscription or an ad. Now the New York Times isn't getting anyone to click on an ad. And that's going to make it harder. To make clear how much harder it's gotten: ten years ago, for every two pages that Google scraped on the Internet, they sent you one unique visitor, and then you could monetize that visitor by selling them things, subscriptions, or ads. Today, what is it? It's 30 to 1 in Google's case, 50 to 1 in Bing's case. And that's the good news.
In OpenAI's case, it's 3,500 to 1. In Anthropic's case, it's half a million to 1. They take half a million pages for every one page they give back. So AI takes, but it doesn't always give back. And if the currency of the Internet has been traffic, that traffic is gone, and it's getting harder and harder to make money through the traditional business model of the Internet. So one of two things happens. One is the Internet just dies. But that's not going to happen, because the AI companies need the content, the information, the things that are out there. The alternative is a new business model emerges. So what happens?
And that's what's going to happen over the course of the next five years: a new business model is going to emerge for the Internet. Think how exciting that is. Think how rare new business models for something as grand and as large as the Internet are, how often they emerge. Almost never. And yet we're all going to live through it, and that's an incredible opportunity. I don't know quite what it is, but it has to be some way that the people who are creating the content and creating the value get compensated for the things that they are creating. The encouraging version of this is to think about the music industry. The entire music industry 22 years ago was valued at $8 billion, which is a lot of money, but it's not that much money, because that was the Beatles and the Rolling Stones and everything, right? Why was that? Because Napster and Grokster and Kazaa and all these things had commoditized it; they were basically taking the music, and musicians weren't getting paid for it anymore. What changed? One day Steve Jobs walked on stage and said, it's going to be 99 cents per song. iTunes launched almost 22 years ago to this day. That wasn't the business model that won, but at least it was a business model, and it started the conversation. It evolved into the business model that won, which is something closer to Spotify, which is now, I don't know what it is in India, but in the U.S. it's about ten dollars a month. And what's incredible is that Spotify last year sent over $12 billion to musicians, more than the entire music industry was worth 22 years ago. And that's just Spotify; there's Apple Music and Tidal and TikTok and YouTube and tons of others. There's more money going into music creation today than at any other time in human history, by an order of magnitude. Now, there are different winners and losers, and we can debate whether or not the right people are winning and the right people are losing, but there is more money going into music creation today than at any time in human history. And so as we figure out what the next business model of the Internet is going to be, let's try not to make it one that's worse.
Let's try to learn the lessons, because traffic was always a terrible proxy for quality. So let's actually find something that is a proxy for quality, and let's reward the people who are creating that. And the good news is, I think that's what everyone in this room wants. It's what Sam wants. It's what Dario wants. It's even what Elon probably wants. And that's the sort of thing that is actually going to drive a healthier Internet ecosystem. I actually think that a lot of what's wrong with the world today is that we have monetized traffic, which has meant that we have monetized making people emotional or angry enough to click on things. That's part of what's driven society apart in a lot of ways.
If instead what we monetize and what we reward is the creation of human knowledge, that's what the AI companies want, and that's what we all want. I think that's what can actually bring our society back together.
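Stepping back to the crawl-to-referral ratios quoted a moment ago (2 to 1 ten years ago, 30 to 1 for Google today, 50 to 1 for Bing, 3,500 to 1 for OpenAI, half a million to 1 for Anthropic), a minimal sketch of what they mean for a publisher, using only the numbers as stated in the conversation:

```python
# Pages crawled per unique visitor referred back, as quoted above.
crawl_to_referral = {
    "Google, ten years ago": 2,
    "Google, today": 30,
    "Bing, today": 50,
    "OpenAI": 3_500,
    "Anthropic": 500_000,
}

PAGES = 1_000_000  # consider one million pages crawled from a publisher

for source, ratio in crawl_to_referral.items():
    visitors = PAGES // ratio  # visitors sent back in exchange
    print(f"{source}: {visitors:,} visitors per {PAGES:,} pages crawled")
```

On these numbers, a million crawled pages that once earned a publisher 500,000 visits from Google earns two from Anthropic, which is the collapse in traffic-as-currency being described.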
I want to turn it over to the audience for questions; I don't want to be the only one asking them. Hands are going up. I'm going to take three questions at a time.
I like Indian audiences. They ask questions. Like you go to the UK and everyone just sits on their hands.
No, no, Indian audiences are very, very engaged; now we'll have to shut them up because we don't have time. I'm going to take this one, that one, and this one, right? So first up here. And I have a rule: a question, not a statement. So it has to end with your voice going up a little bit; then I know it's a question.
Sir, this is for you. You’ve touched upon a lot of interesting topics across domains. First of all, I remember you talking about the deterministic AI outcomes. Now AI having crossed the threshold –
Give me the question.
Okay. So what, in your view, would make AI trustworthy? Is it something to do with explainability, deterministic AI? And what would be the pathways?
Let me get a couple more, otherwise we won't get through. The lady at the back there. So one is, and I'll keep track of it: how do we make AI more trustworthy?
My question is for Matthew. You mentioned pay-to-crawl, and we see robots.txt getting ignored. My question for you is: what makes you believe that AI companies would be equally invested in creator-based compensation, when AI crawls the Internet and is not giving back attribution or compensation?
Trustworthy, and how do creators get paid? And attribution; I think she also wants attribution. This gentleman here.
Hi. My question is for Rajan. Rajan, you were explaining about the consumer and vertical parts of the application layer. So where are we in terms of investment from a venture capital point of view? How can we match the Y Combinator and a16z level of investment?
Great. So AI is already more trustworthy than most humans. The simple fact is that AI is a better driver than 99.99 percent of humans on the road today. Literally, since I started talking, within a kilometer of where we're sitting there was an accident between two cars. We just know that's happening; we're sitting in Delhi, right? You will not be able to find any news about that in any publication anywhere on Earth. And yet, if one of those two cars had been a self-driving car, it would have been front-page news around the world. The expectations for AI are too high. We have built a system that acts like humans, and we need to think of it as acting like humans.
The smartest CEO that I know in terms of doing this is Robin Vince at BNY Mellon. In their case, they actually have AI employees. The AI employees get an employee number. They get an email address. They get a quarterly review. They can get fired if they don’t do a good job. They can get promoted if they do a good job. I asked if there are any AIs that are supervising humans. He said, not yet, but it’s inevitable. That’s the way to think of it, right, is that they act like humans because they are like humans. And, again, we are all fallible, and we’re all going to make mistakes, but already we see in certain disciplines like driving, AI is better than human beings are.
In terms of getting paid, I think the empirical evidence is clear. Forget robots.txt; that's like a no-trespassing sign that anyone can ignore. When you actually block the AI agents, which is what we have done, then they come to the table. And so with big publishers like Condé Nast, Dotdash Meredith, and others, where starting July 1st we said all of the AI companies are blocked, they actually came to the table, deals got done, and publishers were able to get paid. In the case of Reddit, Reddit was willing to block everyone, including even Google. And as a result, the public number is that they got seven times as much for licensing the Reddit corpus as the New York Times did, even though the two corpuses are about the same size.
So again, I think that the first step in any market is having some level of scarcity. As long as you’re making it easy for anyone to take your data, then you’re not going to get paid for it.
Yeah, on the question of consumer AI: very few people know this, but India today has more consumer AI startups than the U.S. In fact, on Tuesday this week at the Pitchfest, just our firm, one firm, announced five new seed investments in AI companies. Four out of the five are consumer AI companies. And the reason we think this is going to explode is that we have 900 million Indians on the Internet, 850 million of them active every day, seven hours a day, and every space has potential for tremendous innovation. If you take education, online education hasn't been accessible to a large part of the population because it's just been too expensive, right?
But today with AI, you can have a 99-rupees-a-month plan with an AI tutor. In fact, the fastest growing AI education company in the world is in India, and nobody's really heard of it because, fortunately, these guys are all just staying in stealth and just building, which is very good. So I think it's a great time to be building in consumer AI. Actually, it's a great time to be building AI companies generally, but especially in consumer AI we're going to see some breakouts. Look, the world's leading consumer AI companies in education, healthcare, entertainment, et cetera, will be either here or in China. They won't be in the Western world, because we just need it more.
The one beautiful thing about this summit is there have been so many wonderful, rich, diverse conversations. This is one of them. Matthew, Rajan, thank you so much. Thank you all for being such a good audience. Thank you.
“Modern AI workloads require massive numbers of GPUs, and the GPU market is dominated by NVIDIA, whose chips are power‑hungry and costly.”
The knowledge base confirms that AI requires many chips and that the market is largely supplied by a single manufacturer, NVIDIA, highlighting the dominance and scarcity of GPUs [S1] and the NVIDIA monopoly [S86]. However, it does not mention the chips’ origins in gaming or Bitcoin mining, so that detail is not corroborated.
“Only a tiny global pool of engineers can design, train and operate large AI models, driving up salaries and limiting broader participation.”
The knowledge base confirms a very limited pool of experts worldwide, citing a tiny pool of specialists and estimating roughly 1,000 engineers capable of training extremely large models, which aligns with the report’s statement [S89] and [S51].
The discussion reveals substantial convergence on three fronts: (1) the expectation that AI hardware costs will fall, enabling cheaper, locally‑relevant models; (2) the shared belief that open‑source is vital but faces economic limits; (3) the consensus that regulation—both of data and AI safety—is essential. These agreements suggest a common strategic direction toward democratizing AI through cost reductions, open ecosystems, and thoughtful policy, especially for emerging markets like India.
High – Consensus on the need for cheaper hardware, the importance of open source, and regulatory frameworks implies that coordinated efforts among industry leaders, investors, and policymakers could accelerate inclusive AI deployment.
The discussion reveals several substantive disagreements: the timeline and economic path to affordable frontier AI, the role and sustainability of open‑source models, the need for a sovereign hardware stack versus reliance on market price declines, divergent views on the purpose and design of regulation, and contrasting positions on data ownership and creator compensation. While participants share the overarching goal of democratizing AI and enhancing security, they diverge sharply on how to achieve these outcomes.
High – The speakers often articulate opposing strategies (e.g., market‑driven commoditization vs sovereign stack, openness vs economic feasibility), indicating that consensus on policy and investment directions is limited. This fragmentation could slow coordinated action on AI democratization, regulation, and data governance, requiring further dialogue to align on shared objectives.
The discussion was steered by a series of pivotal remarks that moved it from abstract concerns about AI monopolies to concrete strategies for democratization. Matthew Prince’s technical and strategic insights about hardware bottlenecks, cost trajectories, data monopolies, and the political use of AI risk reframed the conversation around tangible levers for change. Rajan Anandan’s counter‑point—focusing on India’s unique needs, low‑cost models, sovereign stacks, and a thriving consumer AI ecosystem—shifted the dialogue from a global, US‑centric view to a regional, application‑driven perspective. Together, these comments opened new sub‑topics (efficiency‑driven innovation, open‑source vs security, new internet business models, and trustworthiness) and prompted the participants and audience to explore regulatory, economic, and societal implications, ultimately shaping a nuanced, forward‑looking debate on how AI can be made accessible, responsible, and beneficial at scale.
Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.