Harnessing Collective AI for India’s Social and Economic Development
20 Feb 2026 13:00h - 14:00h
Summary
The panel opened by likening the debate on AI for the collective good to an “Avengers” narrative, assigning each speaker a superhero persona to highlight diverse viewpoints on technology’s societal role and asking whether AI will become an ally or a destructive “snap.” [1][13-15]
Professor Seth argued that AI should shift from answering isolated queries to coordinating whole populations during events such as floods or tax filing, turning coordination itself into a form of intelligence; he emphasized that this requires new technologies, cross-sector partnerships, and proactive policy guidance rather than leaving development to market forces. [25-31][32-33]
Professor Nirav described many societal challenges as socio-technical multi-agent problems, noting that individual optimization often yields local maxima that fail to maximize social welfare; he cited ride-sharing and epidemic prevention as domains where a global optimum would better serve collective needs. [38-55][47-56]
Professor Manjunath explained that recommendation systems act as learning agents that continuously nudge users by shaping the utility functions they optimize, thereby altering preferences at scale; he pointed to the Facebook scandal documented in Sarah Wynn-Williams’s book as evidence of large-scale societal impact and argued that these systems function as powerful advertisements that make repeated exposure highly persuasive. [77-84][90-99][94-99]
Antaraa illustrated AI’s governance role through a Maharashtra project that gathered 380,000 citizen inputs via a chatbot and made this feedback mandatory for future law-making, showing how AI can amplify citizen voices while requiring transparent design to ensure equity. Kushe added that the greatest sustainable value of AI lies in personalized services that generate new revenue rather than simple cost-saving replacements, and the panel agreed that public education about AI is more effective than trying to block malicious use. [121-130][288-290][185-204][248-251] They concluded that if AI is built to enhance connectivity, give citizens a genuine voice, and be governed with transparency, tangible everyday improvements could be felt within five years. [301-311][312-321]
Keypoints
Major discussion points
– AI as a coordination tool for whole populations, not just individual assistants – Seth argues that future AI should help coordinate large groups (e.g., flood victims, taxpayers) and that this requires new technologies, partnerships, and a shift away from “AI-for-profit” pathways [24-32]. He later stresses that the biggest risk is widespread use without public understanding, not malicious intent [248-251].
– Multi-agent and socio-technical systems as a framework for solving collective problems – Nirav explains that many social challenges (ride-sharing, pandemics, etc.) are inherently socio-technical and can be modeled as interacting agents, allowing a move from local to global optima and better social welfare [36-55].
– Recommendation systems and algorithmic nudging shape preferences and can amplify bias – Manjunath describes how learning agents infer user utility functions, subtly steer choices, and can dramatically alter preferences over time, effectively acting as powerful advertisements [77-96].
– AI in governance can both empower citizens and reinforce institutional power – Antaraa details a large-scale citizen-feedback chatbot used by Maharashtra, showing how AI can amplify voices when designed transparently [121-130]; she later argues that AI should shift power toward citizens by reducing information asymmetry [237]. Manjunath counters that institutions, with their resources, are more likely to capture AI benefits [294-297].
– Impact of AI on work: replacement vs. reshaping and value creation – Kushe highlights that simple task automation often fails to sustain cost savings, whereas AI that enables uniquely human-scale personalization unlocks far greater value (e.g., revenue uplift) [185-202]. In the rapid-fire segment he predicts AI will primarily reshape jobs rather than merely replace them [257].
Overall purpose / goal of the discussion
The panel, framed through an “Avengers” metaphor, aimed to explore how AI can be harnessed for the collective good, by improving coordination, fairness, and citizen participation, while identifying technical, ethical, and governance challenges that must be addressed to prevent harm and ensure equitable outcomes [13-15][20-22].
Overall tone and its evolution
– Opening (0:00-4:00): Playful, optimistic, and metaphor-rich, setting a collaborative mood.
– Middle (4:00-22:00): Shifts to analytical and cautionary as experts present technical concepts (population-level AI, multi-agent models) and raise concerns about algorithmic nudging, government over-reach, and unintended resource consumption [58-68][133-160].
– Rapid-fire & audience Q&A (22:00-45:00): Becomes pragmatic and solution-focused, with concise answers, concrete examples (Maharashtra chatbot, job-impact figures), and a mix of optimism about new value creation and realism about regulatory gaps [121-130][185-202][237][294-297].
– Closing (45:00-53:00): Returns to a grateful, hopeful tone, thanking participants and emphasizing the need for continued collaboration [501-506].
Thus, the conversation moves from an enthusiastic framing to a nuanced, sometimes uneasy examination of AI’s societal role, ending on a constructive, forward-looking note.
Speakers
– Janhavi – Moderator of the panel; serves as the voice asking the questions.
– Professor Seth Bullock – Professor; expertise in collective AI, coordination, societal systems, and shared values [S3][S4].
– Professor Manjunath – Professor; focuses on recommendation systems, algorithmic bias, and AI ethics [S5][S6].
– Professor Nirav Ajmeri – Professor at the University of Bristol; specializes in multi-agent systems and socio-technical networks [S21].
– Antaraa Vasudev – Founder/Leader at Civis (NGO); works on civic technology, AI for citizen engagement and governance [S13][S14].
– Kushe Bahl – Senior leader (Partner) at McKinsey; leads the McKinsey Digital and McKinsey Analytics practices in India; expertise in AI implementation, consulting, and scaling AI for business [S28][S29].
– Audience Member 1 – Founder of Corral Inc. [S10].
– Audience Member 2 – Participant from Germany (group affiliation). [S25].
– Audience Member 3 – Audience participant (no specific role mentioned). [S1].
– Audience Member 4 – Intellectual property and business lawyer. [S23].
– Audience Member 5 – Audience participant (no specific role mentioned). [S7].
– Speaker 3 – Unspecified speaker (role/title not provided). [S15].
Opening & framing – The panel opened with a playful “Avengers” metaphor, positioning each speaker as a superhero to illustrate the diversity of perspectives on artificial intelligence (AI) for the collective good, and the moderator asked whether AI would become an ally or the “great snap” that could threaten society [1][13-15].
Population-scale AI – Professor Seth Bullock argued that AI should move beyond answering isolated queries and become a tool for population-scale coordination (i.e., coordinating whole groups of people rather than individual queries). He described intelligence as the ability to orchestrate whole communities, such as flood victims, patients with a common disease, or taxpayers, through shared knowledge and coordinated action [24-33]. To realise this, he called for new technologies, delivery models, and cross-sector partnerships among researchers, private firms, non-profits and governments, warning that reliance on “path of least resistance” commercial tools would be insufficient [31-33].
Multi-agent socio-technical systems – Professor Nirav Ajmeri framed many societal challenges as socio-technical multi-agent systems, explaining that intelligence emerges from the interaction of human and technical agents and that optimisation for individual users often yields local maxima that do not serve overall social welfare [36-55]. Using ride-sharing and pandemic-prevention examples, he showed how a global optimum derived from multi-agent modelling could improve collective outcomes and fairness [47-56].
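The local-versus-global contrast described above can be made concrete with a toy ride-hailing assignment. The sketch below is illustrative only (the wait-time numbers are invented, not from the panel): riders who greedily grab the nearest free driver can collectively end up worse off than under a coordinator that minimizes total wait across the whole population.

```python
from itertools import permutations

# Wait time (minutes) from each rider to each driver; values are made up.
wait = [
    [1, 2, 6],   # rider 0's wait for drivers 0, 1, 2
    [1, 5, 6],   # rider 1
    [1, 6, 2],   # rider 2
]

def greedy_assignment(wait):
    """Riders act one at a time, each grabbing the nearest still-free
    driver: the 'local optimum for each of us' behaviour."""
    free = set(range(len(wait[0])))
    total = 0
    for rider_costs in wait:
        best = min(free, key=lambda d: rider_costs[d])
        total += rider_costs[best]
        free.remove(best)
    return total

def optimal_assignment(wait):
    """A coordinator searches all rider-to-driver assignments and
    minimizes total wait across the population (social welfare)."""
    n = len(wait)
    return min(sum(wait[r][d] for r, d in enumerate(p))
               for p in permutations(range(n)))

print(greedy_assignment(wait))   # 8: rider 0 takes the shared nearest driver
print(optimal_assignment(wait))  # 5: coordinated assignment beats greedy
```

Here every rider individually prefers driver 0, so greedy choices leave later riders with expensive fallbacks; the coordinated assignment accepts a slightly worse outcome for rider 0 and lowers the total, which is the panel's point about a global optimum mapping to social welfare.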
Recommendation systems & nudging – Professor Manjunath characterised recommendation systems as learning agents that infer users’ utility functions and continuously nudge preferences. He noted that the utility functions are set by platform owners, not users, allowing platforms to reshape tastes over time and act as powerful, personalised advertisements [77-84][94-99]. He cited the Facebook scandal documented in Sarah Wynn-Williams’s book as evidence of large-scale societal impact when recommendation engines “go berserk” [90-92].
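The mechanism described here, that the platform rather than the user chooses the utility function being optimized, can be sketched in a few lines. The simulation below is a hypothetical toy model (the margins, initial preferences, and the mere-exposure update rule are all assumptions for illustration, not any real platform's algorithm): a recommender ranks items by its own expected payoff, and repeated exposure gradually flips the user's taste toward the high-margin item.

```python
# Toy model of algorithmic nudging. All numbers and the update rule are
# invented for illustration.
margin = {"item_a": 1.0, "item_b": 5.0}        # platform's payoff per engagement
preference = {"item_a": 0.8, "item_b": 0.2}    # user's initial taste (click propensity)

ALPHA = 0.05  # mere-exposure effect: each showing shifts taste a little

def recommend(preference):
    # The platform ranks by ITS utility (margin times estimated engagement),
    # not by what the user currently likes most.
    return max(margin, key=lambda item: margin[item] * preference[item])

for _ in range(200):
    shown = recommend(preference)
    # Repeated exposure nudges the user's taste toward whatever is shown.
    for item in preference:
        target = 1.0 if item == shown else 0.0
        preference[item] += ALPHA * (target - preference[item])

# The user started out strongly preferring item_a, yet after 200 rounds
# the high-margin item_b dominates their preferences.
print(preference)
```

Starting from an 80/20 preference for item_a, the platform's ranking already favours item_b (5.0 × 0.2 > 1.0 × 0.8), so item_b is shown every round and the user's preferences drift to follow it, a minimal version of "start with a set of preferences, end the time horizon with dramatically different ones".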
AI-enabled governance example – Antaraa Vasudev presented a concrete example from Maharashtra, where a simple chatbot collected 380,000 citizen inputs (voice notes, text, drawings) and fed them into the policy-making pipeline, making citizen feedback a mandatory consideration for future laws [121-130]. She stressed that such systems must be transparent, accessible and equitable, and argued that AI can reduce information asymmetry to close power gaps [109-115][237]. Later she expanded the vision, noting that disaggregation of civic-tech platforms can enable decentralized control and broader citizen participation [380-386].
Rapid-fire exchange – In a brief rapid-fire segment, Antaraa asserted that AI shifts power toward citizens by amplifying their voices [260-262], while Professor Manjunath countered that institutions with greater resources are more likely to capture AI benefits, making it difficult for citizens to compete [280-283]. He also warned that algorithms can hide bias, a point raised during the same exchange [280-283]. Professor Bullock warned about the next wave of agentic AI, describing purposive agents that communicate with each other and could generate cascades of resource consumption from trivial requests (e.g., a picture of a dog on a skateboard), disadvantaging other users unless social responsibility is embedded in their design [58-68].
Role of government – Professor Manjunath critiqued heavy-handed state direction, citing India’s CDOT project and Japan’s Fifth-Generation computing initiative as examples where governments, as generalists, failed to keep pace with rapid technological change [139-152][155-162]. He advocated an enabling and monitoring stance rather than micromanagement, a view echoed by Antaraa’s call for transparent frameworks and by an audience member who cited recent bans on social-media use for minors in Spain and Australia as useful early guardrails [109-115][409-416].
Employment impact – Kushe Bahl distinguished between simple task replacement and value-creating personalization. He argued that replacing routine tasks rarely yields sustainable savings, whereas AI-driven personalised services-such as recommendation engines that boost revenue by up to ten percent-unlock far greater economic value and reshape rather than merely replace jobs [185-202][257].
Education concerns – The audience’s rapid-fire reactions (excitement, anxiety, FOMO, etc.) were tallied by the moderator, highlighting mixed emotions about AI’s role in learning [320-327]. Bahl warned that AI-generated content, while correct, often lacks “soul” and inspiration, making it unsuitable for deep learning [375-378]. Manjunath shared a classroom example where a student used ChatGPT to fabricate data, illustrating how instant AI feedback can bypass step-by-step learning and undermine understanding [490-496].
Intellectual-property & consent – Professor Bullock noted that generative models are already trained on copyrighted material without consent and that legal battles over musicians’ and artists’ rights are just beginning [470-478]. He proposed the development of consent-based data ecosystems in which participants voluntarily share information for collective benefit [476-478].
Regulatory experiments – An unnamed speaker highlighted early regulatory experiments restricting AI-enabled platforms for children, arguing that such steps, though imperfect, signal accountability and may influence industry behaviour [409-416]. Manjunath reinforced the need for agile, enabling regulation rather than rigid micromanagement, noting the difficulty of imposing guardrails on fast-moving technology [139-152][155-162].
Audience questions – When asked about AI’s impact on young minds, Professor Seth Bullock responded that education systems must adapt to foster critical thinking alongside AI tools [340-345]; Kushe Bahl added that over-reliance on AI can erode foundational skills [350-354]. A question on regulation of AI in education elicited Manjunath’s answer that standards should be flexible, outcome-oriented, and regularly updated [470-476].
Closing visions – Professor Bullock envisioned AI delivering a greater sense of connection by breaking down language, expertise and distance barriers, enabling richer citizen-government interactions that go beyond simple voting [301-311]. Kushe Bahl offered a concrete “unicorn-scale” impact scenario: if AI could raise the earnings of India’s 150 million self-employed workers by just ₹600 each, the aggregate effect would be transformative [312-321]. Antaraa reiterated that disaggregated, transparent AI systems can broaden access to governance, while Professor Ajmeri highlighted the potential for collective decision-making at scale, and Professor Manjunath warned that the quality of AI-generated output must be critically assessed [380-386][470-476].
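The closing figure is easy to sanity-check. The arithmetic below simply multiplies the numbers cited in the session; the rupee-to-dollar rate is an assumption of this summary, not from the panel, but it shows why the aggregate lands at roughly "unicorn scale".

```python
# Back-of-the-envelope check on the closing "unicorn-scale" scenario.
self_employed = 150_000_000   # self-employed workers cited in the session
uplift_inr = 600              # extra rupees per worker (period unspecified in the talk)

total_inr = self_employed * uplift_inr
print(f"{total_inr:,} INR")   # 90,000,000,000 INR, i.e. 9,000 crore

# Rough conversion; ~83 INR/USD is an assumed rate, not from the panel.
total_usd = total_inr / 83
print(f"~{total_usd / 1e9:.2f} billion USD")
```

A ₹600 uplift per worker, aggregated over 150 million workers, is on the order of a billion US dollars, which is the sense in which the scenario is "unicorn-scale".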
sci-fi movies that we grew up watching, and what it primarily reminds me of, in specific terms, is the Avengers. The Avengers are the superheroes trying to, you know, save the world and decide how one can do that, and they all have very different strengths. So I was wondering: if all our panelists were superheroes, who would they be? Introducing our panelists, I have our first Avenger, Captain America: principled, steady under pressure, obsessed with doing the right thing even when it’s unpopular. Professor Seth is exactly that, and it reminds me of the lens that he brings in: he studies how societies hold together, how coordination succeeds or fails, and why systems need shared values as much as intelligence. Next we have Spider-Man. Spider-Man’s strength isn’t brute force; it’s his ability to navigate through complex webs, adapt quickly, and see connections that others miss.
Professor Nirav thinks the same way. At the University of Bristol, his work focuses on multi-agent systems, because societies, like Spider-Man, are all about networks. Antaraa Vasudev reminds me of Captain Marvel: operating at scale, moving across institutions, pushing boundaries. Through her NGO Civis, she uses AI to amplify citizen voices and reshape how power flows between governments and people. And of course we have Iron Man, who is obsessed with execution, iteration, and making ideas work in the real world. Mr. Bahl is our Iron Man, focused on execution, scale, and impact in the real economy. He leads the McKinsey Digital and McKinsey Analytics practices in India. Last but not the least, no team is complete without Bruce Banner.
Deeply aware of the challenges that we face and of AI’s raw power, and focused on how to control it before it controls us, Professor Manjunath’s work reminds us that intelligence at scale can cause damage if we don’t fully understand its consequences. My name is Janhavi, and today I’m embodying Jarvis, except instead of being the one answering the questions, I’m the voice asking them. Every Avengers story has a Thanos. The real question is whether AI becomes our ally or the great snap that we didn’t see coming. So when we talk about AI for collective good, we’re not just talking about smarter apps; we’re talking about systems that influence how people live, work and participate in society. Before we start, I would request all my panelists to just stand up for a quick photo op.
So, quick show of hands from the audience. How many of you feel that technology today is only with those who have power or resources or information, that technology has been reserved for the elite few? Do we have a show of hands in the house by any chance? Okay, clearly we don’t really have an opinion as such over here. But moving on. Professor Seth, when we look at society, you know, governments, markets or online platforms, we often assume that problems exist because we don’t have enough intelligence or data. Your work suggests something a little bit deeper: that perhaps failures come from how decisions interact at scale. From a systems perspective, do you think our biggest societal problems are intelligence problems?
Thanks a lot. So it’s great to be here in India. I think this topic is extremely relevant to both the UK, where I’m working, and India. And I think the answer is that coordination is intelligence in the situation that we’re interested in. So I guess we’re used to situations now where we interact with an AI as an individual: one person asks the AI a question and gets one answer. But really there’s the potential for us to develop AI systems that are designed to support a whole population at once. A population of people that are affected by a flood, a population of people that are all coping with the same disease or medical condition, a population of people that are all trying to get taxis to and from a summit.
So instead of AI answering individual questions, AI can help coordinate those people, share intelligence, share their knowledge, and achieve better outcomes. And I think that’s quite a different way of framing AI than many of the systems that we’re hearing about, and it requires different technologies and different ways of delivering that to people, different ways of engaging with populations. So I think that’s something that can only really be achieved by partnerships between researchers and companies and not-for-profit organizations and governments, and requires probably interventions in the way that we promote AI rather than letting the sort of path of least resistance develop AI commercial tools. I think there are opportunities to really engage with the idea of making AI for populations.
Wonderful. Professor Nirav, you’re also from the University of Bristol, and your work focuses on multi-agent systems, where basically intelligence emerges from all these entities interacting with one another. What kind of social problems are best suited for these multi-agent approaches?
Thanks, Janhavi. Good question. And I think partly Seth already answered what multi-agent systems could do. All the problems that we’re thinking of over here are, if you’re understanding those problems, socio-technical in nature. There are social entities, including people and organizations, which interact. All of us also use some technical tools; these could be intelligent agents, or applications and software that we use. And all of these combined together help us. So all problems, or all domains, are socio-technical in nature, and multi-agent systems can inherently encapsulate socio-technical systems. That is how I would look at it. If you’re talking about, say, ride sharing, for instance, or hailing a ride, the current system could be optimizing only for me, right?
And then what we end up with could be local maxima. If we are optimizing for each one of us, we are finding a local optimum for each of us, but we may not be finding a global optimum, and the global optimum would map to social welfare. What does social welfare mean? Does it mean just maximizing experience for everybody, or do we mean a satisfactory experience? So I think any problem that we think about, say epidemic or pandemic prevention, making sure that resources are allocated properly, all of that would be multi-agent in nature.
Interesting. Professor Seth, do you have anything that you’d like to add on to that?
So, yeah, I think we’ve heard a little bit from some AI leaders about a next wave of AI that will be agentic, where we won’t just be interacting with ChatGPT as a monolith. We will be interacting with an agent that has purposive aims and is helping us achieve tasks, and it might do that by communicating with other agents. Whenever we interact with AI, we would in fact be interacting with a population of AIs that are sending each other information and tasking each other with different jobs to do. And actually, it might not be clear whether one of those agents is artificial or a person. And so, if we enter into that sort of world, I think we have to really understand whether those agents are interacting with each other in a way that is likely to advantage the community of users, because the amount of resources that will be consumed by these populations of agents, and the potential for them to interact in ways that have unforeseen consequences for other people, are going to ramify.
When we do things manually, we can really only hold so many interactions with other people at once, and so we’re limited in scale; one request does not create a cascade of other requests in the system. But as we move to artificial systems, that scaling will rapidly increase, and potentially one trivial request, me asking a computer to make a picture of a dog riding a skateboard, could create a whole wave of different agentic interactions that consume loads of resource and, depending on what I’ve asked for, disadvantage other people. So embedding some kind of social responsibility into those agents, some appreciation for how their behavior impacts other agents in the system, I think is going to be imperative. Otherwise, we end up with systems that create conflict and contestation for resources.
Interesting. Whenever I’m on Instagram or Facebook, and let’s say I’m talking to my friends and really thinking about buying this Dyson or a particular product, it’s always weird to me how the next time I open the app, it’s almost like the app has heard me, and I start seeing the ads for those exact things, even if I’ve not searched for them; I’ve just talked about them to someone. Has anybody here also experienced the same thing, a quick show of hands, where you feel that maybe the choices that we make, are they really our choices, or are we being nudged by an algo somewhere? So, Professor Manjunath, your work focuses so much on recommendation systems, and we often hear that these algos are just tools.
Perhaps your research suggests that they actively shape what people see, buy, believe. How much of human behavior today is genuinely chosen by us and how much is subtly nudged by these algorithms?
Yeah, recommendation systems and the way they shape many of our feelings and our attitudes and our habits has essentially been a significant concern for me for a while. One of the things that you have to think about when you look at recommendation systems is that they’re essentially learning agents. They want to learn your preferences, your likes, your dislikes, et cetera. And when they’re trying to do that learning, they do things: they give you options, different kinds of options, and then see how you react. So the first way in which the interaction between you and the learning system happens is this: they are showing you a variety of things and observing the way you react.
Your reaction is usually captured in some kind of utility function, something that the algorithm believes is positive for whoever is designing that algorithm. Now, what exactly that utility function is essentially determines what gets recommended to you in the future and what the system learns about you. And there is no such thing as the right utility function; every organization will figure out what they want for themselves. We have actually done several mathematical models of this and shown that, depending on the kind of learning algorithm that I have, and I am assuming a benign recommendation system here,
where I start off with a set of preferences, by the end of the day, or over a certain time horizon, my preferences can be dramatically different. So there is a certain nudge that is steadily pushed by these algorithms, and the direction in which the nudge is pushed depends on the kind of algorithms they use and the kind of what we call utility functions that they use: what exactly they are trying to optimize for themselves. And if you look at various analyses of many of these, especially Facebook’s algorithms, there is a very famous book that came out recently by somebody called Sarah Wynn-Williams, who was an insider. You can see the impact of what that had on some sections of some society elsewhere when the whole recommendation system went berserk.
So there is definitely a huge impact on the population’s preferences by the recommendation systems. And to give you a quick understanding of that: recommendation systems essentially are advertisements, and advertisements definitely shape our preferences; if you see something more often, you will start thinking about it, and so on. The difference, at least in my opinion, between the advertisements that you see on the street and the advertisement corresponding to a recommendation engine is that you are significantly more receptive. You are looking to do something, and when you are trying to look for something to do, if the recommendation pushes you in a certain direction, you are naturally going to go there.
So the impact of recommendation systems on the population’s preference, in my opinion, is spectacularly large.
Wow. That’s quite a lot to actually digest and hear. I really wonder how much of my personality is my own at this point. Antaraa, from your work in civic engagement: when AI enters governance, is it primarily to help citizens be heard, or is it helping governments manage complexity? And where do citizens struggle the most when technology becomes the interface between them and the government?
Thank you for that question. Just want to make sure that everyone can hear me. Thank you. Some problems, like on-stage mics, AI cannot solve. Thank you for that lovely question, and it’s lovely being here with all of you today. Janhavi, to your point, I think AI currently is being used in both use cases. It’s allowing us to engage with citizens who perhaps have little or limited knowledge about law and policy, to help them clarify doubts, to let them air out their grievances, and to let them actually understand the frameworks of policy and law that govern their lives.
But in addition to that, it is also being used in a very large way for optimization. In a country of India’s size and diversity, there is perhaps no other way to tackle the circumstances that governance has to deal with. So better than resisting that is to actually build strong and robust frameworks for how governance can utilize AI, frameworks that are put out in a manner which is transparent, accessible, and has certain equity built in, which is really what the panel is also discussing today. And once you have that, to know that these optimization solutions can perhaps be built by AI rather than being citizen-led. So at Civis, we’ve actually been working on gathering a lot more public feedback on draft laws and policies using AI.
And again, we see optimization in both ends, but very, very mindful of the fact that the frameworks that govern that level of optimization are what needs to be designed before perhaps even we race to the next model.
Got it. Can you share some examples of the kind of laws that have been impacted, or the kind of work that you’ve done? Have you worked with different state governments, where citizens of that particular state have been able to engage with the government about a certain law or practice? Thank you.
Absolutely. So I’ll share one example from recent work with the government of Maharashtra that Civis led. The government of Maharashtra actually undertook a very ambitious mission of trying to understand how the next 22 years of the state can be governed by citizens’ voice. Now, this is something which is honestly quite remarkable on their part. What Civis was able to do is build out a very easy-to-use chatbot, wherein you could send in a voice note, you could send in text messages, or you could even, we had people send in drawings, letters that they had personally written to the Chief Minister, and other things. Civis aggregated all of that feedback. That was almost 3.8 lakh citizen responses from 37 districts across Maharashtra.
And that was aggregated, sorted through, and then shared with the government as well. The Viksit Maharashtra report, as it’s called, is now publicly available; the government of Maharashtra has put it out on their own website as well. But in addition to that, what’s been really interesting about it is that they have said that every law that’s going to come out in the state for the coming years has to, in some way, factor in what citizens are saying about that problem area or that district for which the law is being made. And you can only do that if you’re able to actually engage at scale. And I think that’s the beauty of what that entire project showed.
Absolutely. Professor, how do you feel about the government in terms of what approach should they be taking when it comes to AI and technology?
Yeah. One of the fears that I have when the government gets involved in technology development is that they want to start controlling the direction; they want to dictate what is to be done at a very micromanaging kind of level. A colleague of mine and I recently had an op-ed, on Tuesday I think it was, in the Financial Express, where we looked at history, at successful and spectacularly unsuccessful involvements of the government when it wanted to direct technology. So I’ll just give you two quick examples. In India, about 40 years ago, there was something called CDOT. It developed some spectacular technology when it was left alone.
The government started to direct it and micromanage the flow of technology. Many of you probably don’t even know CDOT; they don’t even come to the IIT Bombay campus, for example, for recruitment. That’s just one example. If you look at Japan, to give you another spectacularly unsuccessful story, many of you are too young to know about something called the fifth-generation computing systems that they wanted to start. The AI boom that we see today was originally planned to be launched in Japan in the 1980s. There was a huge project that the government wanted to micromanage, developing native hardware for AI, and everybody thought they would be successful. It was a spectacular failure. The failure essentially stemmed from the fact that the government was directing everything.
Governments are generalists. People who run governments are generalists. They are brilliant people; they know society; they understand administration. But they don’t understand technology, especially a technology that is moving too damn fast and has a very large surface area. They cannot control it. So it is best that they just enable, and let others, the people on the ground, people with a track record and people who want to take risks, manage it. They should be enablers. They should also be monitors, nudging it in a certain direction and making sure bad things don’t happen. But that’s a very hard task. So the biggest role that the government should have is to just enable and step away.
Just to give you one positive example, NPCI in India is a spectacular case of the government starting something and letting the private sector and the technologists handle it. In the US, the internet was exactly that: it was just a vision that somebody had, they said let's build this, and the technology got built. That's the way I would think the government should handle it, but we'll have to see how that goes.
So just a quick question for the audience. You guys can shout the answers out loud. What emotions come to your mind when you think about AI? Are we feeling excitement? Are we feeling anxiety? Are we feeling FOMO? What are we feeling, guys? Curiosity. Dangerous, somebody said. What else? Definitely opportunity. Opportunity. The man over there? Confusion. Confusion. Anything else? Responsibility. Responsibility, fantastic. Great. So Mr. Bahl, this question is for you. There's a lot of anxiety, and a little bit of excitement as well, about AI replacing jobs, especially in India's tech and services sector. From your experience working with different companies, where is AI genuinely replacing humans, and where is it actually creating new forms of value and roles?
Yeah, that's a great question. Thank you. Let me try to give you the very brief answer, because I could talk about this for a long time. There is a lot of focus on AI being used to replace humans in particular operations. When you have an AI taking a call center call, that's the simplest example. And the math, the way it works, is that if you're spending 100 rupees on something, you can save roughly 40% of that by replacing it with AI, with the current economics of the technology. And obviously, if you're in a high-cost geography, you can save more.
Even in a country like India, you can save that much. What we have found, though, is that in most of the cases where you do this simple replacement of a human with AI, the cost reduction doesn't really sustain. There's a famous example of Klarna in Europe, where they had to bring back a lot of the call center costs because they had to rehire some of the senior customer support people; a lot of the conversations were not going well and they were losing customer satisfaction. The same thing with IT: you can replace a lot of developers, but then people will come back with more projects and there will be more things to be done.
The real value unlock, which is sustaining, is actually when you get AI to do something which humans can't do, or are not able to do because it's so time-consuming and so difficult. For instance, a genuinely personalized customer engagement engine, using the kind of recommendation system that he was talking about, which actually engages in a personalized way with every customer I have as a company, or every entity an organization is dealing with. That genuinely creates a huge value unlock. For instance, if I spend 2-3% of my revenue on, say, customer support, then even if I save 40% on that, I'm saving only about 0.8% or 1% of revenue. But if I can generate even just 10% more revenue from existing customers with hardly any marketing cost, and I make a 30-40% margin on that, I'm adding 3-4% to the bottom line.
So that is almost 5x of what you can save. The value unlock is very large, and it's sustainable because you're really getting AI to do something no human could: nobody is going to sit and figure out, for millions of customers, exactly what kind of personal message to send, because the amount of experimentation you have to do and the kinds of connections you have to draw between individuals and their similarities, which the recommendation engines are based on, are impossible to do humanly. That's where the biggest value unlocks are, at least from what I'm seeing, and those are sustainable and even applicable in high-cost geographies. It's just that, unfortunately, a lot of the initial focus of the innovation has been on the easy stuff:
have an AI agent replace a human agent. But that's not the real power of what AI can bring. So hopefully we'll see a lot more of that other type of innovation going forward as well.
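The back-of-envelope economics in this answer can be checked with a quick calculation. This is a minimal illustrative sketch: the percentages are the speaker's rough figures as quoted above, not real company data.

```python
# Illustrative sketch of the speaker's back-of-envelope economics.
# All figures are the rough percentages quoted in the talk, not real data.

def saving_from_replacement(support_share: float, savings_rate: float) -> float:
    """Bottom-line gain from replacing support work with AI,
    as a fraction of total company revenue."""
    return support_share * savings_rate

def gain_from_personalization(extra_revenue: float, margin: float) -> float:
    """Bottom-line gain from AI personalization that lifts revenue
    from existing customers, as a fraction of total revenue."""
    return extra_revenue * margin

saving = saving_from_replacement(support_share=0.02, savings_rate=0.40)  # 0.8% of revenue
unlock = gain_from_personalization(extra_revenue=0.10, margin=0.40)      # 4.0% of revenue
print(f"replacement saves {saving:.1%}, personalization adds {unlock:.1%}, "
      f"ratio ~{unlock / saving:.0f}x")
```

With a 2% support spend and a 40% margin, the revenue unlock works out to roughly five times the cost saving, which matches the "almost 5x" claim in the talk.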
Right. I see a lot of students here today. What kinds of backgrounds do you all come from? Hands up if you're from STEM. STEM backgrounds? Okay. Anybody from business, humanities, arts? Okay. So I read this LinkedIn post, and I'm not sure whether it's a great post or not, but apparently it's going to be a little tough for STEM students to get into this world of AI, because they could be replaced a lot more easily. What kinds of backgrounds or degrees should one essentially come from to sustain oneself in this world of AI, do you think? What should the next five years look like?
Yeah, I think there is some near-term potential impact on jobs, particularly on entry-level coding jobs and so on. But honestly, nobody knows exactly how the math is going to work. Between the new work that people do to enable AI, and the old work that may get more efficient because of AI-enabled coding and so on, will we see a net increase or decrease in employment? Nobody actually knows. There are many, many forecasts by economists much more qualified than me. But what one can see is that enterprise adoption of AI has not really happened yet, so the real impact of all of this has not yet materialized.
So you're seeing some initial hit, maybe: okay, this year I have promised I'm going to use AI and reduce my budget by a certain amount, so I'll stop hiring. That's the almost knee-jerk impact you're seeing right now. What eventually plays out will be a mix: okay, I will do the work more efficiently and use a lot more automation, but now I also have a lot more things to do. So I would say that students in general, actually forget just STEM, need to be focusing a lot on how to use AI to do the best possible thing they can in their field, in every possible field.
So whether I'm studying marketing, or a science degree, or any form of the humanities, or journalism, whoever I am, there are so many things I can actually be doing with AI, things which were not humanly possible earlier. And that's really what students should be equipping themselves with: potentially innovating and creating things around AI, but also personally equipping themselves to leverage it. And I think there are lots of examples of how that can play out, and it will serve people really well.
Absolutely. We are now going to get into a quick rapid-fire round, and then I want to open up the floor for audience questions. The only rule here is short answers only, no explanations; you only have 10 seconds to answer. So I am going to start off by putting Antara on the spot. Does AI in governance shift power towards citizens or towards institutions today?
I want to say citizens, because it allows a lot more of the information asymmetry, which is where many of the power gaps come from today, to be addressed.
Professor Manjunath, are algorithms today more likely to reduce bias or to hide bias better?
Hide bias. Actually, the options don't look right to me. What would you put as the options, then? I think the bias will start increasing. The systems are not trained for this, and I don't expect the training to get better in the immediate future, maybe much later. But I also want to disagree with what Antara said.
I’ll come back to you for that one. Professor Seth, what worries you more, AI being used with bad intent or AI being used widely without anyone fully understanding its consequences?
Well, they're both terrible, aren't they? I think people will always use technologies with bad intent, and it can only really be addressed if a large number of people understand that technology and can then resist it. So I think the second is more important. Lifting the public's understanding of AI, and proper engagement with it, will protect us against malign uses of AI, because we will be able to spot them.
Got it. Professor Nirav, what’s harder to design, ethical individuals or ethical systems?
I think that becomes tricky: what do we mean by ethical, right? But if we say that individuals combined together make a system, and you're combining ethical individuals, then ethical individuals.
Mr. Bahl, in India, will AI mostly replace jobs, reshape jobs, or polarize jobs?
Reshape.
That's a very quick answer. You win the rapid-fire round. Right. Professor Seth, where does AI struggle more today, with people or with systems?
I think it struggles with people, but we don't notice, because it produces such natural language. When I say AI, I'm talking about something like ChatGPT. So there's a disguised problem with people there, because those AIs don't really mean what they say, they don't really understand what they say, but it seems very strongly that they do. I think that's the problem. But what's coming is AI embedded in all of our systems, and that will create its own set of problems as well.
Mr. Bahl, who benefits more from AI today, companies or employees?
I would say that right now, no one is benefiting from AI. But if I were to bet, companies will benefit first, and then employees. And the whole idea of having sessions like this is that we can get employees to learn what we talked about, right? Students equipping themselves right from college. Absolutely.
Antara, for AI used in public systems, what matters more, transparency or effectiveness?
Transparency, off the bat. It’s the only way that we can actually design AI for public systems. It has to be at the front and center of all of our efforts.
Got it. Before we get to the last question for the entire panel, I do want to get your answer to Antara's statement, if that's fine. The question I had asked: does AI in governance shift power to citizens or to institutions?
Absolutely to the institutions. They have the money to invest and to discover what's going on. There is no way citizens can beat that so easily. It requires a different, whatever. I'm not allowed to say anything.
My last question for all the panelists before we open the floor for audience questions. If we get AI right, what is one everyday improvement people in this room would actually feel within the next five years?
I think there's a thread that runs through this, or there's supposed to be, and one thing that AI could give us is a greater sense that we are properly connected with each other and learning from each other. The possibility for AI to break down barriers between people, barriers of language and expertise and distance, is huge. The kind of traditional collective intelligence we're used to, where we put an X in a box when we vote for someone, is very, very simple, right? We can't each write an essay, like the users of Antara's system, and send it to the government about what we want, because there are so many people that nobody could read all of those essays.
But AI can enable that kind of rich interaction. It's an example of one of the things Kush was talking about: AI delivering something that is impossible for humans to do, not just replacing something humans are already doing. So a future in which we all feel we have a voice, and AI is helping us mediate between each other, is something that is technically possible. There's a whole bunch of political and social barriers that could prevent it from happening. But I think five years is a timeline in which we could see the start of those sorts of systems.
I can talk about what I'd like to see if we get AI right. We talk a lot about institutions, we talk about companies, we talk about individuals, but not enough talk happens specifically about small businesses. India is a country of self-employed people and small enterprise; I think there are about 150 million self-employed people. If each of those people could somehow earn 600 rupees more because of AI, and I'll talk about how, that's a unicorn. I mean, there are a lot of large numbers in India, but it's true: 600 rupees more for each of these 150 million people is a unicorn. So when we think of the next 50 unicorns, we may not think of 50 companies each worth a billion dollars; we may think of 50 innovations that each put 600 rupees more in the pockets of 150 million people.
And how does one do that? If you look at all the important things all of us use today, ride-hailing, e-commerce, restaurant ordering, food ordering, all of these were created by an institution: they build an app and then spend money on marketing and so on. Today, you have AI systems that are incredibly low cost. Fifty cab drivers can get organized; an AI agent can do the scheduling, you have a WhatsApp chat with them, and you can just find a driver. There's no reason why we can't have innovation like this, at very low cost. The cost of the tokens can be funded within that ride.
It can be, right? That's all there is to running it. It's an autonomous system which just runs off publicly available infrastructure. That, to me, is the real unlock we can see. And those same systems can then serve anyone in the world. You can do this for taxi drivers, you can do this for lawyers, and those lawyers can then serve anyone anywhere in the world. These systems are very low cost to build. They can be built by anybody; they can be self-built. It just takes a group of a few of these self-employed people to get together.
And then, suddenly, this can go viral. So I would love to see that type of innovation coming, rather than only the stuff we know the big companies will do, or the things we'll all play around with on our own LLMs.
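The "unicorn" arithmetic in this answer is easy to check. A minimal sketch, assuming an exchange rate of roughly 85 rupees per US dollar (the 150 million and 600 rupee figures are the speaker's own rough estimates):

```python
# Sanity check of the "unicorn" arithmetic from the panel.
# 150 million people and 600 rupees each are the speaker's rough figures;
# the exchange rate (~85 INR per USD) is an assumption for illustration.

self_employed = 150_000_000   # ~150 million self-employed people in India
extra_per_person = 600        # extra rupees earned per person via AI
inr_per_usd = 85              # assumed exchange rate

total_inr = self_employed * extra_per_person   # 90 billion rupees
total_usd = total_inr / inr_per_usd            # roughly 1.06 billion US dollars

print(f"total value created: {total_usd / 1e9:.2f} billion USD")
```

At that rate, 90 billion rupees comfortably clears the billion-dollar bar that defines a unicorn, which is the point the speaker is making.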
Great. Antara?
Thank you. Building on what Professor Seth and Mr. Bahl just said, there are two things that I see happening. One is the disaggregation of systems and a lot of decentralized control mechanisms. When that happens, you have very fragmented channels through which to engage with institutions, which goes to Professor Seth's point about building new forms of collective intelligence. What I want to see happen, for all of us in the room, is greater access and connectivity to public institutions, which in turn gives us easier access to the entitlements and benefits the state is supposed to provide. If AI can get that right, if we can solve for that, I think there is a big argument to be made that it would be the rising tide that lifts all boats.
Building on what people have been saying, and particularly on Antara's point about collectives: we can build systems which work for individuals, but each individual has different preferences. How do we take different people's preferences into account? How do we aggregate them and come up with a collective decision? If we do come up with a collective decision, how does that decision affect other people? How do we explain that decision to them: hey, we have taken your preferences into account in this particular way? We need to get that part of AI right to make sure people have buy-in and trust the systems we are designing.
That is what I would want to see, and I think we are moving forward with that. We are thinking about fairness, we are thinking about transparency, we are thinking about accountability, and so on and so forth.
Yeah, I can probably say what I already see. The homework my students submit is perfect: the essays are spectacularly written, the presentations are beautiful. The only hope that I have is that they actually understand what they say. If that happens, I will be very happy. The output is perfect; the understanding behind that output, I hope, will get better and better. That's my wish.
I'm going to open up the floor for audience questions.
My question is, sir: I want to understand what kind of impact AI will have on management consultants and their business.
I have no idea. I have no idea, really; it's very hard to say. Every industry is going to evolve. Obviously, management consultants, like everybody else, are using AI for every possible thing they can do with it, trying to become more efficient and more productive. We don't know what that means in terms of reshaping the business. If you look at past tech innovations that have had a very big impact on productivity in many sectors, it's not that entire sectors have disappeared, but things have got reshaped significantly. That has happened a lot. Take the research part of a consultant's job: today, you don't wait a week for somebody to go and find things from everywhere.
It comes in a few minutes. Unfortunately, like Professor Manjunath, I have seen a lot of the output, and I find two issues with the current versions of AI. When it writes, it has no soul: it's correct, but it has no soul. And when it prepares a presentation or a piece of communication, it's not inspiring: it is correct, but it's not inspiring. So I think consultants will spend more time on actually communicating in a way that's inspiring, while the basic desk work is done for them. You spend time doing more, I would say, human tasks.
And that's going to happen in a lot of other service jobs as well. You're going to spend time doing what humans are truly supposed to do and are really good at, which the AI models are not able to do.
Okay, thanks. My question is for everyone. I have a younger cousin who is in high school, and her entire life is on ChatGPT at this point. She shares everything, relationship issues, family issues, and it knows more about her than I do. And I worry when I see the younger generation getting onto these AI platforms. So what is your take on the impact of this technology on young minds?
I share your concern; I have slightly older kids. But I think we have to trust that we've been through technological shifts like this before. When my parents saw me watching television, they had similar worries; they told me my eyes would become square because I watched too much. In fact, my generation became much more sophisticated consumers of television and much more savvy about TV ads than my parents' generation. So I think we have to listen to our children about the way they're using these technologies; they're natives in this new world. I'm calibrated for a world where AI doesn't work, where AI is not rolled out across the whole world, so I'm really the wrong person to ask about how AI is going to change people. We should ask young people how they're using it, and engage with them before they start to use AI, in secret, in ways we don't understand.
I have a funny answer and a short answer. I think the real danger is actually not with the ChatGPTs of the world but with the earlier addictive systems, the Instagrams of the world, because they are genuinely playing on our brains' dopamine circuits, are genuinely addictive, and can therefore be harmful. With ChatGPT, the only thing I would say is that it makes one question where we are as individuals, as parents, as families, that our children prefer to communicate with a relatively soulless device, which answers everything the way an American therapy textbook would, rather than with us.
It shows what a distance we have created with each other, right? And that may be a good reminder of the task we have as individuals: to rebuild bonds with each other.
On a very similar note to what Kush just said, there have been studies from Youth Ki Awaaz and a number of other global youth-based organizations looking at why exactly we turn to AI. The phraseology is very interesting there, because it indicates that turning to AI is something you can also turn away from. The questions really come up around exactly what was just mentioned: understanding what kinds of tactile family bonds, what kinds of lived-experience-based interactions, we can keep having with the younger generation, to show that AI is a part of their life but not the only part of their life.
And that's maybe my hypothesis on where we're headed.
I have a quick follow-up, which connects with the previous question as well. Many countries right now are trying to ban the new AI; clearly there is evidence that it is harmful in the direction it's going, whether it's Instagram or anything else you mentioned. AI is an amplifier. So unless we design something, whether regulation or guardrails or whatever, what is the hope for a society not to experience amplified harm beyond what it has already experienced, especially for this generation? Shall we start with you, sir?
Well, that's basically what I wanted to say. Spain and Australia are two examples of countries where severe restrictions have been put on social media companies, at least with respect to access for children. That's an interesting experiment; one has to see what will happen, because it's not an easy thing to do. Technologically it's not easy, and legally I'm sure there are a lot of loopholes in all of this. We have to see how that evolves, and potentially apply a similar kind of guardrail with respect to AI. That's the view, at least, that I have on that matter.
No, it has to start somewhere. This goes exactly to the point I made earlier: generalists in government cannot handle the pace at which technology moves. You cannot put guardrails on it at the beginning; the moment you know something is happening, you have to get into the act as quickly as possible. Somebody is making an attempt, so let's understand what's going on. What will actually happen is something we have to wait and see. What was interesting, at least in that attempt, was the way the social media companies reacted to both the Australian and the Spanish bans. To me, the most interesting part was that they all said it was too fast, that the governments had not thought things through. And then I remembered what Facebook's slogan was: move fast and break things. They are allowed to move fast, but the legal system is not allowed to experiment. That seemed like an interesting contradiction for me to study.
Relatedly, the first AI summit in London was very closed, right? Politicians and the leaders of big tech firms. So the idea that a couple of years later governments would actually be legislating in ways that limit, in this case, social media companies is very good news. After London, you could imagine that regulatory capture had happened, that governments were not going to be able to resist these big companies and their multinational power. So those first couple of steps of regulating social media for under-16s, even if it doesn't quite work, even if it's not exactly right, are at least a step towards introducing regulation, and they will make AI companies at least aware that that is a possibility.
Because they have to take that responsibility, I think.
Professor Nirav, do you have any other input on that as well?
I agree with the points that have been made. I think there could be different ways to think about a blanket ban, for instance: if you try to restrict something, people may become more curious about why it is being banned, so we have to think about that as well. But it is a step. There will have to be some regulations that come into place; what those regulations should be, we need to be thinking about. A lot of the worry is that people keep scrolling, and the way the algorithms work, Professor Manjunath knows this better, recommender systems can put you into a rabbit hole.
You keep going in one direction, and echo chambers can get formed. The younger population is more vulnerable there, and that is where a ban or restricted access possibly helps. We also have to think about cases like YouTube: there is YouTube Kids, where children only see kids' content, but there are malicious actors who post content targeted at kids which is not actually kids' content. Somebody could come up with a new social media platform for kids; I am not very sure what it would look like, but new technology will keep coming, and it needs guardrails to be put in place.
What kind of guardrails? Researchers and legislators will have to be thinking about that.
Sure. I think we have time for one last question. Can we give it to somebody at the back? Yeah, the jean jacket. Go for it. Can we pass the mic to the back, please?
So, AI has definitely been an enabler in the education and medical domains. But do we think it has also infringed on the consent of creators? There are singers who are no longer alive, yet the new generation is getting to hear new songs in their voices. The ones who are alive at least have a way to respond, but for those who are gone, it is a breach of consent. Of course, this falls under the domain of ethical AI, but I just wanted to know your thoughts.
Is this question directed at someone in particular, or is it open to all? Ethical AI. Okay, whoever would like to take it.
I think it's a completely legitimate concern, and it's difficult to understand where we go from here, because the cat is already out of the bag: the models are already trained on everyone's data without our consent, and how do we put that back in the box? I'm not sure that we can. There are currently legal cases going through the courts about the IP claims of musicians and artists, and it will be very interesting to see what the law courts decide. The kinds of systems I'm interested in are systems built on consent: a population of people who all have diabetes, say, who sign up for an app that will track their disease, and who then gain by being part of a community where information is shared to help people manage their diabetes.
That's a much more consent-based model. It's not about taking people's writing and art and music from the internet. But that activity is already underway, and I don't see a way of really putting it back in the box.
Let’s do one last question.
Yeah, this is on the topic of education and AI tools. One thing we have observed is that with instant feedback from AI tools, especially in education, students do not go through the whole step-by-step process of building foundations. So the question is this: have any of the professors on the panel been approached about modeling the education process, designing courses or tools so that they make the student learn step by step, instead of giving instant gratification with the output?
And the other thing: could we see a collaboration in that regard, where we try to create regulations or guidelines for how AI tools should be constructed for imparting education step by step, so that gratification is structured? Thank you.
Yeah, the short answer, to honor the, whatever, never mind, I didn't get that right. So the short answer is no, nobody is thinking along those lines, and handling AI in the classroom has been quite painful. To give you one example, I asked a student to write a certain program to perform a certain task, and I gave her the data. Because the student went to ChatGPT to understand what the question was about, she created her own data and did not know how to use the data I had given. So the point you are making is extremely valid. If you want to think about legislation or any other guardrails or anything like that, I'm happy to discuss those with you offline.
To give a very brief answer today: more generally, I think every university is struggling with that question, and I'm hoping that with lots of bright people on it, we will start to see some answers. But it's not easy.
Well, a big thank you to all the panelists here, and a big thank you to all the audience members as well for being such a great and engaged audience. We have a token of appreciation for all the panelists, from the University of Bristol side. From all of our sides. From somewhere. Thank you very much.
Thanks a lot. So it’s great to be here in India. I think this topic is extremely relevant to both the UK where I’m working and India. And I think the answer is that coordination is intelligence in thi…
EventArtificial intelligence | Social and economic development Professor Bullock argues that AI systems should be designed to support entire populations simultaneously rather than just answering individua…
EventHowever, several challenges remain unresolved. The technical issue of AI hallucinations continues to affect user trust, particularly in educational contexts where accuracy is critical. The broader sys…
“The panel opened with a playful ‘Avengers’ metaphor, positioning each speaker as a superhero to illustrate the diversity of perspectives on artificial intelligence (AI) for the collective good.”
The moderator explicitly referenced the Avengers metaphor in the discussion, as recorded in the transcript excerpt [S3].
“Professor Seth Bullock argued that AI should move beyond answering isolated queries and become a tool for population‑scale coordination, supporting entire populations rather than individual users.”
Bullock’s stance on designing AI for whole-population support rather than single-user queries is documented in the knowledge base entry [S4].
“Bullock called for new technologies, delivery models, and cross‑sector partnerships among researchers, private firms, non‑profits and governments to achieve population‑scale AI.”
The importance of multilateral, multi-stakeholder collaboration for AI deployment is highlighted in several sources, e.g., the call for broad sector participation in AI initiatives [S102] and the emphasis on multi-stakeholder partnerships for effective AI implementation [S103].
“Professor Manjunath characterised recommendation systems as learning agents that optimise utility functions set by platform owners, allowing platforms to reshape users’ tastes and act as powerful, personalised advertisements.”
The knowledge base notes that platforms control massive information about users and use targeted advertising, which aligns with the description of platforms shaping user preferences [S107] and the critique of invasive targeted ads [S109].
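The feedback loop Professor Manjunath describes, a recommender that optimises engagement and thereby reshapes the very preferences it measures, can be sketched in a few lines. The model below is a hypothetical illustration only: the item names, nudge rate, and update rule are assumptions introduced for this sketch, not anything stated by the panel.

```python
# Toy model of preference drift under engagement-maximising recommendation.
# Each exposure nudges the user's preference toward the recommended item,
# so the system ends up shaping the utility function it is optimising.

preferences = {"news": 0.5, "sports": 0.3, "memes": 0.2}  # user's initial tastes
NUDGE = 0.05  # assumed per-exposure shift in preference

for step in range(50):
    # The platform recommends whatever currently engages the user most.
    recommended = max(preferences, key=preferences.get)
    # Exposure pulls the preferred item toward 1 and dampens the others.
    for item in preferences:
        if item == recommended:
            preferences[item] += NUDGE * (1 - preferences[item])
        else:
            preferences[item] *= (1 - NUDGE)

print(max(preferences, key=preferences.get))  # the initially strongest taste dominates
```

Even this crude loop shows the dynamic: because the recommender always reinforces the current favourite, small initial differences are amplified into near-exclusive preferences, which is the "powerful, personalised advertisement" effect the panel described at scale.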
The panel largely converged on four core themes: (1) AI must be built for collective, population‑scale coordination; (2) transparent, accountable governance and early regulatory guardrails are essential; (3) AI will reshape rather than merely replace jobs, creating new value; and (4) capacity building and public understanding are critical for responsible adoption.
There was high consensus across speakers on these themes, indicating a shared belief that AI’s future benefits hinge on coordinated design, transparent governance, and widespread capacity development. This alignment suggests strong support for policies that promote collective AI solutions, enforce transparency, and invest in education and public awareness.
The panel displayed several substantive disagreements, chiefly around who benefits from AI in governance (citizens vs institutions), the appropriate level of government intervention (enabling vs regulatory guardrails), and how bias in algorithmic systems should be handled. While there was broad consensus that AI should serve collective good and that system‑level coordination is essential, the pathways to achieve these goals diverged sharply.
The level of disagreement was moderate to high: the core philosophical split on power dynamics and regulatory philosophy could shape policy outcomes significantly. The disagreements suggest that without a shared framework for governance, AI initiatives may oscillate between citizen‑centric empowerment and institutional control, potentially limiting the realization of inclusive, equitable AI benefits.
These pivotal comments collectively steered the panel from a broad, metaphor‑driven introduction toward concrete, systemic considerations of AI. Professor Seth’s framing of coordination and agentic cascades introduced the need for societal‑scale design and ethical safeguards, while Professor Manjunath’s insights on recommendation systems and governmental overreach highlighted hidden influences and policy pitfalls. Antaraa’s Maharashtra case grounded the discussion in real‑world civic empowerment, and Kushe Bahl’s distinction between cost‑cutting and value creation reshaped the narrative around job impact. Together, these remarks deepened the conversation, prompted new topics (population AI, consent, governance models), and shifted the tone from speculative optimism to a nuanced, solution‑oriented dialogue.
Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.