Harnessing Collective AI for India’s Social and Economic Development

20 Feb 2026 13:00h - 14:00h


Session at a glance
Summary, keypoints, and speakers overview

Summary

The panel opened by likening the debate on AI for the collective good to an “Avengers” narrative, assigning each speaker a superhero persona to highlight diverse viewpoints on technology’s societal role and asking whether AI will become an ally or a destructive “snap.” [1][13-15]


Professor Seth argued that AI should shift from answering isolated queries to coordinating whole populations during events such as floods or tax filing, turning coordination itself into a form of intelligence; he emphasized that this requires new technologies, cross-sector partnerships, and proactive policy guidance rather than leaving development to market forces. [25-31][32-33]


Professor Nirav described many societal challenges as socio-technical multi-agent problems, noting that individual optimization often yields local maxima that fail to maximize social welfare; he cited ride-sharing and epidemic prevention as domains where a global optimum would better serve collective needs. [38-55][47-56]


Professor Manjunath explained that recommendation systems act as learning agents that continuously nudge users, optimizing utility functions chosen by platform owners rather than by users and thereby altering preferences at scale; he pointed to the Facebook scandal documented in Sarah Wynn-Williams’s book as evidence of large-scale societal impact and argued that these systems function as powerful advertisements that make repeated exposure highly persuasive. [77-84][90-99][94-99]


Antaraa illustrated AI’s governance role through a Maharashtra project that gathered 380,000 citizen inputs via a chatbot and made this feedback mandatory for future law-making, showing how AI can amplify citizen voices while requiring transparent design to ensure equity. [121-130][288-290] Kushe added that the greatest sustainable value of AI lies in personalized services that generate new revenue rather than simple cost-saving replacements, and the panel agreed that public education about AI is more effective than trying to block malicious use. [185-204][248-251] They concluded that if AI is built to enhance connectivity, give citizens a genuine voice, and be governed with transparency, tangible everyday improvements could be felt within five years. [301-311][312-321]


Keypoints

Major discussion points


AI as a coordination tool for whole populations, not just individual assistants – Seth argues that future AI should help coordinate large groups (e.g., flood victims, tax payers) and that this requires new technologies, partnerships, and a shift away from “AI-for-profit” pathways [24-32]. He later stresses that the biggest risk is widespread use without public understanding, not malicious intent [248-251].


Multi-agent and socio-technical systems as a framework for solving collective problems – Nirav explains that many social challenges (ride-sharing, pandemics, etc.) are inherently socio-technical and can be modeled as interacting agents, allowing a move from local to global optima and better social welfare [36-55].


Recommendation systems and algorithmic nudging shape preferences and can amplify bias – Manjunath describes how learning agents infer user utility functions, subtly steer choices, and can dramatically alter preferences over time, effectively acting as powerful advertisements [77-96].


AI in governance can both empower citizens and reinforce institutional power – Antaraa details a large-scale citizen-feedback chatbot used by Maharashtra, showing how AI can amplify voices when designed transparently [121-130]; she later argues that AI should shift power toward citizens by reducing information asymmetry [237]. Manjunath counters that institutions, with their resources, are more likely to capture AI benefits [294-297].


Impact of AI on work: replacement vs. reshaping and value creation – Kushe highlights that simple task automation often fails to sustain cost savings, whereas AI that enables uniquely human-scale personalization unlocks far greater value (e.g., revenue uplift) [185-202]. In the rapid-fire segment he predicts AI will primarily reshape jobs rather than merely replace them [257].


Overall purpose / goal of the discussion


The panel, framed through an “Avengers” metaphor, aimed to explore how AI can be harnessed for the collective good (improving coordination, fairness, and citizen participation) while identifying the technical, ethical, and governance challenges that must be addressed to prevent harm and ensure equitable outcomes [13-15][20-22].


Overall tone and its evolution


Opening (0:00-4:00): Playful, optimistic, and metaphor-rich, setting a collaborative mood.


Middle (4:00-22:00): Shifts to analytical and cautionary as experts present technical concepts (population-level AI, multi-agent models) and raise concerns about algorithmic nudging, government over-reach, and unintended resource consumption [58-68][133-160].


Rapid-fire & audience Q&A (22:00-45:00): Becomes pragmatic and solution-focused, with concise answers, concrete examples (Maharashtra chatbot, job-impact figures), and a mix of optimism about new value creation and realism about regulatory gaps [121-130][185-202][237][294-297].


Closing (45:00-53:00): Returns to a grateful, hopeful tone, thanking participants and emphasizing the need for continued collaboration [501-506].


Thus, the conversation moves from an enthusiastic framing to a nuanced, sometimes uneasy examination of AI’s societal role, ending on a constructive, forward-looking note.


Speakers

Moderator (Janhavi) – Moderator of the panel; serves as the voice asking questions.


Professor Seth Bullock – Professor; expertise in collective AI, coordination, societal systems, and shared values [S3][S4].


Professor Manjunath – Professor; focuses on recommendation systems, algorithmic bias, and AI ethics [S5][S6].


Professor Nirav Ajmeri – Professor at the University of Bristol; specializes in multi-agent systems and socio-technical networks [S21].


Antaraa Vasudev – Founder/Leader at Civis (NGO); works on civic technology, AI for citizen engagement and governance [S13][S14].


Kushe Bahl – Senior leader (Partner) at McKinsey - leads McKinsey Digital and McKinsey Analytics practices in India; expertise in AI implementation, consulting, and scaling AI for business [S28][S29].


Audience Member 1 – Founder of Corral Inc. [S10].


Audience Member 2 – Participant from Germany (group affiliation). [S25].


Audience Member 3 – Audience participant (no specific role mentioned). [S1].


Audience Member 4 – Intellectual property and business lawyer. [S23].


Audience Member 5 – Audience participant (no specific role mentioned). [S7].


Speaker 3 – Unspecified speaker (role/title not provided). [S15].


Additional speakers:


(None)


Full session report
Comprehensive analysis and detailed insights

Opening & framing – The panel opened with a playful “Avengers” metaphor, positioning each speaker as a superhero to illustrate the diversity of perspectives on artificial intelligence (AI) for the collective good, and the moderator asked whether AI would become an ally or the “great snap” that could threaten society [1][13-15].


Population-scale AI – Professor Seth Bullock argued that AI should move beyond answering isolated queries and become a tool for population-scale coordination (i.e., coordinating whole groups of people rather than individual queries). He described intelligence as the ability to orchestrate whole communities (flood victims, patients with a common disease, or taxpayers) through shared knowledge and coordinated action [24-33]. To realise this, he called for new technologies, delivery models, and cross-sector partnerships among researchers, private firms, non-profits and governments, warning that reliance on “path of least resistance” commercial tools would be insufficient [31-33].


Multi-agent socio-technical systems – Professor Nirav Ajmeri framed many societal challenges as socio-technical multi-agent systems, explaining that intelligence emerges from the interaction of human and technical agents and that optimisation for individual users often yields local maxima that do not serve overall social welfare [36-55]. Using ride-sharing and pandemic-prevention examples, he showed how a global optimum derived from multi-agent modelling could improve collective outcomes and fairness [47-56].
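The local-versus-global distinction can be made concrete with a toy ride-sharing sketch. All names and pickup costs below are invented for illustration; this is not a model from the session, just a minimal demonstration that per-agent greedy choices can miss the welfare-maximising assignment:

```python
from itertools import permutations

# Hypothetical pickup costs in minutes: cost[rider][driver].
cost = {
    "rider_a": {"driver_1": 1.0, "driver_2": 2.0},
    "rider_b": {"driver_1": 1.5, "driver_2": 10.0},
}
riders = list(cost)
drivers = ["driver_1", "driver_2"]

# Greedy: each rider in turn grabs the nearest free driver
# (every agent optimises only for itself -> a local optimum).
free = set(drivers)
greedy_total = 0.0
for r in riders:
    best = min(free, key=lambda d: cost[r][d])
    greedy_total += cost[r][best]
    free.remove(best)

# Global optimum: enumerate all assignments and minimise total
# waiting time, a simple proxy for social welfare.
global_total = min(
    sum(cost[r][d] for r, d in zip(riders, perm))
    for perm in permutations(drivers)
)

print(greedy_total)  # 11.0: rider_a takes driver_1, leaving rider_b a 10-minute pickup
print(global_total)  # 3.5: the coordinated assignment serves both riders well
```

Here each rider's locally optimal choice produces a total wait of 11 minutes, while the coordinated assignment achieves 3.5, which is the gap between local maxima and social welfare that the panel describes.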


Recommendation systems & nudging – Professor Manjunath characterised recommendation systems as learning agents that infer users’ utility functions and continuously nudge preferences. He noted that the utility functions are set by platform owners, not users, allowing platforms to reshape tastes over time and act as powerful, personalised advertisements [77-84][94-99]. He cited the Facebook scandal documented in Sarah Wynn-Williams’s book as evidence of large-scale societal impact when recommendation engines “go berserk” [90-92].
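The drift Professor Manjunath describes can be sketched with a toy simulation. The items, utility values, and update rule below are illustrative assumptions, not the panel's mathematical models; the point is only that a recommender optimising a platform-chosen utility can flip a user's starting preference through repeated exposure:

```python
# Toy model of algorithmic nudging (all numbers are invented).
user_pref = {"news": 0.6, "videos": 0.4}         # user starts out preferring news
platform_utility = {"news": 0.2, "videos": 1.0}  # platform earns more per video view

alpha = 0.05  # strength of each exposure's nudge
for _ in range(50):
    # The recommender picks the item that maximises the PLATFORM's
    # expected utility, weighted by the user's current receptiveness.
    rec = max(user_pref, key=lambda item: platform_utility[item] * user_pref[item])
    # Repeated exposure shifts the user's taste toward what is shown.
    user_pref[rec] += alpha * (1 - user_pref[rec])
    other = "news" if rec == "videos" else "videos"
    user_pref[other] *= 1 - alpha

# After 50 rounds the user's dominant preference has flipped.
print(max(user_pref, key=user_pref.get))  # videos
```

Even with this benign, deterministic rule, the direction of the nudge is fully determined by the platform's utility function, echoing the observation that "there is no such thing as the right utility function."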


AI-enabled governance example – Antaraa Vasudev presented a concrete example from Maharashtra, where a simple chatbot collected 380,000 citizen inputs (voice notes, text, drawings) and fed them into the policy-making pipeline, making citizen feedback a mandatory consideration for future laws [121-130]. She stressed that such systems must be transparent, accessible and equitable, and argued that AI can reduce information asymmetry to close power gaps [109-115][237]. Later she expanded the vision, noting that disaggregation of civic-tech platforms can enable decentralized control and broader citizen participation [380-386].


Rapid-fire exchange – In a brief rapid-fire segment, Antaraa asserted that AI shifts power toward citizens by amplifying their voices [260-262], while Professor Manjunath countered that institutions with greater resources are more likely to capture AI benefits, making it difficult for citizens to compete [280-283]. He also warned that algorithms can hide bias, a point raised during the same exchange [280-283]. Professor Bullock warned about the next wave of agentic AI, describing purposive agents that communicate with each other and could generate cascades of resource consumption from trivial requests (e.g., a picture of a dog on a skateboard), disadvantaging other users unless social responsibility is embedded in their design [58-68].


Role of government – Professor Manjunath critiqued heavy-handed state direction, citing India’s CDOT project and Japan’s Fifth-Generation computing initiative as examples where governments, as generalists, failed to keep pace with rapid technological change [139-152][155-162]. He advocated an enabling and monitoring stance rather than micromanagement, a view echoed by Antaraa’s call for transparent frameworks and by an audience member who cited recent bans on social-media use for minors in Spain and Australia as useful early guardrails [109-115][409-416].


Employment impact – Kushe Bahl distinguished between simple task replacement and value-creating personalization. He argued that replacing routine tasks rarely yields sustainable savings, whereas AI-driven personalised services-such as recommendation engines that boost revenue by up to ten percent-unlock far greater economic value and reshape rather than merely replace jobs [185-202][257].


Education concerns – The audience’s rapid-fire reactions (excitement, anxiety, FOMO, etc.) were tallied by the moderator, highlighting mixed emotions about AI’s role in learning [320-327]. Bahl warned that AI-generated content, while correct, often lacks “soul” and inspiration, making it unsuitable for deep learning [375-378]. Manjunath shared a classroom example where a student used ChatGPT to fabricate data, illustrating how instant AI feedback can bypass step-by-step learning and undermine understanding [490-496].


Intellectual-property & consent – Professor Bullock noted that generative models are already trained on copyrighted material without consent and that legal battles over musicians’ and artists’ rights are just beginning [470-478]. He proposed the development of consent-based data ecosystems in which participants voluntarily share information for collective benefit [476-478].


Regulatory experiments – An unnamed speaker highlighted early regulatory experiments restricting AI-enabled platforms for children, arguing that such steps, though imperfect, signal accountability and may influence industry behaviour [409-416]. Manjunath reinforced the need for agile, enabling regulation rather than rigid micromanagement, noting the difficulty of imposing guardrails on fast-moving technology [139-152][155-162].


Audience questions – When asked about AI’s impact on young minds, Professor Seth Bullock responded that education systems must adapt to foster critical thinking alongside AI tools [340-345]; Kushe Bahl added that over-reliance on AI can erode foundational skills [350-354]. A question on regulation of AI in education elicited Manjunath’s answer that standards should be flexible, outcome-oriented, and regularly updated [470-476].


Closing visions – Professor Bullock envisioned AI delivering a greater sense of connection by breaking down language, expertise and distance barriers, enabling richer citizen-government interactions that go beyond simple voting [301-311]. Kushe Bahl offered a concrete “unicorn-scale impact” scenario: if AI could raise the earnings of India’s 150 million self-employed workers by just ₹600 each, the aggregate effect would be transformative [312-321]. Antaraa reiterated that disaggregated, transparent AI systems can broaden access to governance, while Professor Ajmeri highlighted the potential for collective decision-making at scale, and Professor Manjunath warned that the quality of AI-generated output must be critically assessed [380-386][470-476].


Session transcript
Complete transcript of the session
Moderator

Sci-fi movies that we grew up watching, and what it primarily also reminds me of, in specific terms, is the Avengers, right? The Avengers are the superheroes, and they’re trying to, you know, save the world and decide how one can do that, and they all have very different strengths. So I was wondering: if all our panelists were superheroes, who would they be? Introducing our panelists, I have our first Avenger, Captain America: principled, steady under pressure, obsessed with doing the right thing even when it’s unpopular. Professor Seth is exactly that, and it reminds me of the lens that he brings in. He studies how societies hold together, how coordination succeeds or fails, and why systems need shared values as much as intelligence. Next we have Spider-Man. Spider-Man’s strength isn’t brute force, it’s his ability to navigate through complex webs, adapt quickly, and see connections that others miss.

Professor Nirav thinks the same way. At the University of Bristol, his work focuses on multi-agent systems, because societies, like Spider-Man, are all about networks. Antaraa Vasudev reminds me of Captain Marvel, operating at scale, moving across institutions, pushing boundaries. Through her NGO, she uses AI to amplify citizen voices and reshape how power flows between governments and people. And of course we have Iron Man, who is obsessed with execution, iteration, and making ideas work in the real world. Mr. Bahl is our Iron Man, focused on execution, scale, and impact in the real economy. He leads the McKinsey Digital and McKinsey Analytics practices in India. Last but not the least, no team is complete without Bruce Banner.

Deeply aware of the challenges that we face, of AI’s raw power, and focused on how to control it before it controls us, Professor Manjunath’s work reminds us that intelligence at scale can cause damage if we don’t fully understand its consequences. My name is Janhavi, and today I’m embodying Jarvis, except instead of being the one answering the questions, I’m the voice asking them. Every Avengers story has a Thanos. The real question is whether AI becomes our ally or the great snap that we didn’t see coming. So when we talk about AI for collective good, we’re not just talking about smarter apps, we’re talking about systems that influence how people live, work, and participate in society. Before we start, I would request all my panelists to just stand up for a quick photo op.

So, quick show of hands from the audience. How many of you feel that technology today is only with those who have power or resources or information, that technology has been reserved for the elite few? Do we have a show of hands in the house by any chance? Okay, clearly we don’t really have an opinion as such over here. But moving on: Professor Seth, when we look at society, you know, governments, markets, or online platforms, we often assume that problems exist because we don’t have enough intelligence or data. Your work suggests something a little bit deeper, that perhaps failures come from how decisions interact at scale. From a systems perspective, do you think our biggest societal problems are intelligence problems?

Professor Seth Bullock

Thanks a lot. So it’s great to be here in India. I think this topic is extremely relevant to both the UK, where I’m working, and India. And I think the answer is that coordination is intelligence in this situation that we’re interested in. So I guess we’re used to situations now where we interact with an AI as an individual. One person asks the AI a question and gets one answer. But really there’s the potential for us to develop AI systems that are designed to support a whole population at once. So a population of people that are affected by a flood, a population of people that are all coping with the same disease or medical condition, a population of people that are all trying to get their taxes filed.

So instead of AI answering individual questions, AI can help coordinate those people, share intelligence, share their knowledge, and achieve better outcomes. And I think that’s quite a different way of framing AI than many of the systems that we’re hearing about and requires different technologies and different ways of delivering that to people, different ways of engaging with populations. So I think that’s something that can only really be achieved by partnerships between researchers and companies and not -for -profit organizations and governments and requires probably interventions in the way that we promote AI rather than letting the sort of path of least resistance develop AI commercial tools. I think there are opportunities to really engage with the idea of making AI for populations.

Moderator

Wonderful. Professor Nirav, you’re also from the University of Bristol and your work focuses on multi -agent systems where basically intelligence emerges from all these entities interacting with one another. What kind of social problems are best suited for these multi -agent approaches?

Professor Nirav Ajmeri

Thanks, Janhavi. Good question. And I think partly Seth already answered what multi-agent systems could do. So all the problems that we’re thinking of over here are, if you’re understanding those problems, socio-technical in nature. There are social entities, including people and organizations, which interact. All of us also use some technical tools. These could be intelligent agents; these could be applications, software that we use. And all of these combined together help us. So all problems, or all domains, are socio-technical in nature. Multi-agent systems inherently can encapsulate socio-technical systems. So that is how I would look at it. If you’re talking about, say, ride sharing, for instance, or hailing a ride, the current system could be optimizing only for me, right?

And then what we end up with, we could end up with local maxima. So if we are optimizing for each one of us, we are doing a local optimum for each of us. But we may not be doing a global optimum. And the global optimum would map to social welfare. What does social welfare mean? Does it mean just maximizing experience for everybody? Or are we meaning satisfactory experience? So I think any problem that we think about, say, epidemic or pandemic prevention, making sure that resources are located properly, all of that would be multi-agent in nature.

Moderator

Interesting. Professor Seth, do you have anything that you’d like to add on to that?

Professor Seth Bullock

So, yeah, I think we’ve heard, I think, a little bit from some AI leaders about a next wave of AI that will be agentic, where we won’t be just interacting with ChatGPT as a monolith. We will be interacting with an agent that has purposive aims and is helping us to achieve tasks. And it might do that by communicating with other agents. Whenever we interact with AI, we would be, in fact, interacting with a population of AIs that are sending each other information, that are tasking each other with different jobs to do. And actually, it might not be clear whether one of those agents is artificial or a person. And so, if we enter into that sort of world, I think we have to really understand whether those agents are going to be able to do that.

I think those agents are interacting with each other in a way that is likely to advantage the community of users, because the amount of resources that will be consumed by these populations of agents, and the potential for them to interact in ways that have unforeseen consequences for other people, are going to ramify. When we do that manually, really we can only hold so many interactions with other people at once, and so we’re limited in the scale. You know, one request does not create this kind of cascade of other requests in the system. But as we move to artificial systems, that scaling will rapidly increase, and potentially one trivial request by me asking a computer to make a picture of a dog riding a skateboard could create a whole kind of wave of different agentic interactions that consume loads of resource and also, depending on what I’ve asked for, disadvantage other people.

So embedding some kind of social responsibility… into those agents, some appreciation for how their behavior impacts other agents in the system, I think is going to be imperative. Otherwise, we end up with systems that create conflict and contestation for resources.

Moderator

Interesting. Whenever I’m on Instagram or Facebook, and let’s say if I’m talking to my friends and I’m really thinking about buying this Dyson or a particular product, it’s always weird to me how the next time I open the app, it’s almost like the app has heard me, and I start seeing the ads for those exact things, even if I’ve not searched for them. I’ve just talked about them to someone. Has anybody here also experienced the same thing, a show of hands quickly, where you feel that maybe the choices that we make, are they really our choices, or are we being nudged by an algorithm somewhere? So, Professor Manjunath, your work focuses so much on recommendation systems, and we often hear that these algorithms are just tools.

But your research suggests that they actively shape what people see, buy, and believe. How much of human behavior today is genuinely chosen by us, and how much is subtly nudged by these algorithms?

Professor Manjunath

Yeah, recommendation systems and the way they shape many of our feelings and our attitudes and our habits have essentially been a significant concern for me for a while. One of the things that you have to think about when you look at recommendation systems is that they’re essentially learning agents. So they want to learn your preferences, your likes, your dislikes, etc. And when they’re trying to do that learning, they do things. They’re trying to sort of give you options, different kinds of options, and then see how you react. So the first way in which the interaction between you and the learning system happens is this: they are showing you a variety of things and watching the way you react.

And then your reaction is usually captured in some kind of utility function, something that the algorithm believes is positive for whoever is designing that algorithm. Now, what exactly is that utility function essentially determines what gets recommended to you in the future and what the system learns about you. Now, there is no such thing as the right utility function, and every organization will figure out what they want for themselves. We have actually done several mathematical models on this and shown that, and I am assuming a benign recommendation system here, depending on the kind of learning algorithm that I use:

Where I start off with a set of preferences, by the end of the day or over a certain time horizon, my preferences can be dramatically different. So there is a certain nudge that is steadily pushed by these algorithms, and in which direction the nudge is pushed depends on the kind of algorithms they use and the kind of what we call utility functions that they use. So what exactly are they trying to optimize for themselves? And if you look at various analyses of many of these, especially Facebook’s algorithms, there is a very famous book that came out recently by somebody called Sarah Wynn-Williams, who was an insider. You can see the impact of what that had on some sections of some society elsewhere when the whole recommendation system went berserk.

So there is definitely a huge impact on the population’s preferences by recommendation systems. And if you want a quick understanding of that: recommendation systems essentially are advertisements, and advertisements definitely shape our preferences. If you see something more often, you will start thinking about it, and so on. The difference, at least in my opinion, between the advertisements that you see on the street and the advertisement corresponding to a recommendation engine is that you are significantly more receptive. You are looking to do something. And when you are trying to look for something to do, if the recommendation pushes you in a certain direction, you are naturally going to go there.

So the impact of recommendation systems on the population’s preference, in my opinion, is spectacularly large.

Moderator

Wow. That’s quite a lot to actually digest. I really wonder how much my personality is my own at this point. Antaraa, from your work in civic engagement: when AI enters governance, is it primarily helping citizens be heard, or is it helping governments manage complexity? And where do citizens struggle the most when technology becomes the interface between them and the government?

Antaraa Vasudev

Thank you for that question. Just want to make sure that everyone can hear me. Thank you. Some problems, like on-stage mics, AI cannot solve. No, but I just wanted to, of course, next year, correct. Thank you for that lovely question, and lovely being here with all of you today. Janhavi, to your point, I think AI currently is being used in both use cases. It’s allowing us to engage with citizens who perhaps have little or limited knowledge about law and policy, to be able to help them clarify doubts, for them to be able to air out their grievances, for them to actually be able to understand the frameworks of policy and law that govern their lives.

But in addition to that, it is also being used in a very large way for optimization. In a country of India’s size and diversity, I think there are few other ways to tackle the circumstances that governance does. So better than that is to actually build strong and robust frameworks for how governance can utilize AI, frameworks which are put out in a manner which is transparent, accessible, and one that actually has certain equity built in, which is really what the panel is also discussing today. And once you have that, to know that these optimization solutions can perhaps be built by AI rather than being citizen-led. So at Civis, we’ve actually been working on gathering a lot more public feedback on draft laws and policies using AI.

And again, we see optimization in both ends, but very, very mindful of the fact that the frameworks that govern that level of optimization are what needs to be designed before perhaps even we race to the next model.

Moderator

Got it. Can you share some examples of the kind of laws that have been impacted, or the kind of work that you’ve done? Have you worked with different state governments, where citizens of that particular state have been able to engage with the government about a certain law or practice? Thank you.

Antaraa Vasudev

Absolutely. So I’ll share one example from recent work with the government of Maharashtra that Civis led. The government of Maharashtra actually undertook a very ambitious mission of trying to understand how the next 22 years of the state can be governed by citizens’ voice. Now, this is something which is honestly quite remarkable on their part. What Civis was able to do is that we built out a very easy-to-use chatbot, wherein you could send in a voice note, you could send in any text messages, or you could even, we had people send in drawings, letters that they had personally written to the Chief Minister, and other things. Civis aggregated all of that feedback. So that was almost 3.8 lakh citizen responses from 37 districts across Maharashtra.

And that was aggregated, sorted through, and then shared with the government as well. The Viksit Maharashtra report, as it’s called, is now publicly available; the government of Maharashtra has put it out on their own website as well. But in addition to that, what’s been really interesting about it is that they have said that every law that’s going to come out in the state for the coming years has to, in some way, factor in what citizens are saying about that problem area or that district for where the law is being made. And you can only do that if you’re able to actually engage at scale. And I think that’s the beauty of what that entire project showed.

Moderator

Absolutely. Professor, how do you feel about the government in terms of what approach should they be taking when it comes to AI and technology?

Professor Manjunath

Yeah. One of the fears that I have when the government gets involved in technology development is that they want to start controlling the direction. They want to tell you what to be done at a very micromanaging kind of level. I recently had an article, on Tuesday I think, in the Financial Express; it was an op-ed that a colleague of mine and I wrote. We looked at history, at kind of successful and spectacularly unsuccessful involvements of the government when they wanted to direct technology. So I’ll just give you two quick examples. In India, about 40 years ago, there was something called C-DOT. It developed some spectacular technology when it was left alone.

The government started to direct it and micromanage the flow of technology. Many of you probably don’t even know C-DOT. They don’t even come to the IIT Bombay campus, for example, for recruitment. That’s just one example. If you look at Japan, just to give you another spectacularly unsuccessful story, many of you are too young to know about something called the fifth-generation computing systems that they wanted to start off. The AI boom that we see today was originally planned to be launched in Japan in the 1980s. There was a huge project that the government wanted to micromanage, developing native hardware for AI, and everybody thought they would be successful. It was a spectacular failure. The failure essentially stemmed from the fact that the government was directing everything.

Governments are generalists. People who run governments are generalists. They are brilliant people. They know society. They understand administration. But they don’t understand technology, especially a technology that is moving this fast and has a very large surface area. They cannot control it. So it is best that they just enable, and let others manage it: the people on the ground, people with a track record, and people who want to take risks. They should be enablers. They should also be monitors, nudging it in a certain direction and making sure bad things don’t happen. But that’s a very hard task. So the biggest role that the government should have is to just enable and step away.

Just to give you one positive example, NPCI in India is a spectacular example of where the government started something and let the private sector and the technologists handle it. In the US, many of you may be familiar with the internet. It was exactly that: just a vision that somebody had who said, let’s build this, and the technology was built. That’s the way I would think the government should handle it, but we’ll have to see how that goes.

Moderator

So just a quick question for the audience. You guys can shout the answers out loud. What emotions come to your mind when we think about AI? Are we feeling excitement? Are we feeling anxiety? Are we feeling FOMO? What are we feeling, guys? Curiosity. Dangerous, somebody said. What else? Definitely opportunity. Opportunity. The man over there? Confusion. Confusion. Anything else? Responsibility. Responsibility, fantastic. Great. So Mr. Bahl, this question is for you. There’s a lot of anxiety, and a little bit of excitement as well, about AI maybe replacing jobs, especially in India’s tech and services sector. From your experience working with different companies, where is AI genuinely replacing humans? And where is it actually creating new forms of value and roles?

Kushe Bahl

Yeah, that’s a great question. Thank you. Let me try and give you the very brief answer, because I could talk about this for a long time. There is a lot of focus on AI being used to replace humans in particular operations. So, you know, when you have an AI taking a call center call, that’s the simplest example of that. And the way the math works is that if you’re spending 100 rupees on something, you can save roughly 40% of that by replacing it with AI, with the current economics of the way it works. And obviously, if you’re in a high-cost geography, you can save more.

Even in a country like India, you can save that much. What we have found, though, is that in most of the cases where you do this simple replacement of a human with AI, the cost reduction doesn’t really sustain. There’s a famous example of Klarna in Europe, where they brought back a lot of the call center costs because they had to bring back some of the senior customer support people: a lot of the conversations were not going well and they were losing customer satisfaction. The same thing with IT: you can replace a lot of developers with this, but then people will come back with more projects and there’ll be more things to be done.

The real value unlock, which is sustaining, is actually when you get AI to do something which humans can’t do, or are not able to do because it’s so time consuming and so difficult. For instance, a genuinely personalized customer engagement engine using the kind of recommendation system that he was talking about, which actually engages in a personalized way with every customer that I have as a company, or every entity that any organization is dealing with. That genuinely has value. It creates huge value unlock. For instance, if I spend 2-3% of my revenue on, say, customer support, even if I save 40% on that, I’m saving like 0.8% or 1%. But if I can generate even just 10% more revenue from existing customers with hardly any marketing cost, and I make 30-40% margin on that, I’m getting 3-4% more to the bottom line.

So that is huge: it’s almost 5x of what you can save. The value unlock is very large, and it’s sustainable because you’re really getting AI to do something no human being could. No human is going to sit and figure out, for millions of customers, exactly what kind of personal message to send, because the amount of experimentation you have to do, and the kind of connections you have to draw between individuals and similarities and so on, which the recommendation engines are based on, are impossible to do humanly. And that’s where the biggest value unlocks are, at least that I’m seeing, and those are sustainable. They’re actually even applicable in high-cost geographies. It’s just that, unfortunately, a lot of the initial focus of the innovation has been on the easy stuff: just save, right?

Have an AI agent replace a human agent. But that’s not the real power of what AI can bring. So hopefully we’ll see a lot more of that type of innovation going forward as well.
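
Mr. Bahl’s back-of-the-envelope comparison can be sanity-checked with a quick calculation. The figures below are only the illustrative percentages from his example (2% support spend, 40% saving, 10% revenue uplift), with 35% taken as the midpoint of his 30-40% margin range; they are not real company data:

```python
# Back-of-the-envelope comparison of the two AI strategies described above,
# using the speaker's illustrative percentages (not real company data).

revenue = 100.0                      # normalize company revenue to 100 units

# Strategy 1: replace support staff with AI.
support_spend = 0.02 * revenue       # customer support costs ~2% of revenue
cost_saving = 0.40 * support_spend   # AI saves roughly 40% of that spend
# -> 0.8 units, i.e. ~0.8% of revenue added to the bottom line

# Strategy 2: AI-driven personalized engagement grows revenue.
extra_revenue = 0.10 * revenue       # ~10% more revenue from existing customers
margin = 0.35                        # midpoint of the 30-40% margin range
profit_uplift = margin * extra_revenue
# -> 3.5 units, i.e. ~3.5% of revenue added to the bottom line

print(f"cost saving:   {cost_saving:.1f}% of revenue")
print(f"profit uplift: {profit_uplift:.1f}% of revenue")
print(f"ratio: ~{profit_uplift / cost_saving:.1f}x")
```

At the midpoint margin the uplift comes out to roughly 4.4x the cost saving; at a 40% margin it reaches the "almost 5x" he cites.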

Moderator

Right. I think I see a lot of students here today. What kind of backgrounds do you all come from? Hands up if you’re from STEM at all? STEM backgrounds? Okay. Anybody from business, humanities, arts? Okay. So I read this LinkedIn post. I’m not sure whether it’s a great post or not. Apparently it’s going to be a little tough for STEM students to get into this world of AI, because they could be replaced a lot more easily. What kind of measures, businesses, or degrees should one come from to sustain in this world of AI, do you think? What should the next five years look like?

Kushe Bahl

Yeah, I think there is some near-term potential impact on jobs, particularly on entry-level coding jobs and so on. But honestly, firstly, nobody knows exactly how the math is going to work. Between the new work that people will do for AI enablement and the old work that may get more efficient because of AI-enabled coding and so on, will we see a net increase or decrease in employment? Nobody actually knows. There are many, many forecasts done by economists much more qualified than me. But what one can see is certainly that the enterprise adoption of AI has not really happened yet. So right now, the impact of all of this has not really been felt.

So you’re seeing some initial hit of, okay, this year I have promised I’m going to use AI and reduce my budget by a certain amount, so I’ll stop hiring. That’s the kind of, I would say, almost knee-jerk impact that you’re seeing right now. What eventually plays out will be a mix of: okay, I will do the work more efficiently and use a lot more automation, but now I have a lot more things to do as well. So I would say that students in general, forget just STEM, need to be focusing a lot on how they can use AI to do the best possible thing they can do in their field, in every possible field.

So whether I’m studying marketing, or a science degree, or any form of the humanities, or journalism, whoever I am, there are so many things that I can actually be doing with AI, to do things which were not humanly possible earlier. And that’s really what the students should be equipping themselves with. And then potentially innovating and creating things around that, but also personally equipping themselves to actually leverage AI. And I think there are lots of examples of how that can play out and will serve people really well.

Moderator

Absolutely. We are now going to get into a quick rapid fire round and then I want to open up the floor for audience questions. So the only rule here is I want short answers only. No explanations. We only have 10 seconds to answer. So I am going to start off by putting Antara on the spot. Does AI in governance shift power towards citizens or towards institutions today?

Antaraa Vasudev

I want to say citizens because it allows for a lot more information asymmetry to be addressed which is where a lot of the power gaps come up today.

Moderator

Professor Manjunath, are algorithms today more likely to reduce bias or hide bias better?

Professor Manjunath

Hide bias. No, actually, the options don’t look right to me. What would I put as the options, then? The bias will start increasing. I don’t expect training to get better. I don’t think it will be better in the immediate future, maybe much later. But I also want to disagree with what Antaraa said.

Moderator

I’ll come back to you for that one. Professor Seth, what worries you more, AI being used with bad intent or AI being used widely without anyone fully understanding its consequences?

Professor Seth Bullock

Well, they’re both terrible, aren’t they? I think people will always use technologies with bad intent, and it can only really be addressed if a large number of people understand that technology and can then resist it. So I think the second is more important. Uplifting the public’s understanding of AI, and proper engagement with AI, will protect us against malign uses of AI, because we will be able to spot them.

Moderator

Got it. Professor Nirav, what’s harder to design, ethical individuals or ethical systems?

Professor Nirav Ajmeri

I think that becomes tricky. What do we mean by ethical, right? If you’re combining ethical individuals, and we say individuals combined together are a system, then ethical individuals.

Moderator

Mr. Bahl, in India, will AI mostly replace jobs, reshape jobs, or polarize jobs?

Kushe Bahl

Reshape.

Moderator

That’s a very quick answer. You win the rapid fire round. Right. Professor Seth, where does AI struggle more today, with people or with systems?

Professor Seth Bullock

I mean, I think it struggles with people, but we don’t notice, because it resembles natural language. When I say AI, I’m talking about something like ChatGPT. So I think there’s a disguised problem with people there, because those AIs don’t really mean what they say, they don’t really understand what they say, but it seems very strongly that they do. So I think that’s the problem. But what’s coming is AI embedded in all of our systems, and then that will create its own set of problems as well.

Moderator

Mr. Bahl, who benefits more from AI today, companies or employees?

Kushe Bahl

I would say that right now, no one is benefiting from AI. But if I were to bet, it will be companies who benefit first, and then employees. And the whole idea of having sessions like this is that we can get the employees to learn what we talked about, right? Students equipping themselves right from college. Absolutely.

Moderator

Antaraa, for AI used in public systems, what matters more, transparency or effectiveness?

Antaraa Vasudev

Transparency, off the bat. It’s the only way that we can actually design AI for public systems. It has to be at the front and center of all of our efforts.

Moderator

Got it. Before we get into the last question for the entire panel, I do want to get your answer to Antaraa’s statement, if that’s fine. The question that I had asked: does AI in governance shift power to citizens or to institutions?

Professor Manjunath

Absolutely to the institutions. They have the money to invest and discover what’s going on. There is no way citizens can beat that so easily. It requires a different, whatever. I’m not allowed to say anything.

Moderator

My last question for all the panelists before we open the floor for audience questions. If we get AI right, what is one everyday improvement people in this room would actually feel within the next five years?

Professor Seth Bullock

So I think there’s a thread that runs through this, or there’s supposed to be, and one thing that AI could give us is a greater sense that we are properly connected with each other and learning from each other. The possibility for AI to break down barriers between people, barriers of language and expertise and distance, I think is huge. The kind of traditional collective-intelligence interaction that we’re used to, where we put an X in a box when we vote for someone, is very, very simple, right? We can’t each write an essay, like the users of Antaraa’s system, and send it to the government about what we want, because there are so many people that we can’t read all of those essays.

But AI can enable that kind of rich interaction. It’s an example of one of the things that Kushe is talking about: AI delivering something that is impossible for humans to do, not just replacing something that humans are already doing. So a future in which we all feel like we have a voice, and AI is helping us mediate between each other, I think is technically possible. There’s a whole bunch of political and social barriers to prevent that from happening. But I think five years is a timeline during which we could see the start of those sorts of systems.

Kushe Bahl

I can talk about what I’d like to see if we get AI right. We talk a lot about institutions, we talk about companies, we talk about individuals. But not enough talk happens specifically about small businesses. India is a country of self-employed people and small enterprise. I think there are about 150 million self-employed people. If each of those people could somehow earn 600 rupees more because of AI, and I’ll talk about how, that’s a unicorn. So 600 rupees more for each of these 150 million people: I mean, there are a lot of large numbers in India, but it’s true, right? It’s a unicorn. So when we think of the next 50 unicorns, we may not think of 50 companies worth a billion dollars, but we may think of 50 innovations that put 600 rupees more in the pockets of 150 million people.
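
The unicorn claim is easy to sanity-check. The 85 INR/USD exchange rate below is an assumption chosen for illustration, not a figure from the panel:

```python
# Sanity check of the "600 rupees x 150 million people = a unicorn" claim.
people = 150_000_000        # self-employed people in India (the speaker's figure)
extra_inr = 600             # additional rupees per person
inr_per_usd = 85            # assumed exchange rate, for illustration only

total_inr = people * extra_inr          # 90 billion rupees
total_usd = total_inr / inr_per_usd     # just over one billion US dollars

print(f"total: {total_inr / 1e9:.0f} billion INR = ${total_usd / 1e9:.2f}B")
```

At any recent INR/USD rate the total clears the billion-dollar "unicorn" threshold, which is the point being made.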

And how does one do that? If you look at all the important things all of us use today, ride hailing, e-commerce, restaurant ordering, food ordering, all of these were created by an institution: they make an app and then they spend money on marketing and so on. Today, you have AI systems that are incredibly low cost. Fifty cab drivers can get organized; an AI agent can do the scheduling and so on. You have a WhatsApp chat with them and you can just find the driver, right? There’s no reason why we can’t have innovation like this. Very low cost. The cost of the tokens can be funded in that ride.

It can be, right? That’s all there is to run it. It’s an autonomous system which just runs off publicly available infrastructure. I think that, to me, is the real unlock that we can see. And those same systems can then serve anyone in the world. So you can do this for taxi drivers. You can do this for lawyers. And those lawyers can then serve anyone anywhere in the world. So I think that’s the real, real unlock that we are waiting for. These systems are very low cost to build. They can be built by anybody. They can be self-built by people. And it just takes a group of a few of these self-employed people to get together.

And then, you know, suddenly this can go viral. So I would love to see that type of innovation coming, rather than just the stuff that we know the companies will do, or the things that we’ll all play around with on our LLMs ourselves.

Moderator

Great. Antaraa?

Antaraa Vasudev

Thank you. Building on what Seth and Mr. Bahl just said, there are two things that I see happening. One is the disaggregation of systems and a lot of decentralized control mechanisms, right? When that happens, you have very fragmented channels to actually engage with institutions, to Seth’s point about building new forms of collective intelligence. What I want to see happening for all of us in the room is greater access and connectivity to public institutions, which actually helps us get easier access to the entitlements and benefits that the state is supposed to provide to us. If AI can get that right, if we can solve for that, I think there is a long and big argument to be made about that being the sort of rising tide that lifts all boats.

Professor Nirav Ajmeri

Building on what people have been saying, and lastly on Antaraa’s point, thinking about collectives, right? We can build systems which work for individuals, but each individual has different preferences. How do we take into account different people’s preferences? How do we aggregate people’s preferences and then come up with a collective decision? If we are coming up with a collective decision, how does that decision affect various other people? How do we explain that decision to other people: hey, we have taken your preferences into account in this particular way? We need to get that part of AI right to make sure that people have buy-in, that people trust the system that we are designing.

So that is what I would want to see, and I think that we are moving forward with that. We are thinking about fairness, we are thinking about transparency, we are thinking about accountability, and so on and so forth.
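
One concrete, deliberately simple way to aggregate individual preferences into a collective decision of the kind Professor Ajmeri describes is a positional voting rule such as Borda count. The sketch below is an editorial illustration under that assumption, not a system the panel built, and the policy options are invented examples:

```python
# Minimal sketch of preference aggregation via Borda count: each person
# ranks the options, lower-ranked options earn fewer points, and the
# collective decision is the option with the highest total score.
from collections import defaultdict

def borda(rankings):
    """rankings: list of per-person preference orders (best option first)."""
    scores = defaultdict(int)
    for order in rankings:
        n = len(order)
        for position, option in enumerate(order):
            scores[option] += n - 1 - position  # top choice earns n-1 points
    winner = max(scores, key=scores.get)
    return winner, dict(scores)

# Three citizens rank three (hypothetical) policy options differently.
prefs = [
    ["bus lanes", "bike paths", "parking"],
    ["bike paths", "bus lanes", "parking"],
    ["parking", "bike paths", "bus lanes"],
]
winner, scores = borda(prefs)
print(winner, scores)
```

The score breakdown also speaks to the explainability point: each person can be shown exactly how their ranking contributed to the collective outcome.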

Professor Manjunath

Yeah, I can probably say what I already see. The homeworks that my students submit are perfect. The essays are spectacularly written. The presentations are beautiful. The only hope that I have is that they actually understand what they say. If that happens, I will be very happy. I think the output is perfect; the understanding behind that output, I hope, will get better and better. That’s my wish.

Moderator

I’m going to open up the floor for audience questions.

Audience Member 1

My question is, sir, I want to understand what kind of impact AI will have on management consultants and the business.

Kushe Bahl

I have no idea. I have no idea, really. It’s very hard to say. Every industry is going to evolve. Obviously, management consultants, like everybody else, are using AI for every possible thing that they can do with it. So they’re also trying to become more efficient, more productive with it. We don’t know what that means in terms of reshaping the business. If you look at past tech innovations, which have also had a very big impact on productivity in many sectors, it’s not that entire sectors have disappeared, but things have got reshaped significantly. That has happened a lot. So think of the job that consultants do: today, when we do research, you don’t wait one week for somebody to go and find things from everywhere.

It comes in a few minutes. Unfortunately, like Professor Manjunath, I have also seen a lot of the output, and I find two issues right now with the current versions of the AI. When it writes, it has no soul. So it’s correct, but it has no soul. And when it prepares a presentation or a piece of communication, it’s not inspiring. So it is correct, but it’s not inspiring. So I think the consultants will spend more time on actually communicating in a way that’s inspiring, while the basic desk work will be done for you. You spend time doing more, I would say, human tasks.

And that’s going to happen in a lot of other service jobs, right? You’re going to spend time doing what humans are truly supposed to do and are really good at, which the AI models are not able to do.

Audience Member 2

Okay, thanks. So my question is for everyone. I have a younger cousin who is in high school, and her entire life is on ChatGPT at this point. She shares everything, relationship issues, family issues, and it knows more about her than I do. And I kind of worry when I see the younger generation getting on these AI platforms. So what is your take on this? What is the impact of this technology on young minds?

Professor Seth Bullock

So, I share your concern. I have slightly older kids. I think we have to trust that we’ve been through these technological shifts before. My parents, when they looked at me watching television, had similar worries: they told me that my eyes would become square because I watched too much television. Actually, my generation became much more sophisticated consumers of television and was much more savvy about TV ads than my parents’ generation. So I think we have to listen to our children about the way that they’re using these technologies. They’re natives in this new world. I’m calibrated for a world where AI doesn’t work, where AI is not rolled out across the whole world. So I’m the wrong person, really, to ask about how AI is going to change people. We should ask young people how they’re using it, and engage with them before they start to use their AI, in secret, in a way that we don’t understand.

Kushe Bahl

I have a funny answer and a short answer. I think the real danger actually is not with the ChatGPTs of the world, but with the earlier addictive systems like the Instagrams of the world, right? Because they are genuinely playing on our brain’s dopamine circuits, and are genuinely addictive and can therefore be harmful. With ChatGPT, the only thing I would say is that it makes one actually question where we are as individuals, as parents, as family, that our children prefer to communicate with a relatively soulless communication device which answers everything like an American therapist textbook would, right? That they prefer to talk to that than to us.

It shows what a distance we have created with each other, right? And that may be a good reminder to us as individuals about the task that we have, to rebuild bonds with each other.

Antaraa Vasudev

I think on a very similar note, actually, to what Kushe just said: there have been studies from Youth Ki Awaaz and a number of other global youth-based organizations which have been looking at why exactly we turn to AI. And the phraseology is very interesting there, because it indicates that turning to AI is something that you can also turn away from. The questions really come up around exactly what was just mentioned: understanding what kinds of tactile family bonds, what kinds of lived-experience-based interactions we can keep having with the younger generation to show that AI is a part of their life, but it’s not the only part of their life.

And I think that’s maybe my hypothesis on where we’re headed there.

Audience Member 3

I have a quick follow-up, and you can connect it with the previous question also. Many countries right now are trying to ban the new AI. Clearly there is evidence it is harmful in the way it’s coming; you mentioned Instagram, or any other. AI is an amplifier. So unless we design something, whether it’s regulation or guardrails or whatever, what is our hope, and what is the hope for society not to experience amplified harm beyond what it has already experienced, especially for the younger generation? Shall we start with you, sir?

Speaker 3

Well, that’s basically what I wanted to say. Spain and Australia are two examples of countries where severe restrictions have been put on social media companies with respect to children’s access. And that’s an interesting experiment. One has to see what will happen, because it’s not an easy thing to do. I think technologically it’s not easy; legally, I’m sure there are a lot of loopholes in all of this. We have to see how that evolves, and potentially apply a similar kind of guardrails with respect to AI. That’s the view, at least, that I have on that matter.

Professor Manjunath

No, it has to start somewhere. I mean, this goes exactly to the point that I made earlier. Generalists in government cannot handle the pace at which technology can move. You cannot put guardrails on that at the beginning. The moment you know something is happening, you have to get into the act as quickly as possible. Somebody is making an attempt, so let’s understand what’s going on. What will happen is something that we have to see. What was interesting, at least in that attempt, was the way in which the social media companies reacted to both the Australian and the Spanish bans.

To me, the most interesting part was that they all said it was too fast, they’ve not thought things through. And then I remembered what Facebook’s slogan was: move fast and break things. They are allowed to move fast, but the legal system is not allowed to experiment. That seemed like an interesting contradiction for me to study.

Professor Seth Bullock

Relatedly, the first AI summit in London was very closed, right? Politicians and the leaders of big tech firms. And the idea that, a couple of years later, governments would actually be legislating in ways that limited, in this case, social media companies is very good news. After London, you could imagine that regulatory capture had happened, right? That governments were not going to be able to resist these big companies and their multinational power. So those first couple of steps of regulating social media for under-16s, even if it doesn’t quite work, even if it’s not exactly right, is at least a step of introducing regulations, and it will make AI companies at least aware that that is a possibility.

Because they have to take that responsibility, I think.

Moderator

Professor Nirav, do you have any other input on that as well?

Professor Nirav Ajmeri

I agree with the points that have been made. I think there could be different ways to think about a blanket ban, for instance. If you try to restrict something, people may have more curiosity about why it is getting banned, so we have to be thinking about that as well. But it is a step; there will have to be some regulations that come into place. What those regulations should be, we need to be thinking about. A lot of times the worry is that people keep scrolling, and then, the way the algorithms work, Professor Manjunath knows better, but recommender systems would put you in a rabbit hole.

And you keep going in one direction. There could be echo chambers that get formed. So the younger population is more vulnerable there, and that is where possibly a ban or restricted access helps. We have to be thinking about how to do this: say, on YouTube there is YouTube Kids, and children only see kids’ content, but then there are malicious actors who post content which is targeted towards kids but is not actually kids’ content. Somebody could come up with a new social media platform for kids. I am not very sure what it would look like, but there will be new technology that comes, and that needs some guardrails to be put in place.

What kind of guardrails? Researchers and legislators will have to be thinking about it.

Moderator

Sure. I think we have time for one last question. Can we give it to somebody at the back? Yeah. The jean jacket. Yeah. Go for it. Can we pass the mic at the back, please?

Audience Member 4

So, AI has definitely been an enabler in the education and medical domains. But do we think that it has influenced, breached, or violated the consent of the creators as well? There are singers who no longer exist, yet we are getting to hear those songs in the new generation. The ones who are alive definitely have a way to approve. But for those who are no longer with us, it’s a breach of consent. Of course, it falls under the domain of ethical AI, but I just wanted to know your thoughts.

Moderator

Is this question directed at someone in particular, or is it open for all? Ethical AI. Okay, whoever would like to take that.

Professor Seth Bullock

So, I think it’s a completely legitimate concern. And it’s difficult to understand where we go from here, because the cat is already out of the bag, right? The models are already trained on everyone’s data without our consent, and how do we put that back in the box? I’m not sure that we can. There are currently legal cases going through the courts about the IP claims of musicians and artists, and it will be very interesting to see what the courts decide. I do think the kind of systems I’m interested in are systems that are built on consent. So, a population of people who all have diabetes sign up for an app that will track their disease, and then they gain by being part of a community where information is being shared to help people manage their diabetes.

So that’s a much more consent-based model. It’s not about stealing people’s writing and art and music from the internet. But that activity is already underway, and I don’t see a way of really putting it back in the box.

Moderator

Let’s do one last question.

Audience Member 5

Yeah, I guess it’s not the… the topic of education and the internet is strong and all of those things. One thing that we have observed is that with instant feedback, even from AI tools, in education especially, students do not go through the whole step-by-step process of building foundations. So let’s say your courses, or the tools, work in a way that they teach the person, the student, step by step, instead of giving instant gratification with the output. The question is this: have any of the professors on the panel been approached for this kind of thing, for modeling the education process, the process of getting educated or learning especially?

And the other thing: can we see collaboration in that regard, where we try to create regulations or guidelines for how AI tools should be constructed for imparting education step by step, so that gratification is structured? Thank you.

Professor Manjunath

Yeah, the short answer to that, whatever, never mind, I didn’t get that right. So, yeah, the short answer is no, nobody is thinking along those lines. And handling AI in a classroom has been quite painful. To give you one example, I asked somebody to write a certain program to perform a certain task, and I gave the data. The student, because the student went to ChatGPT to understand what the question was about, created her own data and did not know how to use the data that I was giving. So the point you are making is extremely valid. If you want to think about legislation or any other guardrails or anything like that, I’m happy to discuss those with you offline.

To give a very brief answer today: more generally, I think every university is struggling with that question. I’m hoping that, with lots of bright people working on it, we will start to see some answers. But it’s not easy.

Moderator

Well, a big thank you to all the panelists here, and a big thank you to all the audience members as well for being such great and engaging people. We have a token of appreciation from the University of Bristol for all the panelists. Thank you very much.

Related Resources — Knowledge base sources related to the discussion topics (31)
Factual Notes — Claims verified against the Diplo knowledge base (4)
Confirmed (high)

“The panel opened with a playful “Avengers” metaphor, positioning each speaker as a superhero to illustrate the diversity of perspectives on artificial intelligence (AI) for the collective good.”

The moderator explicitly referenced the Avengers metaphor in the discussion, as recorded in the transcript excerpt [S3].

Confirmed (high)

“Professor Seth Bullock argued that AI should move beyond answering isolated queries and become a tool for population‑scale coordination, supporting entire groups rather than individual queries.”

Bullock’s stance on designing AI for whole-population support rather than single-user queries is documented in the knowledge base entry [S4].

Additional Context (medium)

“Bullock called for new technologies, delivery models, and cross‑sector partnerships among researchers, private firms, non‑profits and governments to achieve population‑scale AI.”

The importance of multilateral, multi-stakeholder collaboration for AI deployment is highlighted in several sources, e.g., the call for broad sector participation in AI initiatives [S102] and the emphasis on multi-stakeholder partnerships for effective AI implementation [S103].

Additional Context (medium)

“Professor Manjunath characterised recommendation systems as learning agents that infer users’ utility functions set by platform owners, allowing platforms to reshape tastes and act as powerful, personalised advertisements.”

The knowledge base notes that platforms control massive information about users and use targeted advertising, which aligns with the description of platforms shaping user preferences [S107] and the critique of invasive targeted ads [S109].

External Sources (109)
S1
AI Transformation in Practice_ Insights from India’s Consulting Leaders — – Romal Shetty- Sanjeev Krishan- Audience member 3- Audience member 4
S2
Global Perspectives on Openness and Trust in AI — Speakers:Alondra Nelson, Audience member 3 Speakers:Anne Bouverot, Alondra Nelson, Audience member 3
S3
Harnessing Collective AI for India’s Social and Economic Development — -Professor Seth Bullock- Professor studying how societies hold together, coordination systems, and shared values; works …
S4
Harnessing Collective AI for India’s Social and Economic Development — Speakers:Kushe Bahl, Professor Seth Bullock Speakers:Professor Manjunath, Professor Seth Bullock Speakers:Professor Se…
S5
Harnessing Collective AI for India’s Social and Economic Development — – Antaraa Vasudev- Professor Manjunath – Professor Manjunath- Professor Seth Bullock
S6
Harnessing Collective AI for India’s Social and Economic Development — Speakers:Antaraa Vasudev, Professor Manjunath Speakers:Professor Manjunath, Antaraa Vasudev Speakers:Professor Manjuna…
S7
Global Perspectives on Openness and Trust in AI — – Karen Hao- Audience member 1- Audience member 5
S8
Global Perspectives on Openness and Trust in AI — Speakers:Karen Hao, Audience member 1, Audience member 5
S10
AI Transformation in Practice_ Insights from India’s Consulting Leaders — -Audience member 1- Founder of Corral Inc -Audience member 6- Role/title not mentioned
S11
Day 0 Event #82 Inclusive multistakeholderism: tackling Internet shutdowns — – Nikki Muscati: Audience member who asked questions (role/affiliation not specified)
S12
Global Perspectives on Openness and Trust in AI — – Karen Hao- Audience member 1- Audience member 5
S13
S14
Harnessing Collective AI for India’s Social and Economic Development — Speakers:Antaraa Vasudev, Professor Manjunath Speakers:Antaraa Vasudev, Professor Nirav Ajmeri Speakers:Kushe Bahl, An…
S15
Building the Workforce_ AI for Viksit Bharat 2047 — -Speaker 1- Role/Title: Not specified, Area of expertise: Not specified -Speaker 3- Role/Title: Not specified, Area of …
S16
S18
Keynote-Olivier Blum — -Moderator: Role/Title: Conference Moderator; Area of Expertise: Not mentioned -Mr. Schneider: Role/Title: Not mentione…
S19
Day 0 Event #250 Building Trust and Combatting Fraud in the Internet Ecosystem — – **Frode Sørensen** – Role/Title: Online moderator, colleague of Johannes Vallesverd, Area of Expertise: Online session…
S20
Conversation: 02 — -Moderator: Role/Title: Event moderator; Area of expertise: Not specified
S21
Harnessing Collective AI for India’s Social and Economic Development — -Professor Nirav Ajmeri- Professor at University of Bristol focusing on multi-agent systems and socio-technical networks
S22
AI Transformation in Practice_ Insights from India’s Consulting Leaders — – Romal Shetty- Sanjeev Krishan- Audience member 3- Audience member 4
S23
Global Perspectives on Openness and Trust in AI — -Audience member 4- Intellectual property and business lawyer
S24
https://dig.watch/event/india-ai-impact-summit-2026/ai-transformation-in-practice_-insights-from-indias-consulting-leaders — Sorry, we have a lot of people who’ve raised their hands. I think we can just probably take a couple of questions. I thi…
S25
Global Perspectives on Openness and Trust in AI — -Audience member 2- Part of a group from Germany
S26
Day 0 Event #82 Inclusive multistakeholderism: tackling Internet shutdowns — – Nikki Muscati: Audience member who asked questions (role/affiliation not specified)
S27
The Arc of Progress in the 21st Century / DAVOS 2025 — – Paula Escobar Chavez: Audience member asking a question (specific role/title not mentioned)
S28
S29
Harnessing Collective AI for India’s Social and Economic Development — Speakers:Kushe Bahl, Antaraa Vasudev Speakers:Kushe Bahl, Antaraa Vasudev, Audience Member 2 Speakers:Kushe Bahl, Prof…
S30
From Innovation to Impact_ Bringing AI to the Public — “we are all in committed towards agent -first interfaces.”[91]. “The agent will talk to agent.”[82]. Sharma states that…
S31
Revitalising trust with AI: Boosting governance and public services — AI is reshaping public governance, offering innovative ways to enhance services and restore trust in institutions. The d…
S32
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Lt Gen Vipul Shinghal — Third, ensuring transparency in AI systems:Commanders must understand the data sources, training methodologies, and deci…
S33
AI for Democracy_ Reimagining Governance in the Age of Intelligence — Olahaji highlights AI’s potential to improve democratic governance by analyzing citizen feedback, enabling online consul…
S34
Education meets AI — Lastly, the analysis supports teaching critical thinking as a basic skill. It is agreed that students should learn how t…
S35
Panel Discussion AI & Cybersecurity _ India AI Impact Summit — This comment humanized the capacity building challenge and validated the struggles many educators face. It shifted the d…
S36
https://app.faicon.ai/ai-impact-summit-2026/harnessing-collective-ai-for-indias-social-and-economic-development — Absolutely to the institutions. They have the money. to invest and discover what’s going on. There is no way citizens ca…
S37
Building Population-Scale Digital Public Infrastructure for AI — Excellent point. Excellent point, Trevor. And I think you brought out the inherent stress in the phrase diffusion pathwa…
S38
How nonprofits are using AI-based innovations to scale their impact — However, several challenges remain unresolved. The technical issue of AI hallucinations continues to affect user trust, …
S39
AI for Democracy_ Reimagining Governance in the Age of Intelligence — Thank you. The principle that elected legislatures shape the rules governing society is… the cornerstone of democracy….
S41
Comprehensive Report: AI’s Impact on the Future of Work – Davos 2026 Panel Discussion — Ng emphasized that whilst efficiency gains from AI point solutions might yield modest improvements, transformative workf…
S42
The State of the model: What frontier AI means for AI Governance — ### Task Automation vs. Job Replacement
S43
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — Development | Legal and regulatory Evidence-Based Policymaking and Research Integration Part of the roadmap emphasizes…
S44
Artificial intelligence (AI) – UN Security Council — In conclusion, the discussions highlighted the importance of fostering transparency and accountability in AI systems. En…
S45
AI is here. Are countries ready, or not? | IGF 2023 Open Forum #131 — Galia Daor:Yeah, thanks very much. I admit it’s a bit challenging to speak after Allison on that front, but I will try, …
S46
Harnessing Collective AI for India’s Social and Economic Development — Kushe Bahl believes that AI will fundamentally reshape jobs rather than just replacing them outright. He suggests this t…
S47
Open Forum #64 Local AI Policy Pathways for Sustainable Digital Economies — Rather than following historical patterns of automation that replace workers, AI development should prioritize applicati…
S48
Shaping the Future AI Strategies for Jobs and Economic Development — This discussion focused on AI-driven strategies for workforce and economic growth, examining how artificial intelligence…
S49
Shaping the Future AI Strategies for Jobs and Economic Development — A central theme emerged around collaboration rather than displacement of human workers. Panelists emphasized that AI sho…
S50
Harnessing Collective AI for India’s Social and Economic Development — Professor Bullock argues that AI systems should be designed to support entire populations simultaneously rather than jus…
S51
Building Population-Scale Digital Public Infrastructure for AI — Summary:All speakers agree that moving from fragmented pilot projects to systematic, coordinated approaches is essential…
S52
How to make AI governance fit for purpose? — Focus should be on actions and practical outcomes rather than regulation, with emphasis on innovation over regulatory co…
S53
Scaling Enterprise-Grade Responsible AI Across the Global South — “And those engineered systems might require, for example, yes, human in the loop or on the loop, for sure, but also agen…
S54
AI governance in India: A call for guardrails, not strict regulations — The TRAI’srecent call to regulateAI comes at a time when policymakers must address rapidly evolving technological innova…
S55
From principles to practice: Governing advanced AI in action — Juha Heikkila: Thank you. Thank you very much. It’s indeed a great pleasure to be here and to be a member of this panel….
S56
Inclusive AI For A Better World, Through Cross-Cultural And Multi-Generational Dialogue — Factors such as restricted access to computing resources and data further impede policy efficacy. Nevertheless, the cont…
S57
Artificial intelligence — Capacity development Content policy Online education
S58
Why science metters in global AI governance — But now I don’t know what is the causal factor there. I don’t know if the causal factor is whether they are using AI mor…
S59
Empowering India & the Global South Through AI Literacy — Explanation:The unexpected consensus emerges around the government’s commitment to introduce AI education from class thr…
S60
How Trust and Safety Drive Innovation and Sustainable Growth — Explanation:Despite representing different perspectives (UK regulator, Singapore regulator, and industry), there was une…
S61
Safeguarding Children with Responsible AI — High level of consensus across diverse stakeholders (government, industry, academia, and youth representatives) suggests…
S62
Open Forum #30 High Level Review of AI Governance Including the Discussion — High level of consensus with significant implications for AI governance development. The alignment suggests that despite…
S63
Safeguarding Children with Responsible AI — Consensus level:High level of consensus across diverse stakeholders (government, industry, academia, and youth represent…
S64
AI for Social Empowerment_ Driving Change and Inclusion — This discussion focused on the impact of artificial intelligence on labor markets and employment, featuring perspectives…
S65
Generative AI is enhancing employment opportunities and shaping job quality, says ILO report — A new study conducted by the International Labour Organization (ILO) investigates the consequences of Generative AI on t…
S66
Anthropic report shows AI is reshaping work instead of replacing jobs — A new report by Anthropicsuggestsfears that AI will replace jobs remain overstated, with current use showing AI supporti…
S67
Harnessing Collective AI for India’s Social and Economic Development — Thanks a lot. So it’s great to be here in India. I think this topic is extremely relevant to both the UK where I’m worki…
S68
Harnessing Collective AI for India’s Social and Economic Development — Artificial intelligence | Social and economic development Professor Bullock argues that AI systems should be designed t…
S69
How nonprofits are using AI-based innovations to scale their impact — However, several challenges remain unresolved. The technical issue of AI hallucinations continues to affect user trust, …
S70
AI for Good Technology That Empowers People — Low to moderate disagreement level with significant implications for implementation strategies. The differences suggest …
S71
Gathering and Sharing Session: Digital ID and Human Rights C | IGF 2023 Networking Session #166 — Amandeep Singh Gill:Thank you very much. It’s a great pleasure to join you, and such an important topic. So, the interfa…
S72
WS #86 The Role of Citizens: Informing and Maintaining e-Government — PeiChin Tay emphasizes the importance of leveraging technology to reduce barriers and create digital feedback loops in e…
S74
How to make AI governance fit for purpose? — Jennifer Bachus: So, in addition to my very strong concern that essentially A.I. governance is going to strangle A.I. in…
S75
Comprehensive Report: AI’s Impact on the Future of Work – Davos 2026 Panel Discussion — Ng emphasized that whilst efficiency gains from AI point solutions might yield modest improvements, transformative workf…
S76
The State of the model: What frontier AI means for AI Governance — ### Task Automation vs. Job Replacement
S77
When Code and Creativity Collide: AI’s Transformation of Music and Creative Expression — Juliet Mann argues that artificial intelligence is advancing at an unprecedented pace compared to previous technologies….
S78
Scaling Innovation Building a Robust AI Startup Ecosystem — Overall Tone:The tone was consistently celebratory, appreciative, and inspirational throughout. It began formally with t…
S79
Elevating AI skills for all — The tone is consistently optimistic, enthusiastic, and collaborative throughout. The speaker maintains an upbeat, missio…
S80
AI Policy Summit Opening Remarks: Discussion Report — The tone is consistently optimistic and collaborative throughout both speeches. Both speakers maintain an encouraging, f…
S81
Managing Change in Media Space: Social Media, Information Disorder, and Voting Dynamics 2 — Additionally, SDG 17: Partnerships for the Goals accentuates the critical function of worldwide collaborations in realis…
S82
Open Forum #7 Deepen Cooperation on Governance, Bridge the Digital Divide — The overall tone was collaborative, optimistic and forward-looking. Speakers shared positive examples and experiences fr…
S83
Why science metters in global AI governance — Summary:The discussion revealed surprisingly few fundamental disagreements among speakers, with most conflicts arising a…
S84
Comprehensive Summary: AI Governance and Societal Transformation – A Keynote Discussion — The tone begins confrontational and personal as Hunter-Torricke distances himself from his tech industry past, then shif…
S85
As AI agents proliferate, human purpose is being reconsidered — As AI agentsrapidly evolvefrom tools to autonomous actors, experts are raising existential questions about human value a…
S86
Strategic prudence in AI: Experts advise incremental approach for meaningful advancements — At TechCrunch Disrupt 2024, data management leadersadvisedAI-driven businesses to focus on incremental, practical applic…
S87
GOVERNING AI FOR HUMANITY — – 19 Problems such as bias in AI systems and invidious AI-enabled surveillance are increasingly documented. Other risks …
S88
Panel 4 – Resilient Subsea Infrastructure for Underserved Regions  — The discussion maintained a professional, collaborative tone throughout, with panelists building on each other’s insight…
S89
Driving Indias AI Future Growth Innovation and Impact — The discussion maintained an optimistic and forward-looking tone throughout, characterized by enthusiasm for India’s AI …
S90
How Trust and Safety Drive Innovation and Sustainable Growth — The discussion concluded with panelists predicting what AI summits might be called in five years’ time. Their responses …
S91
Need and Impact of Full Stack Sovereign AI by CoRover BharatGPT — The conversation maintained an optimistic and patriotic tone throughout, with both participants expressing strong confid…
S92
Comprehensive Discussion Report: AI Agents and Fiduciary Standards — The tone is pragmatic and solution-oriented throughout, with Pentland presenting a confident, business-like approach to …
S93
High-Level Track Facilitators Summary and Certificates — The discussion maintained a consistently positive and celebratory tone throughout, characterized by gratitude, accomplis…
S94
Closing Ceremony — The discussion maintains a consistently positive and collaborative tone throughout, characterized by gratitude, celebrat…
S95
Closing Ceremony — The overall tone was positive and forward-looking. Speakers expressed gratitude to the hosts and participants, emphasize…
S96
[Parliamentary Session Closing] Closing remarks — The tone of the discussion was formal yet collaborative and appreciative. There was a sense of accomplishment for the wo…
S97
Closing remarks — The tone is consistently celebratory, optimistic, and forward-looking throughout the discussion. It maintains an enthusi…
S98
Inclusive AI Starts with People Not Just Algorithms — The tone was consistently optimistic and empowering throughout the discussion. Speakers maintained an enthusiastic, forw…
S99
Panel Discussion AI and the Creative Economy — This panel discussion examined the complex relationship between artificial intelligence and cultural diversity in creati…
S100
Panel Discussion AI and the Creative Economy — This panel discussion examined the complex relationship between artificial intelligence and cultural diversity in creati…
S101
AI for agriculture Scaling Intelegence for food and climate resiliance — Thank you. Thank you, sir, for your visionary address. You always continue to inspire us to aim higher and achieve bette…
S102
All hands on deck to connect the next billions | IGF 2023 WS #198 — Additionally, Joe Welch affirms the value of a multilateral, multistakeholder approach. He emphasizes the need for colla…
S103
AI/Gen AI for the Global Goals — Speakers consistently emphasized the crucial role of multi-stakeholder collaboration in effectively developing and imple…
S104
https://dig.watch/event/india-ai-impact-summit-2026/welfare-for-all-ensuring-equitable-ai-in-the-worlds-democracies — And I think, you know, more globally, you know, efforts like the Hiroshima AI process, there are sort of all these pre -…
S105
The History of Cyber Diplomacy Future — Pascal Lamy challenged traditional approaches to international cooperation, arguing that “Classical multilateralism… i…
S106
Sangeet Paul Choudary — Another issue that affects drivers arises from the implementation of surge pricing on ride-hailing platforms. Platforms …
S107
© 2019, United Nations — In the digital economy, platforms unilaterally control massive amounts of information about producers and consumer…
S108
7th edition — The net neutrality debate triggers linguistic debates. Proponents of net neutrality focus on Internet ‘users’, while the…
S109
Digital democracy and future realities | IGF 2023 WS #476 — These corporations, with their established platforms and significant influence, can create barriers for competing servic…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Professor Seth Bullock
2 arguments · 165 words per minute · 1587 words · 575 seconds
Argument 1
Population‑scale AI should enable coordination of whole communities rather than single‑user queries
EXPLANATION
Professor Bullock argues that AI should move beyond answering individual questions and be designed to support entire populations facing common challenges, such as floods or disease outbreaks. By coordinating many users simultaneously, AI can share intelligence and improve collective outcomes.
EVIDENCE
He explains that instead of a single person asking an AI a question, AI can be built to help a whole population affected by a flood, a disease, or tax filing, enabling coordination and better outcomes for many people at once [28-30]. He adds that achieving this requires new technologies, partnerships between researchers, companies, and governments, and a shift away from purely commercial AI tools [31-33].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Bullock’s claim that AI should support entire populations and enable coordinated action is corroborated by S4, which emphasizes designing AI for whole-community challenges rather than individual queries [S4].
MAJOR DISCUSSION POINT
Population‑scale coordination
AGREED WITH
Antaraa Vasudev, Professor Nirav Ajmeri
Argument 2
Future AI agents will act purposively and communicate with each other, requiring embedded social responsibility
EXPLANATION
Bullock warns that upcoming AI systems will be agentic, pursuing specific goals and interacting with other agents, which could lead to unintended resource consumption and conflicts. Embedding social responsibility into these agents is essential to prevent harmful cascades.
EVIDENCE
He describes a next wave of AI where agents have purposive aims, communicate, and may task each other, creating cascades of requests that consume resources and could disadvantage others, emphasizing the need for social responsibility in their design [58-65]. He illustrates how a trivial request could trigger a large chain of agent interactions, highlighting potential unforeseen consequences [66-68].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need for socially responsible, purposive AI agents that interact at scale is highlighted in S4 and further detailed in S30, which discusses agent-first interfaces and agents talking to agents [S4][S30].
MAJOR DISCUSSION POINT
Agentic AI and responsibility
Professor Nirav Ajmeri
2 arguments · 148 words per minute · 695 words · 280 seconds
Argument 1
Multi‑agent approaches can model socio‑technical systems to achieve socially optimal outcomes
EXPLANATION
Ajmeri states that multi‑agent systems can capture the interaction of people, organizations, and technical tools, allowing the design of solutions that aim for global optima rather than local, individual optima. This can improve social welfare in domains such as ride‑sharing and pandemic prevention.
EVIDENCE
He explains that current ride-sharing optimizes for each individual, leading to local maxima, whereas a multi-agent approach can target a global optimum that maps to social welfare, questioning what social welfare means and how to achieve it [47-52]. He also mentions that epidemic and pandemic prevention are inherently multi-agent problems requiring coordinated solutions [55-56].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Ajmeri’s argument that multi-agent systems can achieve global, socially optimal outcomes is supported by S4, which describes his view on moving from individual to collective optimization [S4].
MAJOR DISCUSSION POINT
Socio‑technical optimization
AGREED WITH
Professor Seth Bullock, Antaraa Vasudev
Argument 2
Intelligence emerges from interacting agents; suitable for problems like ride‑sharing, pandemics, and social welfare
EXPLANATION
Ajmeri emphasizes that intelligence is not isolated but arises from the interaction of many agents, making multi‑agent frameworks appropriate for complex societal challenges. By modeling these interactions, AI can help design fair and effective collective decisions.
EVIDENCE
He notes that intelligence emerges when social entities (people, organizations) and technical tools (intelligent agents, applications) interact, and that this structure fits problems such as ride-sharing, where individual optimization leads to sub-optimal global outcomes, and public health crises that require coordinated action [47-52][55-56].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S4 provides additional context for Ajmeri’s point that intelligence emerges from the interaction of many agents and is apt for complex societal problems such as ride-sharing and pandemic response [S4].
MAJOR DISCUSSION POINT
Emergent intelligence in multi‑agent systems
Antaraa Vasudev
2 arguments · 170 words per minute · 883 words · 310 seconds
Argument 1
AI can both help citizens voice concerns and optimize governmental processes; transparency is essential
EXPLANATION
Vasudev explains that AI is currently used to enable citizens with limited legal knowledge to ask questions, air grievances, and understand policies, while also being employed for large‑scale optimization of government functions. She stresses that transparent, accessible, and equitable frameworks are needed to ensure AI benefits are fairly distributed.
EVIDENCE
She describes AI-driven citizen engagement tools that clarify doubts, collect grievances, and explain policy frameworks, alongside AI-based optimization for a country as large and diverse as India, calling for transparent and equitable frameworks before scaling AI solutions [109-115].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Vasudev’s emphasis on AI-enabled citizen engagement and the need for transparent, equitable frameworks is echoed in S4, which discusses AI tools for large-scale government optimization and calls for transparency [S4].
MAJOR DISCUSSION POINT
Civic engagement and transparent AI governance
AGREED WITH
Professor Manjunath, Speaker 3
Argument 2
AI can empower citizens by aggregating massive feedback and informing policy decisions
EXPLANATION
Vasudev highlights a project with the Maharashtra government where AI collected hundreds of thousands of citizen inputs via a chatbot, aggregated them, and fed the results back into policy making, ensuring future laws consider citizen perspectives.
EVIDENCE
She details how Civis built an easy-to-use chatbot that gathered 3.8 lakh (380,000) responses from 37 districts, aggregated the feedback, and produced the publicly available Viksit Maharashtra report, after which the state mandated that upcoming laws factor in citizen input [121-130].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The Maharashtra chatbot project described in S4, which gathered 3.8 lakh responses and fed them into policy making, directly supports this argument [S4].
MAJOR DISCUSSION POINT
Citizen‑centric policy design
AGREED WITH
Professor Seth Bullock, Professor Nirav Ajmeri
Professor Manjunath
4 arguments · 169 words per minute · 1529 words · 540 seconds
Argument 1
Recommendation engines act as powerful nudges that reshape user preferences and can hide bias
EXPLANATION
Manjunath argues that recommendation systems learn users’ likes and dislikes through utility functions defined by platform owners, subtly steering preferences over time. This nudging effect can be large and may conceal underlying biases.
EVIDENCE
He explains that recommendation systems act as learning agents that present options, capture reactions via utility functions, and over time can dramatically change user preferences, acting as advertisements that heavily influence population tastes [77-99].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Manjunath’s claim about recommendation systems learning utility functions and subtly shifting preferences is substantiated by S4, which outlines how such systems act as nudges and can conceal bias [S4].
MAJOR DISCUSSION POINT
Algorithmic nudging and hidden bias
Argument 2
Governments should act as enablers and monitors, not micromanage technology development
EXPLANATION
Manjunath cautions that when governments overly direct technology projects, such as India’s CDOT or Japan’s Fifth Generation computing, they often fail. He recommends that governments enable private innovation, monitor outcomes, and intervene only to prevent harms.
EVIDENCE
He cites the failure of India’s CDOT after government micromanagement and Japan’s Fifth Generation AI project as examples of over-directed initiatives, then argues that governments should enable, monitor, and nudge rather than control technology development [139-152][155-162].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S4 cites Manjunath’s examples of CDOT and Japan’s Fifth Generation project to illustrate the pitfalls of government micromanagement and his recommendation for an enabling role [S4].
MAJOR DISCUSSION POINT
Government role as enabler vs. director
AGREED WITH
Antaraa Vasudev, Speaker 3
Argument 3
Educators face challenges with AI‑generated work lacking depth and inspiration
EXPLANATION
Manjunath observes that AI‑produced content, while correct, often lacks the ‘soul’ and inspirational quality of human‑crafted material, making it insufficient for educational purposes.
EVIDENCE
He notes that AI-generated presentations and essays are accurate but have no soul and are not inspiring, highlighting a limitation for teaching and learning [375-378].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Manjunath’s observation that AI-generated content is accurate but lacks ‘soul’ and inspiration is documented in S4, providing direct support for this concern [S4].
MAJOR DISCUSSION POINT
Quality of AI‑generated educational content
Argument 4
Over‑directed government projects often fail; better to enable private innovation while monitoring risks
EXPLANATION
Reiterating his earlier point, Manjunath emphasizes that government‑driven tech projects frequently underperform, and a more effective approach is to let the private sector lead while the state ensures safety and fairness.
EVIDENCE
He repeats the CDOT and Fifth Generation examples to illustrate failure of government-led tech, and advocates for an enabling role with monitoring and risk mitigation [139-152][155-162].
MAJOR DISCUSSION POINT
Policy approach to AI development
Speaker 3
2 arguments · 179 words per minute · 117 words · 39 seconds
Argument 1
Regulatory guardrails (e.g., bans for minors) are needed to limit amplified harms
EXPLANATION
Speaker 3 points out that countries like Spain and Australia have imposed strict restrictions on social‑media platforms for children, serving as experimental guardrails that could inform similar measures for AI.
EVIDENCE
He mentions that Spain and Australia have placed severe restrictions on social-media companies to protect children, describing these as interesting experiments whose outcomes need to be observed [409-416].
MAJOR DISCUSSION POINT
Early regulatory safeguards for vulnerable users
AGREED WITH
Antaraa Vasudev, Professor Manjunath
Argument 2
Early regulatory steps (e.g., social‑media restrictions for youth) can signal accountability and shape industry behavior
EXPLANATION
Speaker 3 argues that imposing early limits on technology use by minors sends a clear signal to industry that regulation is possible, encouraging responsible behavior even if the measures are imperfect.
EVIDENCE
He explains that the bans in Spain and Australia, though not easy to implement, represent a step toward accountability that may influence how AI companies operate [409-416].
MAJOR DISCUSSION POINT
Regulation as a catalyst for industry responsibility
Audience Member 1
1 argument, 100 words per minute, 22 words, 13 seconds
Argument 1
Concern about AI’s effect on management consulting and the need to focus on human‑centric tasks
EXPLANATION
The audience member asks how AI will impact management consultants, expressing worry that AI might replace human roles and emphasizing the importance of retaining tasks that require human creativity and inspiration.
EVIDENCE
He poses the question about AI’s impact on management consultants and the business, seeking insight into replacement versus value creation [365].
MAJOR DISCUSSION POINT
AI impact on consulting profession
Audience Member 3
2 arguments, 142 words per minute, 94 words, 39 seconds
Argument 1
AI in governance currently shifts power toward institutions rather than citizens
EXPLANATION
The audience member asserts that, despite AI’s potential to empower citizens, current implementations tend to concentrate power with institutions that have the resources to leverage AI.
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S36 offers a counterpoint, noting that institutions possess the resources to dominate AI deployment, suggesting a power shift toward institutions rather than citizens [S36].
MAJOR DISCUSSION POINT
Power dynamics in AI‑enabled governance
Argument 2
Early regulatory steps (e.g., social‑media restrictions for youth) can signal accountability and shape industry behavior
EXPLANATION
The audience member highlights that imposing restrictions on technology for minors can act as a precedent for AI regulation, encouraging responsible industry practices.
MAJOR DISCUSSION POINT
Regulatory precedents for AI
Audience Member 4
2 arguments, 127 words per minute, 91 words, 42 seconds
Argument 1
Algorithms learn utility functions set by owners, leading to drift in user preferences over time
EXPLANATION
The audience member notes that recommendation algorithms are programmed with utility functions defined by platform owners, which can gradually shift users’ preferences in directions aligned with those owners’ goals.
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The discussion in S4 about recommendation systems using owner-defined utility functions and causing preference drift aligns with this audience observation [S4].
MAJOR DISCUSSION POINT
Algorithmic ownership and preference drift
Argument 2
AI models trained on copyrighted material raise consent and IP issues; legal resolution is pending
EXPLANATION
The audience member raises concerns that AI systems are trained on artists’ and creators’ works without consent, creating intellectual‑property disputes that are currently being litigated.
EVIDENCE
He asks whether AI-generated content violates creators’ rights, citing examples of singers whose voices are reproduced and questioning the ethical implications [462-467].
MAJOR DISCUSSION POINT
IP and consent in AI training data
Audience Member 5
1 argument, 186 words per minute, 196 words, 62 seconds
Argument 1
Instant AI feedback can bypass step‑by‑step learning, risking shallow understanding; calls for structured guidelines
EXPLANATION
The audience member worries that AI tools providing immediate answers prevent students from engaging in the gradual learning process, and suggests the need for regulatory or guideline frameworks to ensure educational AI supports deep learning.
EVIDENCE
He describes how instant AI feedback leads students to skip foundational steps, and asks whether professors have been approached to develop structured, step-by-step AI tools for education [481-486].
MAJOR DISCUSSION POINT
AI in education and learning depth
Kushe Bahl
2 arguments, 188 words per minute, 1945 words, 620 seconds
Argument 1
AI will reshape rather than simply replace jobs, creating new value through personalization
EXPLANATION
Bahl explains that while AI can automate routine tasks, its greatest economic impact comes from enabling personalized services that generate new revenue streams, thereby reshaping job roles rather than merely eliminating them.
EVIDENCE
He cites examples such as AI replacing call-center work but notes limited cost savings, and emphasizes that personalized customer engagement engines can increase revenue by 10 % with high margins, delivering far greater value than simple cost cuts [186-199].
MAJOR DISCUSSION POINT
Job transformation and value creation
AGREED WITH
Professor Seth Bullock
Argument 2
Reshape (as answer to rapid‑fire question about AI’s impact on jobs)
EXPLANATION
In the rapid‑fire segment, Bahl succinctly states that AI will reshape jobs rather than merely replace or polarize them.
EVIDENCE
He answers “Reshape” to the moderator’s rapid-fire question about AI’s impact on jobs in India [257].
MAJOR DISCUSSION POINT
Rapid‑fire view on job impact
Moderator
1 argument, 147 words per minute, 1619 words, 659 seconds
Argument 1
Rapid‑fire insights highlight differing views on bias, power shift, and who benefits from AI
EXPLANATION
The moderator summarizes a rapid‑fire round where panelists offered brief, contrasting perspectives on algorithmic bias, the direction of power in AI‑governance, and whether companies or employees stand to gain most from AI.
EVIDENCE
During the rapid-fire, Antaraa said AI shifts power to citizens, Manjunath argued it shifts to institutions, and Bahl answered that AI will reshape jobs, illustrating varied viewpoints on bias, power, and benefit distribution [237-245][248-251][257][281-284].
MAJOR DISCUSSION POINT
Diverse panel perspectives in rapid fire
Agreements
Agreement Points
AI should be designed for population‑scale coordination rather than isolated individual queries
Speakers: Professor Seth Bullock, Antaraa Vasudev, Professor Nirav Ajmeri
Population‑scale AI should enable coordination of whole communities rather than single‑user queries
AI can empower citizens by aggregating massive feedback and informing policy decisions
Multi‑agent approaches can model socio‑technical systems to achieve socially optimal outcomes
All three speakers stress that AI systems need to operate at the scale of whole populations or societies, coordinating many users (e.g., flood victims, citizens providing feedback) and moving beyond single-question interactions to achieve collective benefits [28-30][31-33][121-130][47-52][55-56].
POLICY CONTEXT (KNOWLEDGE BASE)
This view reflects an emerging consensus that AI systems should function as shared digital public infrastructure, enabling coordinated outcomes across whole societies rather than siloed personal assistants. The need for systematic, population-scale approaches is highlighted in discussions on building digital public infrastructure for AI [S51] and in Professor Bullock’s argument that coordination itself is a form of intelligence supporting entire populations [S50].
Transparency and accountable governance are essential for AI deployment in the public sector
Speakers: Antaraa Vasudev, Professor Manjunath, Speaker 3
AI can both help citizens voice concerns and optimize governmental processes; transparency is essential
Governments should act as enablers and monitors, not micromanage technology development
Regulatory guardrails (e.g., bans for minors) are needed to limit amplified harms
Vasudev calls for transparent, accessible, and equitable AI frameworks for citizen engagement, Manjunath warns against government micromanagement and advocates an enabling, monitoring role, while Speaker 3 points to early regulatory experiments as necessary safeguards [109-115][288-290][139-152][155-162][409-416].
POLICY CONTEXT (KNOWLEDGE BASE)
The UN Security Council emphasized that AI systems must be transparent, explainable and accountable to maintain public trust and ensure ethical outcomes, framing these principles as core to AI governance in the public sector [S44].
AI will reshape jobs and create new value rather than simply replace workers
Speakers: Kushe Bahl, Professor Seth Bullock
AI will reshape rather than simply replace jobs, creating new value through personalization
AI will break down barriers between people, enabling richer interactions that were previously impossible
Bahl emphasizes that AI’s biggest economic impact comes from personalized services that generate new revenue, reshaping roles, while Bullock highlights AI’s potential to connect people and enable capabilities beyond human limits, both indicating a transformation rather than outright replacement [257][186-199][301-309].
POLICY CONTEXT (KNOWLEDGE BASE)
Multiple expert panels have argued that AI will transform work by augmenting human capabilities and generating new employment opportunities, rather than causing wholesale displacement. This perspective appears in discussions on AI’s impact on jobs in India [S46], policy-focused forums stressing complementary design choices [S47][S48][S49], and ILO research showing generative AI can enhance employment prospects [S65][S66].
Building public understanding and capacity is crucial for responsible AI adoption
Speakers: Kushe Bahl, Professor Seth Bullock
Students need to focus on how to use AI across fields and equip themselves with relevant skills
Uplifting public understanding of AI will protect against malicious uses
Both Bahl and Bullock argue that widespread AI literacy (students learning to apply AI in their domains, and the general public grasping AI’s implications) is essential to mitigate risks and harness benefits [226-230][250-251].
POLICY CONTEXT (KNOWLEDGE BASE)
The AI Policy Research Roadmap calls for capacity-building initiatives to raise awareness and enable effective navigation of AI systems in the public sector [S43]. Complementary efforts on AI literacy, such as introducing AI education from primary school onward, reinforce the policy priority of public understanding [S59][S57].
Similar Viewpoints
Both speakers stress that as AI becomes more autonomous and agentic, its design must incorporate social responsibility and governance mechanisms to prevent unintended harms, calling for collaborative oversight rather than unchecked deployment [58-65][139-152][155-162].
Speakers: Professor Seth Bullock, Professor Manjunath
Future AI agents will act purposively and communicate with each other, requiring embedded social responsibility
Governments should act as enablers and monitors, not micromanage technology development
Both see the state’s role as facilitating transparent, citizen‑centric AI tools while avoiding heavy‑handed control, emphasizing enabling frameworks that protect public interest [109-115][288-290][139-152][155-162].
Speakers: Antaraa Vasudev, Professor Manjunath
AI can empower citizens by aggregating massive feedback and informing policy decisions
Governments should act as enablers and monitors, not micromanage technology development
Unexpected Consensus
Agreement across diverse participants that early regulatory interventions (e.g., bans for minors) are a useful experiment for AI governance
Speakers: Speaker 3, Professor Manjunath, Audience Member 3
Regulatory guardrails (e.g., bans for minors) are needed to limit amplified harms
Governments should enable and monitor rather than micromanage, implying a need for early safeguards
AI in governance currently shifts power toward institutions, suggesting regulation is required
While the speakers came from different domains (policy, academia, and the audience), their statements converge on the idea that early, targeted regulatory steps are valuable for managing AI’s societal impact, a consensus not explicitly anticipated at the start of the panel [409-416][139-152][155-162].
POLICY CONTEXT (KNOWLEDGE BASE)
High-level consensus on safeguarding children through targeted AI restrictions has been documented in UN-backed child-focused AI governance forums, which view early bans for minors as a pragmatic experiment [S61]. Similar multi-stakeholder dialogues favor targeted, harm-focused interventions over sweeping legislation [S60][S52].
Overall Assessment

The panel largely converged on four core themes: (1) AI must be built for collective, population‑scale coordination; (2) transparent, accountable governance and early regulatory guardrails are essential; (3) AI will reshape rather than merely replace jobs, creating new value; and (4) capacity building and public understanding are critical for responsible adoption.

High consensus across speakers on these themes, indicating a shared belief that AI’s future benefits hinge on coordinated design, transparent governance, and widespread capacity development. This alignment suggests strong support for policies that promote collective AI solutions, enforce transparency, and invest in education and public awareness.

Differences
Different Viewpoints
Who gains power from AI in governance – citizens or institutions
Speakers: Antaraa Vasudev, Professor Manjunath
AI can empower citizens by aggregating massive feedback and informing policy decisions (Antaraa)
AI in governance shifts power toward institutions that have the resources to invest in and control AI (Manjunath)
Antaraa asserts that AI shifts power to citizens by enabling their voices to be heard (e.g., the Maharashtra chatbot project) [237][121-130]. Manjunath counters that, in practice, AI gives institutions the advantage because they control the data, funding and deployment, making it hard for citizens to compete [294-296].
Approach to government involvement in AI – enable‑and‑monitor vs regulatory guardrails
Speakers: Professor Manjunath, Speaker 3, Antaraa Vasudev
Governments should act as enablers and monitors, avoiding micromanagement of technology projects (Manjunath)
Early regulatory steps such as bans for minors are needed to limit amplified harms and signal accountability (Speaker 3)
AI governance requires transparent, equitable frameworks before scaling, implying some level of oversight (Antaraa)
Manjunath warns that government micromanagement leads to failure (e.g., CDOT, Japan’s Fifth Generation) and recommends an enabling role with monitoring [139-152][155-162]. Speaker 3 argues that strict bans for children (Spain, Australia) are useful guardrails, suggesting a more proactive regulatory stance [409-416]. Antaraa calls for transparent, accessible frameworks, indicating a need for structured oversight rather than pure hands-off enabling [109-115]. The three positions diverge on how much direct regulation is appropriate.
POLICY CONTEXT (KNOWLEDGE BASE)
Recent policy debates highlight a split between action-oriented, enable-and-monitor models and calls for explicit guardrails. Some reports advocate focusing on practical outcomes and innovation-friendly approaches rather than heavy regulation [S52], while others stress the necessity of guardrails to balance trust and risk [S53][S54][S60].
How bias in recommendation systems should be addressed – hide bias vs reduce bias
Speakers: Professor Manjunath, Moderator (implicit)
Algorithms tend to hide bias and may increase it over time (Manjunath)
The rapid‑fire question asked whether algorithms today are more likely to reduce bias or hide bias, implying an expectation of reduction (Moderator)
When asked about bias, Manjunath responded that algorithms are more likely to hide bias and may even increase it, showing skepticism about current mitigation efforts [239-245]. The moderator’s framing of the question suggested a hope that bias could be reduced, revealing a tension between expectations of bias reduction and Manjunath’s assessment that bias is being concealed.
Unexpected Differences
Perceived impact of AI on jobs – replacement vs value creation
Speakers: Audience Member 1, Kushe Bahl
Concern that AI will replace management consultants and reduce human‑centric tasks (Audience Member 1)
AI will reshape jobs by creating new value through personalization rather than simply replacing roles (Bahl)
The audience member expressed anxiety that AI might replace consultants, whereas Bahl argued that the real economic benefit comes from AI-enabled personalization that creates new revenue streams and reshapes work, not wholesale replacement [365][186-199][257]. This contrast between fear of job loss and optimism about job transformation was not anticipated given the broader discussion on AI for collective good.
POLICY CONTEXT (KNOWLEDGE BASE)
Expert analyses consistently argue that AI is more likely to create value and new roles than to replace workers outright, countering alarmist narratives about job loss. This view is supported by discussions on AI reshaping work in India and global forums, as well as ILO and Anthropic reports highlighting augmentation over replacement [S46][S47][S48][S49][S64][S65][S66].
Effectiveness of AI‑generated educational content
Speakers: Professor Manjunath, Professor Seth Bullock
AI‑generated essays and presentations lack ‘soul’ and are not inspiring for learning (Manjunath)
AI can break down barriers and enable richer, meaningful interactions (Bullock)
Manjunath criticizes AI output for being correct but soulless, limiting its educational value [375-378]. Bullock, while not directly addressing education, promotes AI as a means to connect people and facilitate deep collective interaction, implying a more positive view of AI’s educational potential [301-307]. The tension between AI’s perceived superficiality and its potential to enhance learning was not a primary focus of the panel, making it an unexpected point of disagreement.
Overall Assessment

The panel displayed several substantive disagreements, chiefly around who benefits from AI in governance (citizens vs institutions), the appropriate level of government intervention (enabling vs regulatory guardrails), and how bias in algorithmic systems should be handled. While there was broad consensus that AI should serve collective good and that system‑level coordination is essential, the pathways to achieve these goals diverged sharply.

Moderate to high: the core philosophical split on power dynamics and regulatory philosophy could shape policy outcomes significantly. The disagreements suggest that without a shared framework for governance, AI initiatives may oscillate between citizen‑centric empowerment and institutional control, potentially limiting the realization of inclusive, equitable AI benefits.

Partial Agreements
Both agree that AI must move beyond isolated individual interactions toward coordinated, system‑level solutions. Bullock emphasizes population‑scale coordination for floods, disease, taxes [28-30][31-33]; Ajmeri stresses that intelligence emerges from interacting agents and that multi‑agent approaches are suited for collective problems like ride‑sharing and pandemics [36-46][47-56]. They differ in terminology (population‑scale AI vs multi‑agent modeling) but share the same overarching goal.
Speakers: Professor Seth Bullock, Professor Nirav Ajmeri
Population‑scale AI should coordinate whole communities rather than answer single‑user queries (Bullock)
Multi‑agent systems can model socio‑technical interactions to achieve socially optimal outcomes (Ajmeri)
Both see AI as a tool for enhancing citizen participation and collective intelligence. Antaraa describes AI‑driven citizen engagement platforms and the need for transparency [109-115][121-130]; Bullock envisions AI breaking barriers between people and enabling richer interaction with governments [301-307]. Their convergence is on the desired outcome (empowered citizens), while their focus differs (transparent platforms vs agentic coordination).
Speakers: Antaraa Vasudev, Professor Seth Bullock
AI can empower citizens by providing access to information and enabling collective decision‑making (Antaraa)
AI agents that communicate and coordinate can give people a greater sense of connection and a voice in collective decisions (Bullock)
Takeaways
Key takeaways
AI should move from individual‑query tools to population‑scale coordination systems that can help whole communities manage floods, disease outbreaks, tax collection, etc. (Prof. Seth Bullock)
Multi‑agent and socio‑technical approaches are essential for problems where many human and technical agents interact; these can improve social welfare in domains such as ride‑sharing, pandemic response, and public policy (Prof. Nirav Ajmeri)
Recommendation and advertising algorithms act as powerful nudges that can reshape user preferences and often hide bias; the utility functions they optimise are set by owners, not users (Prof. Manjunath)
AI in governance can both amplify citizen voice and optimise government processes, but transparency, accessibility, and equity must be built into frameworks (Antaraa Vasudev)
Governments are better positioned as enablers and monitors rather than micromanagers of technology development; over‑directed projects tend to fail (Prof. Manjunath)
AI will more likely reshape jobs than simply replace them, creating new value through personalization and automation of tasks that are infeasible for humans (Kushe Bahl)
In education, unchecked AI feedback can bypass step‑by‑step learning, leading to shallow understanding; structured guidelines are needed (Audience Q5, Prof. Manjunath)
Ethical concerns around AI‑generated content and IP arise because models are trained on copyrighted material without consent; consent‑based data collection is advocated (Prof. Seth Bullock)
Early regulatory steps (e.g., age‑based bans on social media) signal accountability and can influence industry behaviour, though they are imperfect (Audience Q3, Prof. Seth Bullock)
Resolutions and action items
Develop transparent, citizen‑centric AI frameworks for public services, emphasizing consent‑based data collection (Antaraa Vasudev)
Encourage partnerships between researchers, private firms, and governments to build AI systems that serve whole populations rather than individual queries (Prof. Seth Bullock)
Create guidelines for AI use in education that enforce step‑by‑step learning and prevent over‑reliance on instant AI answers (Audience Q5, Prof. Manjunath)
Promote the design of AI agents with embedded social responsibility to mitigate unintended resource consumption and conflicts (Prof. Seth Bullock)
Monitor and evaluate early regulatory experiments (e.g., youth‑focused bans) to inform future AI governance policies (Audience Q3, Prof. Seth Bullock)
Unresolved issues
How to concretely shift AI‑enabled governance power toward citizens rather than institutions; the current perception is that power still leans toward institutions
Effective methods for reducing hidden bias in recommendation systems and ensuring algorithms are accountable to public values
Specific regulatory mechanisms that balance transparency with effectiveness of AI in public systems; no consensus reached
Legal and practical solutions for intellectual‑property rights of creators whose works are used to train generative models
Detailed strategies for up‑skilling the workforce and integrating AI into job roles without causing large‑scale displacement
Implementation pathways for consent‑based data ecosystems at scale, especially in health or civic domains
Standardised, enforceable guidelines for AI use in classrooms and assessment of learning outcomes
Suggested compromises
Adopt a transparency‑first approach for AI in public systems while still pursuing effectiveness, acknowledging that transparency is a prerequisite for trust (Antaraa Vasudev)
Governments act as enablers and monitors rather than direct developers, allowing private innovation to flourish while providing oversight (Prof. Manjunath)
Introduce targeted, age‑based restrictions on AI‑enabled platforms as an interim safeguard while broader regulatory frameworks are developed (Audience Q3)
Balance AI‑driven job automation with a focus on augmenting human‑centric tasks, reshaping roles instead of pure replacement (Kushe Bahl)
Combine multi‑agent system design with ethical guidelines to ensure that emergent behaviours align with societal welfare (Prof. Nirav Ajmeri & Prof. Seth Bullock)
Thought Provoking Comments
Coordination is intelligence. Instead of AI answering individual questions, we can design AI systems that support whole populations—e.g., coordinating flood response, disease management, or tax collection.
Reframes AI from a personal tool to a societal coordination mechanism, highlighting a shift in purpose and scale.
Opened a new line of discussion about population‑level AI, prompting follow‑up questions on multi‑agent systems and leading the panel to explore how AI can be structured for collective coordination rather than isolated queries.
Speaker: Professor Seth Bullock
When AI becomes agentic, even a trivial request (like a picture of a dog on a skateboard) can trigger cascades of interactions that consume resources and potentially disadvantage others; we need to embed social responsibility into these agents.
Identifies a hidden risk of emergent, large‑scale AI interactions and calls for proactive ethical design.
Shifted the tone from optimism to caution, steering the conversation toward the unintended consequences of AI ecosystems and influencing later remarks about regulation and public understanding.
Speaker: Professor Seth Bullock
Recommendation systems are learning agents that actively shape preferences; depending on the utility function they optimize, they can dramatically alter users’ tastes over time, essentially acting as powerful advertisements.
Highlights how algorithmic design directly influences human behavior, moving beyond the notion of neutral tools.
Deepened the analysis of algorithmic nudging, leading to audience concerns about autonomy and prompting further discussion on bias, transparency, and the need for oversight.
Speaker: Professor Manjunath
In Maharashtra, we built a simple chatbot that collected 380,000 citizen inputs (voice notes, texts, drawings) and fed them into the policy‑making process; now every law must consider this citizen feedback.
Provides a concrete, scalable example of AI empowering citizens in governance, illustrating practical impact.
Grounded the abstract debate in a real‑world case, encouraging other panelists to discuss how AI can be used for civic engagement and influencing the later focus on transparency and decentralization.
Speaker: Antaraa Vasudev
Government micromanagement of technology (e.g., India’s CDOT and Japan’s Fifth Generation computing) often leads to failure; governments should act as enablers and monitors, not directors of tech development.
Offers historical evidence that challenges the assumption that state control ensures beneficial AI outcomes.
Prompted a re‑evaluation of the appropriate role of policy, influencing subsequent remarks about regulatory guardrails, rapid‑fire answers, and the need for agile, not heavy‑handed, governance.
Speaker: Professor Manjunath
AI should not just replace humans to cut costs; the real value lies in unlocking capabilities humans can’t achieve, like personalized customer engagement engines that can increase revenue far beyond the savings from automation.
Distinguishes between superficial cost‑cutting and transformative value creation, reframing the job‑impact narrative.
Redirected the conversation from fear of job loss to opportunities for new value, influencing later discussion on reshaping jobs and supporting small businesses.
Speaker: Kushe Bahl
My generation worried about TV; we adapted and became savvy consumers. Today’s youth will similarly adapt to AI, and we should listen to them rather than impose adult fears.
Provides a historical analogy that normalizes technological anxiety and emphasizes intergenerational dialogue.
Eased audience concerns, shifted the discussion toward empowerment and education, and set the stage for audience questions about youth and AI.
Speaker: Professor Seth Bullock
The biggest danger is not ChatGPT but platforms that exploit dopamine circuits (e.g., Instagram). AI amplifies existing harms; we need consent‑based models where users opt in to data sharing, not covert data harvesting.
Prioritizes consent and data ethics, pointing out that AI’s risks are extensions of existing platform issues.
Reinforced calls for transparent, consent‑driven AI systems, influencing the rapid‑fire debate on bias, transparency, and the role of government in setting guardrails.
Speaker: Professor Seth Bullock
Overall Assessment

These pivotal comments collectively steered the panel from a broad, metaphor‑driven introduction toward concrete, systemic considerations of AI. Professor Bullock’s framing of coordination and agentic cascades introduced the need for societal‑scale design and ethical safeguards, while Professor Manjunath’s insights on recommendation systems and governmental overreach highlighted hidden influences and policy pitfalls. Antaraa’s Maharashtra case grounded the discussion in real‑world civic empowerment, and Kushe Bahl’s distinction between cost‑cutting and value creation reshaped the narrative around job impact. Together, these remarks deepened the conversation, prompted new topics (population‑scale AI, consent, governance models), and shifted the tone from speculative optimism to a nuanced, solution‑oriented dialogue.

Follow-up Questions
How can AI systems be designed to support whole populations (e.g., disaster response, tax collection) rather than individual queries?
Identifies a need to shift AI from individual assistance to coordinated population‑level services, requiring new technologies and delivery models.
Speaker: Professor Seth Bullock
What partnership models between researchers, companies, non‑profits, and governments are needed to develop AI for populations?
Highlights the importance of cross‑sector collaboration to create and deploy population‑scale AI solutions.
Speaker: Professor Seth Bullock
What interventions in AI promotion are required to avoid the ‘path of least resistance’ commercial tools and ensure socially beneficial outcomes?
Calls for policy or strategic guidance to steer AI development toward public‑good applications rather than purely profit‑driven tools.
Speaker: Professor Seth Bullock
How can social responsibility be embedded into agentic AI to prevent resource contention and unintended societal consequences?
Points to the need for research on designing AI agents that consider the impact of their actions on other agents and on society.
Speaker: Professor Seth Bullock
What are the emergent behaviors and cascading resource consumption effects when AI agents interact at scale (e.g., trivial requests causing large cascades)?
Raises concerns about scalability and externalities of interconnected AI agents, requiring study of systemic impacts.
Speaker: Professor Seth Bullock
What governance frameworks are needed to ensure transparency, accessibility, and equity when AI is used in public systems?
Emphasizes the necessity of designing transparent and equitable AI frameworks before large‑scale deployment in governance.
Speaker: Antaraa Vasudev
How should regulatory and policy frameworks be designed to prevent premature racing to the next AI model without adequate safeguards?
Calls for research on creating timely regulations that balance innovation with safety and public interest.
Speaker: Antaraa Vasudev
How do recommendation systems shape human preferences and potentially amplify bias, and how can this impact be measured?
Identifies a gap in understanding the magnitude of preference manipulation and bias amplification by recommendation algorithms.
Speaker: Professor Manjunath
What methods can be developed to detect and mitigate hidden bias in recommendation algorithms?
Points to the need for technical solutions and standards to address bias that is not immediately visible.
Speaker: Professor Manjunath
What is the effectiveness of AI and social‑media restrictions for minors (e.g., bans in Spain and Australia) and what guardrails are appropriate?
Seeks empirical evaluation of regulatory experiments aimed at protecting children from AI‑driven harms.
Speaker: Speaker 3 (unnamed) and Professor Manjunath
How can appropriate guardrails be established for AI deployment in the public sector without stifling innovation?
Calls for research on balancing rapid AI adoption with necessary oversight in government contexts.
Speaker: Professor Manjunath
How can AI be leveraged to create value for small businesses and self‑employed workers rather than merely replacing jobs?
Suggests investigation into low‑cost AI solutions that augment income for millions of micro‑entrepreneurs.
Speaker: Kushe Bahl
What design principles and regulatory guidelines are needed for AI tools in education to promote step‑by‑step learning rather than instant gratification?
Highlights a gap in current AI‑enabled educational tools and the need for structured, pedagogically sound frameworks.
Speaker: Audience Member 5; Professor Manjunath
What are the legal and ethical implications of AI‑generated content that uses works of deceased artists, and how should consent and IP be managed?
Raises concerns about copyright, consent, and the need for new legal frameworks for AI‑generated creative works.
Speaker: Professor Seth Bullock
How can AI systems aggregate individual preferences into collective decisions while ensuring fairness, transparency, and accountability?
Identifies a research challenge in designing AI‑mediated collective decision‑making mechanisms that maintain trust.
Speaker: Professor Nirav Ajmeri
How can AI increase citizens’ access to government entitlements and benefits through decentralized, disaggregated control mechanisms?
Calls for exploration of AI‑driven platforms that reduce information asymmetry and improve service delivery.
Speaker: Antaraa Vasudev
What mechanisms can enable AI to break down barriers between people (language, expertise, distance) to give citizens a real voice in governance?
Suggests research into AI‑facilitated rich, large‑scale citizen‑government interactions beyond simple voting.
Speaker: Professor Seth Bullock
What are the psychological and social impacts of young people relying heavily on conversational AI (e.g., ChatGPT) for personal issues, and how should society respond?
Points to a need for interdisciplinary study on AI’s influence on youth mental health and family dynamics.
Speakers: Professor Seth Bullock, Kushe Bahl, and Antaraa Vasudev
How can AI regulation be structured to shift power towards citizens rather than institutions in governance contexts?
Indicates ongoing debate and need for research on power dynamics shaped by AI‑enabled governance tools.
Speakers: Antaraa Vasudev (rapid‑fire round) and subsequent discussion

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.