Harnessing Collective AI for India’s Social and Economic Development

20 Feb 2026 13:00h - 14:00h

Session at a glance

Summary

This panel discussion explored the role of artificial intelligence in promoting collective good and addressing societal challenges, featuring experts from academia, government, and industry. The moderator framed the conversation using an Avengers metaphor, with each panelist representing a different superhero approach to AI’s potential and challenges.


The discussion began by examining whether societal problems stem from lack of intelligence or coordination failures. Professor Seth Bullock argued that coordination itself is intelligence, advocating for AI systems designed to support entire populations rather than just individuals. Professor Nirav Ajmeri emphasized multi-agent systems where intelligence emerges from interactions between entities, noting that most problems are socio-technical in nature requiring global rather than local optimization.


A significant portion focused on AI’s influence on human behavior and decision-making. Professor Manjunath highlighted how recommendation systems actively shape preferences through learning algorithms, comparing them to highly effective advertisements that people are more receptive to. The panelists discussed the concerning extent to which algorithms may be nudging human choices rather than simply responding to them.


The conversation addressed AI’s role in governance, with Antaraa Vasudev sharing examples of using AI to gather citizen feedback on policy-making in Maharashtra. However, there was disagreement about whether AI shifts power toward citizens or institutions, with some arguing that institutions have more resources to leverage AI effectively.


Regarding employment impacts, Kushe Bahl suggested AI will primarily reshape rather than replace jobs, emphasizing that sustainable value comes from AI doing things humans cannot do rather than simple replacement. The discussion concluded with hopes for AI enabling better collective intelligence, supporting small businesses, and improving access to public services while maintaining transparency and citizen trust.


Key points

Major Discussion Points:

AI’s Role in Coordination vs. Individual Intelligence: The panelists explored how AI can move beyond answering individual questions to supporting entire populations through coordination, sharing intelligence, and achieving collective outcomes – particularly in areas like disaster response, healthcare, and transportation.


Algorithm Influence on Human Behavior and Choice: Significant discussion centered on how recommendation systems and algorithms actively shape human preferences, beliefs, and behaviors rather than simply responding to them, with concerns about the extent to which our choices are genuinely our own versus algorithmically nudged.


AI in Governance and Civic Engagement: The conversation examined whether AI empowers citizens or institutions more, with examples of AI being used to gather citizen feedback on policy (like in Maharashtra) and debates about transparency versus effectiveness in public AI systems.


Job Market Transformation and Economic Impact: Panelists discussed whether AI will replace, reshape, or polarize jobs, with emphasis on AI’s potential to create new value rather than just reduce costs, and opportunities for small businesses and self-employed individuals to benefit from AI tools.


Ethical Concerns and Regulatory Challenges: The discussion addressed the struggle between innovation and regulation, concerns about AI’s impact on young people, the need for transparency in AI systems, and the difficulty governments face in regulating rapidly evolving technology.


Overall Purpose:

The discussion aimed to explore how AI can be developed and deployed for collective societal benefit rather than just individual or corporate advantage. The panelists, introduced as “Avengers” with different expertise areas, examined the challenges and opportunities of creating AI systems that serve broader social good while addressing concerns about power distribution, ethical implications, and the need for responsible development.


Overall Tone:

The discussion maintained a thoughtful and cautiously optimistic tone throughout. While panelists acknowledged significant concerns about AI’s current trajectory – including algorithmic bias, job displacement, and concentration of power – they also expressed hope about AI’s potential for positive social impact. The tone was academic yet accessible, with panelists balancing realistic assessments of current challenges with aspirational visions for better AI implementation. The moderator’s superhero framing added a lighter touch while still addressing serious topics, and audience engagement remained positive and curious rather than fearful or confrontational.


Speakers

Speakers from the provided list:


Moderator (Janhavi) – Discussion moderator, described as embodying “Jarvis” for the panel


Professor Seth Bullock – Professor studying how societies hold together, coordination systems, and shared values; works at University of Bristol


Professor Nirav Ajmeri – Professor at University of Bristol focusing on multi-agent systems and socio-technical networks


Antaraa Vasudev – Works through her NGO, Civis, using AI to amplify citizen voices and reshape government-citizen power dynamics; worked with the Government of Maharashtra


Kushe Bahl – Leads McKinsey Digital and McKinsey Analytics practices in India, focused on execution, scale, and impact in the real economy


Professor Manjunath – Professor focused on recommendation systems, AI’s raw power and control mechanisms, intelligence at scale


Audience Member 1 – Asked about AI’s impact on management consultants


Audience Member 2 – Asked about AI’s impact on young minds, specifically regarding a high school cousin using ChatGPT


Audience Member 3 – Asked about AI bans and regulatory approaches


Audience Member 4 – Asked about AI’s impact on creative content and deceased artists


Audience Member 5 – Asked about AI in education and step-by-step learning processes


Speaker 3 – Responded to questions about AI regulation and bans


Additional speakers:


None identified beyond those in the provided speaker list.


Full session report

This panel discussion brought together experts from academia, government, and industry to explore how artificial intelligence can be developed and deployed for collective societal benefit rather than merely individual or corporate advantage. The moderator, Janhavi, framed the conversation using an Avengers metaphor, assigning each panelist a superhero identity to represent their approach to AI’s potential and challenges, setting a tone that was both accessible and serious about addressing fundamental questions of AI’s role in society.


The discussion began with the moderator asking the audience whether they believed technology was reserved for the elite, with a show of hands revealing mixed perspectives on technology’s accessibility and democratization potential.


AI as Collective Intelligence Rather Than Individual Tool

Professor Seth Bullock challenged the dominant narrative of AI as personal assistants, arguing that “coordination is intelligence” and advocating for AI systems designed to support entire populations simultaneously. Rather than the typical model of one person asking AI a question and receiving one answer, he envisioned AI helping coordinate populations affected by floods, managing shared medical conditions, or optimizing transportation for entire communities. This represents a paradigm shift from individual AI interactions to systemic, community-based applications that could only be achieved through partnerships between researchers, companies, non-profit organizations, and governments.


Professor Nirav Ajmeri expanded on this concept through the lens of multi-agent systems, where intelligence emerges from interactions between multiple entities. He emphasized that most societal problems are socio-technical in nature, involving both social entities (people and organizations) and technical tools working together. The key insight was that current systems often optimize for individual users, creating local maxima, while multi-agent approaches could achieve global optimization that maps to social welfare.
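The local-versus-global distinction Ajmeri describes can be made concrete with a toy matching problem. The sketch below is illustrative only, not drawn from the session: two riders each greedily grab their own nearest driver (local optima), versus a search over all assignments for the lowest total wait (a crude proxy for social welfare).

```python
# Illustrative toy example (not from the session): two riders, two drivers.
# cost[r][d] = pickup time if rider r is matched to driver d.
from itertools import permutations

cost = [[2, 3],   # rider 0: driver 0 is closest
        [3, 10]]  # rider 1: driver 0 is also closest

def greedy_individual(cost):
    """Each rider in turn grabs their own best remaining driver (local optima)."""
    taken, total = set(), 0
    for row in cost:
        d = min((d for d in range(len(row)) if d not in taken),
                key=lambda d: row[d])
        taken.add(d)
        total += row[d]
    return total

def global_optimum(cost):
    """Search all assignments for the lowest total wait (social welfare)."""
    n = len(cost)
    return min(sum(cost[r][p[r]] for r in range(n))
               for p in permutations(range(n)))

print(greedy_individual(cost))  # 12: rider 0 takes driver 0, rider 1 is stuck waiting 10
print(global_optimum(cost))     # 6: swapping the assignments halves the total wait
```

Optimizing for each rider separately doubles the total wait here; the globally optimal assignment makes one rider slightly worse off but everyone collectively much better off, which is exactly the welfare trade-off the panel is pointing at.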


The conversation revealed the complexity of scaling AI interactions. As Professor Bullock noted, when we move to artificial systems, the potential for cascading effects increases dramatically. A simple request could create waves of agentic interactions that consume significant resources and disadvantage other users. He also highlighted that future AI agents may not be distinguishable from humans in their interactions, adding another layer of complexity to these systems.


Algorithmic Influence on Human Behavior and Preferences

Professor Manjunath provided a sobering analysis of how recommendation systems function as learning agents that continuously experiment with users, showing them various options and observing reactions to optimize for utility functions determined by the organizations designing the algorithms. He emphasized that recommendation systems are fundamentally different from traditional advertising because they catch users in moments of high receptivity when they’re actively seeking something to do, making users “significantly more receptive” to algorithmic suggestions.


The mathematical models his team developed demonstrate that depending on the learning algorithm used, a person’s preferences can be “dramatically different” over time, regardless of where they started. This analysis prompted profound questions about human agency and authentic choice, with the moderator reflecting on how much of her personality might be shaped by algorithmic influence.
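The feedback loop described here, in which a recommender both learns from and reshapes preferences, can be caricatured in a few lines. This is a hypothetical toy model, not the team's actual mathematics: each exposure pulls the user's preference slightly toward the topic shown (a mere-exposure effect), so a greedy engagement-maximizing policy and a random policy leave the same initially indifferent user in very different places.

```python
import random

def simulate(policy, steps=2000, drift=0.01, seed=0):
    """Toy model: a preference over two topics; each exposure pulls the
    preference slightly toward the topic shown (mere-exposure drift)."""
    rng = random.Random(seed)
    pref = [0.5, 0.5]                              # user starts indifferent
    for _ in range(steps):
        shown = policy(pref, rng)
        pref[shown] += drift * (1 - pref[shown])   # drift toward what is shown
        pref[1 - shown] -= drift * pref[1 - shown]
    return pref

greedy = lambda pref, rng: 0 if pref[0] >= pref[1] else 1   # always show current favourite
uniform = lambda pref, rng: rng.randrange(2)                # show either topic at random

print(simulate(greedy))   # preferences collapse onto one topic
print(simulate(uniform))  # preferences stay roughly balanced
```

The point of the toy model matches the claim in the text: the final preference profile depends on the recommender's policy, not just on where the user started.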


The discussion revealed that algorithms are more likely to hide bias than to reduce it, since they are not adequately trained to address these issues and may actually increase bias over time.


AI in Governance: Democratization Versus Power Concentration

The panel revealed a fundamental disagreement about whether AI in governance shifts power towards citizens or institutions. Antaraa Vasudev presented an optimistic view based on her work with the Government of Maharashtra, where AI enabled the collection and analysis of citizen responses for long-term state planning. This project demonstrated AI’s potential to address information asymmetry, allowing citizens with limited knowledge of law and policy to engage meaningfully with government through voice notes, text messages, and even drawings.


However, Professor Manjunath offered a contrasting perspective, arguing that AI “absolutely” shifts power towards institutions because “they have the money to invest and discover what’s going on. There is no way citizens can beat that so easily.” This disagreement highlighted the tension between AI’s democratizing potential and the reality of resource asymmetries.


The debate extended to questions of transparency versus effectiveness in public AI systems. Antaraa firmly advocated that transparency matters more than effectiveness for AI in public systems, arguing it’s “the only way that we can actually design AI for public systems.”


Economic Impact and Distributed Value Creation

Kushe Bahl argued that AI will primarily reshape rather than replace jobs, with the most sustainable value coming from AI performing tasks that humans cannot do rather than simple substitution. He noted that current enterprise adoption of AI hasn’t really happened yet, and provided examples showing that while AI might save costs in operations like call centers, these savings often don’t sustain due to quality issues.


The real value, according to Bahl, comes from AI enabling genuinely personalized customer engagement at scale. He reframed innovation success metrics from creating billion-dollar companies to distributed economic impact, calculating that if 150 million self-employed people in India could earn 600 rupees more through AI, this would represent the economic value of a unicorn company. This vision of “50 innovations that puts 600 rupees more in the pockets of 150 million people” rather than “50 companies worth a billion dollars” represents a fundamental shift in how we measure AI’s societal value.
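As a back-of-the-envelope check, Bahl's "distributed unicorn" arithmetic does land in unicorn territory. The exchange rate below is an assumption for illustration, not a figure from the session:

```python
# Back-of-the-envelope check of the "distributed unicorn" claim.
people = 150_000_000        # self-employed people in India (figure from the talk)
extra_inr = 600             # extra rupees each (figure from the talk)
inr_per_usd = 85            # assumed exchange rate, not from the session

total_inr = people * extra_inr
total_usd = total_inr / inr_per_usd
print(f"{total_inr:,} INR ≈ ${total_usd / 1e9:.1f}B")  # roughly a $1B "unicorn"
```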


Regulatory Challenges and Government Capability

The conversation around AI regulation revealed complex tensions between innovation, protection, and governance capability. Professor Manjunath drew on historical examples to argue that governments should enable rather than micromanage AI development, citing failures like Japan’s fifth-generation computing project in the 1980s and the decline of India’s CDOT after government interference. His fundamental argument was that “generalists in government cannot handle the space at which technology can move” because while government officials understand society and administration, they “don’t understand technology” that is “moving too damn fast.”


However, Professor Bullock offered a more optimistic view of recent regulatory developments, particularly early attempts to restrict social media access for youth. He argued that these regulatory steps, even if imperfect, signal that governments can resist big tech influence and establish precedents for future AI governance.


Impact on Youth and Human Relationships

The discussion addressed concerns about AI’s impact on young people and human relationships. An audience member expressed worry about their younger cousin sharing everything with ChatGPT—”relationship issues, family issues”—rather than with family members. Kushe Bahl’s response was particularly thought-provoking: rather than blaming the technology, he suggested this preference for “a relatively soulless communication device” over family members reflects “what a distance we have created with each other” and should serve as “a good reminder to us as individuals around the task that we have to do to rebuild bonds with each other.”


Another audience member noted concerns about AI knowing more about their cousin than they did, highlighting how AI relationships might be displacing human connections.


Educational Disruption and Learning Challenges

Professor Manjunath noted that student homework submissions are now “perfect” with “spectacularly written” essays and “beautiful” presentations, but questioned whether students actually understand what they produce. He provided an example of a student who, instead of using provided data for an assignment, created their own data, demonstrating both the creative potential and the challenges of AI in education.


An audience member reinforced concerns that instant feedback from AI tools means “students do not go through the whole process, the step by step process of foundation.” Professor Manjunath acknowledged that “every university is struggling with that question” of how to design AI educational tools that promote structured learning rather than shortcuts.


The discussion also addressed intellectual property concerns, with Professor Bullock noting that AI systems have been trained on vast amounts of data without consent, including work by deceased artists, creating situations where “the cat is already out of the bag” and it’s unclear “how do we put that back in the box.”


Future Visions and Rapid-Fire Insights

In a rapid-fire closing segment, panelists shared their aspirations for AI’s positive impact over the next five years:


Professor Bullock envisioned AI enabling “a greater sense that we are properly connected with each other and learning from each other,” breaking down barriers of language, expertise, and distance to transform democratic participation.


Antaraa emphasized the potential for AI to create “greater access and connectivity to public institutions,” enabling easier access to entitlements and benefits through improved public service delivery.


Professor Nirav highlighted the importance of building AI systems that can aggregate different people’s preferences and explain collective decisions in ways that build trust and buy-in.


Professor Manjunath expressed hope for AI that helps people “understand each other better” and bridges communication gaps.


Kushe envisioned AI enabling “more people to participate in the economic growth of the country” through distributed value creation.


Conclusion

The discussion revealed both the tremendous potential and significant challenges of developing AI for collective good. While there was broad consensus on the need for AI to serve collective rather than individual interests, fundamental disagreements remained about power dynamics, regulatory approaches, and the balance between innovation and protection. The panel’s strength lay in its honest acknowledgment of these tensions while maintaining focus on actionable approaches for ensuring AI development serves broader social benefit.


The conversation ultimately demonstrated that achieving AI for collective good requires not just technical innovation but fundamental reconsideration of how we measure success, govern technology, maintain human relationships, and ensure that the benefits of AI are distributed equitably across society rather than concentrated among those with the resources to develop and deploy these powerful systems.


Session transcript

<strong>Moderator:</strong> sci-fi movies that we grew up watching. And what it primarily reminds me of, in specific terms, is the Avengers. The Avengers are superheroes, and they're trying to, you know, save the world and decide how one can do that, and they all have very different strengths. So I was wondering: if all our panelists were superheroes, who would they be? Introducing our panelists, I have our first Avenger, Captain America: principled, steady under pressure, obsessed with doing the right thing even when it's unpopular. Professor Seth is exactly that, and it reminds me of the lens that he brings. He studies how societies hold together, how coordination succeeds or fails, and why systems need shared values as much as intelligence. Next we have Spider-Man. Spider-Man's strength isn't brute force; it's his ability to navigate through complex webs, adapt quickly, and see connections that others miss. Professor Nirav thinks the same way. At the University of Bristol, his work focuses on multi-agent systems, because societies, like Spider-Man, are all about networks. Antaraa Vasudev reminds me of Captain Marvel: operating at scale, moving across institutions, pushing boundaries. Through her NGO, Civis, she uses AI to amplify citizen voices and reshape how power flows between governments and people. And of course we have Iron Man, who is obsessed with execution, iteration, and making ideas work in the real world. Mr. Bahl is our Iron Man, focused on execution, scale, and impact in the real economy. He leads the McKinsey Digital and McKinsey Analytics practices in India. Last but not least, no team is complete without Bruce Banner. Deeply aware of the challenges that we face and of AI's raw power, and focused on how to control it before it controls us, Professor Manjunath's work reminds us that intelligence at scale can cause damage if we don't fully understand its consequences. 
My name is Janhavi, and today I'm embodying Jarvis, except that instead of being the one answering the questions, I'm the voice asking them. Every Avengers story has a Thanos. The real question is whether AI becomes our ally or the great snap that we didn't see coming. So when we talk about AI for collective good, we're not just talking about smarter apps; we're talking about systems that influence how people live, work, and participate in society. Before we start, I would request all my panelists to just stand up for a quick photo op. So, quick show of hands from the audience: how many of you feel that technology today is only with those who have power or resources or information, that technology has been reserved for the elite few? Do we have a show of hands in the house by any chance? Okay, clearly we don't really have an opinion as such over here. But moving on. Professor Seth, when we look at society, you know, governments, markets, or online platforms, we often assume that problems exist because we don't have enough intelligence or data. Your work suggests something a little bit deeper: that perhaps failures come from how decisions interact at scale. From a systems perspective, do you think our biggest societal problems are intelligence problems or coordination problems? <strong>Professor Seth Bullock:</strong> Thanks a lot. So it's great to be here in India. I think this topic is extremely relevant to both the UK, where I'm working, and India. And I think the answer is that coordination is intelligence in this situation that we're interested in. So I guess we're used to situations now where we interact with an AI as an individual. One person asks the AI a question and gets one answer. But really there's the potential for us to develop AI systems that are designed to support a whole population at once. So a population of people that are affected by a flood, a population of people that are all coping with the same 
disease or medical condition, a population of people that are all trying to get taxis to and from a summit. So instead of AI answering individual questions, AI can help coordinate those people, share intelligence, share their knowledge, and achieve better outcomes. And I think that's quite a different way of framing AI than many of the systems that we're hearing about, and it requires different technologies and different ways of delivering that to people, different ways of engaging with populations. So I think that's something that can only really be achieved by partnerships between researchers and companies and not-for-profit organizations and governments, and it requires, probably, interventions in the way that we promote AI, rather than letting the sort of path of least resistance develop AI commercial tools. I think there are opportunities to really engage with the idea of making AI for populations. <strong>Moderator:</strong> Wonderful. Professor Nirav, you're also from the University of Bristol, and your work focuses on multi-agent systems, where basically intelligence emerges from all these entities interacting with one another. What kind of social problems are best suited for these multi-agent approaches? <strong>Professor Nirav Ajmeri:</strong> Thanks, Janhavi. Good question. And I think partly Seth already answered what multi-agents could do. So all the problems that we're thinking about over here are, if you look at how we understand those problems, socio-technical in nature. There are social entities, including people and organizations, which interact. All of us also use some technical tools. These could be intelligent agents; these could be applications, software that we use. And all of these, combined together, help us. So all problems, or all domains, are socio-technical in nature, and multi-agent systems inherently can encapsulate socio-technical systems. So that is how I would look at it. 
If you're talking about, say, ride sharing, for instance, or hailing a ride, the current system could be optimizing only for me, right? And then what we could end up with is local maxima. So if we are optimizing for each one of us, we are finding a local optimum for each of us, but we may not be finding a global optimum. And the global optimum would map to social welfare. What does social welfare mean? Does it mean just maximizing experience for everybody? Or do we mean a satisfactory experience? So I think any problem that we think about, say epidemic or pandemic prevention, making sure that resources are allocated properly, all of that would be multi-agent in nature. <strong>Moderator:</strong> Interesting. Professor Seth, do you have anything that you'd like to add on to that? <strong>Professor Seth Bullock:</strong> So, yeah, I think we've heard a little bit from some AI leaders about a next wave of AI that will be agentic, where we won't be just interacting with ChatGPT as a monolith. We will be interacting with an agent that has purposive aims and is helping us to achieve tasks. And it might do that by communicating with other agents. Whenever we interact with AI, we would in fact be interacting with a population of AIs that are sending each other information, that are tasking each other with different jobs to do. And actually, it might not be clear whether one of those agents is artificial or a person. And so, if we enter into that sort of world, I think we have to really understand whether those agents are interacting with each other in a way that is likely to advantage the community of users, because the amount of resources that will be consumed by these populations of agents, and the potential for them to interact in ways that have unforeseen consequences for other people, are going to ramify. 
When we do that manually, really we can only hold so many interactions with other people at once, and so we're limited in the scale. You know, one request does not create this kind of cascade of other requests in the system. But as we move to artificial systems, that scaling will rapidly increase, and potentially one trivial request by me, asking a computer to make a picture of a dog riding a skateboard, could create a whole wave of different agentic interactions that consume loads of resource and also, depending on what I've asked for, disadvantage other people. So embedding some kind of social responsibility into those agents, some appreciation for how their behavior impacts other agents in the system, I think is going to be imperative. Otherwise, we end up with systems that create conflict and contestation for resources. <strong>Moderator:</strong> Interesting. Whenever I'm on Instagram or Facebook, and let's say I'm talking to my friends and I'm really thinking about buying this Dyson or a particular product, it's always weird to me how the next time I open the app, it's almost like the app has heard me, and I start seeing the ads for those exact things, even if I've not searched for them. I've just talked about them to someone. Has anybody here also experienced the same thing, a show of hands quickly, where you feel that maybe the choices that we make, are they really our choices, or are we being nudged by an algorithm somewhere? So, Professor Manjunath, your work focuses so much on recommendation systems, and we often hear that these algorithms are just tools. Perhaps your research suggests that they actively shape what people see, buy, and believe. How much of human behavior today is genuinely chosen by us, and how much is subtly nudged by these algorithms? <strong>Professor Manjunath:</strong> Yeah, recommendation systems and the way they shape many of our feelings and our attitudes and our habits have essentially been a significant concern for me for a while. 
One of the things that you have to think about when you look at recommendation systems is that they're essentially learning agents. So they want to learn your preferences, your likes, your dislikes, and so on. And when they're trying to do that learning, they do things. They're trying to give you options, different kinds of options, and then see how you react. So that is the first way in which the interaction between you and the learning system happens: they are showing you a variety of things and observing the way you react. And then your reaction is usually captured in some kind of utility function, something that the algorithm believes is positive for whoever is designing that algorithm. Now, what exactly that utility function is essentially determines what gets recommended to you in the future and what the system learns about you. Now, there is no such thing as the right utility function, and every organization will figure out what they want for themselves. We have actually done several mathematical models on this and shown that, depending on the kind of learning algorithm that I have, and I am assuming a benign recommendation system here, where I start off with a set of preferences, by the end of the day, or over a certain time horizon, my preferences can be dramatically different. So there is a certain nudge that is steadily pushed by these algorithms, and in which direction the nudge is pushed depends on the kind of algorithms they use and the kind of what we call utility functions that they use. So what exactly are they trying to optimize for themselves? 
And if you look at various analyses of many of these, especially Facebook's algorithms, there is a very famous book that came out recently by somebody called Sarah Wynn-Williams, who was an insider. You can see the impact that had on some sections of society elsewhere when the whole recommendation system went berserk. So there is definitely a huge impact on the population's preferences from recommendation systems. And if you want a quick way to understand that: recommendation systems essentially are advertisements, and advertisements definitely shape our preferences. If you see something more often, you will start thinking about it, and so on. The difference, at least in my opinion, between the advertisements that you see on the street and the advertisement corresponding to a recommendation engine is that you are significantly more receptive. You are looking to do something. And when you are trying to look for something to do, if the recommendation pushes you in a certain direction, you are naturally going to go there. So the impact of recommendation systems on the population's preferences, in my opinion, is spectacularly large. <strong>Moderator:</strong> Wow. That's quite a lot to actually digest and hear. I really wonder how much my personality is my own at this point. Antaraa, from your work in civic engagement: when AI enters governance, is it primarily to help citizens be heard, or is it helping governments manage complexity? And where do citizens struggle the most when technology becomes the interface between them and the government? <strong>Antaraa Vasudev:</strong> Thank you for that question. Just want to make sure that everyone can hear me. Thank you. Some problems, like on-stage mics, AI cannot solve. Thank you for that lovely question, and it's lovely being here with all of you today. 
Janhavi, to your point, I think AI currently is being used in both use cases. It's allowing us to engage with citizens who perhaps have little or limited knowledge about law and policy, to help them clarify doubts, to let them air out their grievances, and to let them actually understand the frameworks of policy and law that govern their lives. But in addition to that, it is also being used in a very large way for optimization. In a country of India's size and diversity, I think there is perhaps no other way to tackle circumstances at the scale that governance does. So, better than that, is to actually build strong and robust frameworks for how governance can utilize AI, put out in a manner which is transparent, accessible, and one that actually has a certain equity built in, which is really what the panel is also discussing today. And once you have that, to know that these optimization solutions can perhaps be built by AI rather than being citizen-led. So at Civis, we've actually been working on gathering a lot more public feedback on draft laws and policies using AI. And again, we see optimization at both ends, but we are very, very mindful of the fact that the frameworks that govern that level of optimization are what needs to be designed before perhaps we even race to the next model. <strong>Moderator:</strong> Got it. Can you share some examples of the kind of laws that have been impacted, or the kind of work that you've done? Have you worked with different state governments, where citizens of that particular state have been able to engage with the government about a certain law or practice? Thank you. <strong>Antaraa Vasudev:</strong> Absolutely. So I'll share one example from recent work with the Government of Maharashtra that Civis led. The Government of Maharashtra actually undertook a very ambitious mission of trying to understand how the next 22 years of the state can be governed by citizens' voice. 
Now, this is something which is honestly quite remarkable on their part. What Civis was able to do is build out a very easy-to-use chatbot, wherein you could send in a voice note, you could send in text messages, or you could even send in drawings; we had people send in letters that they had personally written to the Chief Minister, and other things. Civis aggregated all of that feedback. That was almost 3.8 lakh citizen responses from 37 districts across Maharashtra, which was aggregated, sorted through, and then shared with the government. The Viksit Maharashtra report, as it’s called, is now publicly available; the government of Maharashtra has put it out on their own website as well. But in addition to that, what’s been really interesting is that they have said that every law that comes out in the state in the coming years has to, in some way, factor in what citizens are saying about the problem area or the district for which the law is being made. And you can only do that if you’re able to actually engage at scale. I think that’s the beauty of what that entire project showed. <strong>Moderator:</strong> Absolutely. Professor, how do you feel about the government in terms of what approach it should take when it comes to AI and technology? <strong>Professor Manjunath:</strong> Yeah. One of the fears that I have when the government gets involved in technology development is that they want to start controlling the direction. They want to dictate what is to be done at a very micromanaging kind of level. I recently had an op-ed, on Tuesday I think, in the Financial Express, that a colleague and I wrote, where we looked at history, at successful and spectacularly unsuccessful involvements of the government when it wanted to direct technology. So I’ll just give you two quick examples.
So in India, about 40 years ago, there was something called C-DOT. It developed some spectacular technology when it was left alone. Then the government started to direct it and micromanage the flow of technology. Many of you probably don’t even know C-DOT; they don’t even come to the IIT Bombay campus, for example, for recruitment. That’s just one example. If you look at Japan, to give you another spectacularly unsuccessful story, many of you are too young to know about something called the Fifth Generation Computer Systems project that they wanted to launch. The AI boom that we see today was originally planned to be launched in Japan in the 1980s. There was a huge project that the government wanted to micromanage, developing native hardware for AI, and everybody thought they would be successful. It was a spectacular failure. The failure essentially stemmed from the fact that the government was directing everything. Governments are generalists. People who run governments are generalists. They are brilliant people. They know society. They understand administration. But they don’t understand technology, especially a technology that is moving too damn fast and has a very large surface area. They cannot control that. So it is best that they just enable, and let others, the people on the ground, people with a track record and people who want to take risks, manage it. They should be enablers. They should also be monitors, nudging it in a certain direction and making sure bad things don’t happen. But that’s a very hard task. So the biggest role that the government should have is to just enable and step away. Just to give you one positive example, NPCI in India is a spectacular example of where the government started something and let the private sector and the technologists handle it. In the US, many of you may be familiar with the internet. It was exactly that: just a vision that somebody had, saying let’s build this,
and the technology was built. That’s the way I would think the government should handle it, but we’ll have to see how that goes. <strong>Moderator:</strong> So just a quick question for the audience. You guys can shout the answers out loud. What emotions come to your mind when we think about AI? Are we feeling excitement? Are we feeling anxiety? Are we feeling FOMO? What are we feeling, guys? Curiosity. Dangerous, somebody said. What else? Definitely opportunity. Opportunity. The man over there? Confusion. Confusion. Anything else? Responsibility. Responsibility, fantastic. Great. So Mr. Bahl, this question is for you. There’s a lot of anxiety, and a little bit of excitement as well, about AI perhaps replacing jobs, especially in India’s tech and services sector. From your experience working with different companies, where is AI genuinely replacing humans, and where is it actually creating new forms of value and roles? <strong>Kushe Bahl:</strong> Yeah, that’s a great question. Thank you. Let me try to give you the very brief answer, because I could talk about this for a long time. There is a lot of focus on AI being used to replace humans in particular operations. So, you know, when you have an AI taking a call center call, that’s the simplest example of that. And the math, the way it works, is that if you’re spending 100 rupees on something, you can save roughly 40% of that by replacing it with AI, with the current economics of the way it works. And obviously, if you’re in a high-cost geography, you can save more, but even in a country like India, you can save that much. What we have found, though, is that in most of the cases where you do this simple replacement of a human with AI, the cost reduction doesn’t really sustain.
There’s a famous example of Klarna in Europe, where they brought back a lot of the call center costs because they had to bring back some of the senior customer support people; a lot of the conversations were not going well and they were losing customer satisfaction. The same thing with IT: you can replace a lot of developers with this, but then people will come back with more projects and there’ll be more things to be done. The real value unlock, which is sustaining, is actually when you get AI to do something which humans can’t do, or are not able to do because it’s so time-consuming and so difficult. For instance, a genuinely personalized customer engagement engine, using the kind of recommendation system that he was talking about, which actually engages in a personalized way with every customer that I have as a company, or every entity that any organization is dealing with. That genuinely has value. It creates a huge value unlock. For instance, if I spend 2-3% of my revenue on, say, customer support, and even if I save 40% of that, I’m saving like 0.8% or 1% of revenue. But if I can generate even just 10% more revenue from existing customers with hardly any marketing cost, and I make 30-40% margin on that, I’m getting 3-4% more to the bottom line. So that is huge; it’s almost 5x of what you can save. So the value unlock is very large, and it’s sustainable, because you’re really getting AI to do something no human being is going to do: sit and figure out, for millions of customers, exactly what kind of personal message to send. The amount of experimentation you have to do, and the kind of connections you have to draw between individuals and similarities and so on, which the recommendation engines are based on, are impossible to do humanly. And that’s where the biggest value unlocks are, at least that I’m seeing, and those are sustainable, and they’re actually even applicable in high-cost geographies.
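As an editorial aside, the comparison above can be checked as a quick back-of-the-envelope calculation. The 2.5% support-cost and 35% margin figures below are illustrative midpoints of the ranges mentioned on the panel, not real company data:

```python
# Illustrative sketch of the cost-saving vs. revenue-uplift comparison.
# Figures are midpoints of the ranges mentioned on the panel, not real data.
revenue = 100.0                   # normalize company revenue to 100 units
support_cost = 0.025 * revenue    # customer support at ~2.5% of revenue
saving = 0.40 * support_cost      # 40% of support cost saved with AI -> 1.0 unit
uplift = 0.10 * revenue * 0.35    # 10% extra revenue at ~35% margin -> 3.5 units
print(f"saving: {saving:.1f}, uplift: {uplift:.1f}, ratio: {uplift / saving:.1f}x")
```

Under these assumptions the personalization path adds roughly 3.5 units to the bottom line against 1.0 unit of cost saved, consistent with the "almost 5x" claim at the wider ends of the quoted ranges.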
It’s just that unfortunately, a lot of the initial focus of the innovation has been on the easy stuff, right? Have an AI agent replace a human agent. But that’s not the real power of what AI can bring. So hopefully we’ll see a lot more of that type of innovation going forward as well. <strong>Moderator:</strong> Right. I think I see a lot of students here today. What kind of backgrounds do you all come from? Hands up if you’re from STEM at all? STEM backgrounds? Okay. Anybody from business, humanities, arts? Okay. So I read this LinkedIn post, and I’m not sure whether it’s a great post or not. Apparently it’s going to be a little tough for STEM students to, you know, get into this world of AI, because they could be replaced a lot more easily. What kind of skills, businesses, or degrees should one come from to sustain oneself in this world of AI, do you think? What should the next five years look like? <strong>Kushe Bahl:</strong> Yeah, I think there is some near-term potential impact on jobs, particularly on entry-level coding jobs and so on. But honestly, firstly, nobody knows exactly how the math is going to work. Between the new work that people do for AI enablement versus the old work that may get more efficient because of AI-enabled coding and so on, will we next see an increase or a decrease in employment? Nobody actually knows. There are many, many forecasts done by economists much more qualified than me. But what one can see is certainly that enterprise adoption of AI has not really happened yet. So right now, the impact of all of this has not really happened. You’re seeing some initial hit, maybe: okay, this year I have promised I’m going to use AI and reduce my budget by a certain amount, so I’ll stop hiring. That’s the kind of, I would say, almost knee-jerk impact that you’re seeing right now.
What eventually plays out will be a mix of: okay, I will do the work more efficiently and use a lot more automation, but now I have a lot more things to do as well. So I would say that students in general, actually forget just STEM, students in general need to be focusing a lot on how they can use AI to do the best possible thing that they can do in their field, in every possible field. So whether I’m studying marketing, or a science degree, or any form of the humanities, or journalism, whoever I am, there are so many things that I can actually be doing with AI, to do things which were not humanly possible earlier. And that’s really what students should be equipping themselves with. And then, you know, potentially innovating and also creating things around that, but also personally equipping themselves to actually leverage AI. And I think there are lots of examples of how that can play out and will serve people really well. <strong>Moderator:</strong> Absolutely. We are now going to get into a quick rapid fire round, and then I want to open up the floor for audience questions. So the only rule here is I want short answers only. No explanations. We only have 10 seconds to answer. So I am going to start off by putting Antaraa on the spot. Does AI in governance shift power towards citizens or towards institutions today? <strong>Antaraa Vasudev:</strong> I want to say citizens, because it allows for a lot more information asymmetry to be addressed, which is where a lot of the power gaps come up today. <strong>Moderator:</strong> Professor Manjunath, are algorithms today more likely to reduce bias or hide bias better? <strong>Professor Manjunath:</strong> Hide bias. No, the options don’t look right to me. <strong>Moderator:</strong> What would you put as the options then? <strong>Professor Manjunath:</strong> The bias will start increasing. I think they are not trained. I don’t expect training to get better.
I think it will be better in the immediate future, maybe much later. But I also want to disagree with what Antaraa said. <strong>Moderator:</strong> I’ll come back to you for that one. Professor Seth, what worries you more: AI being used with bad intent, or AI being used widely without anyone fully understanding its consequences? <strong>Professor Seth Bullock:</strong> Well, they’re both terrible, aren’t they? I think people will always use technologies with bad intent, and it can only really be addressed if a large number of people understand that technology and can then resist it. So I think the second is more important. Uplifting the public’s understanding of AI and proper engagement with AI will protect us against malign uses of AI, because we will be able to spot them. <strong>Moderator:</strong> Got it. Professor Nirav, what’s harder to design: ethical individuals or ethical systems? <strong>Professor Nirav Ajmeri:</strong> I think that becomes tricky. What do we mean by ethical, right? If you’re combining ethical individuals, and we say individuals combined together are a system, then ethical individuals. <strong>Moderator:</strong> Mr. Bahl, in India, will AI mostly replace jobs, reshape jobs, or polarize jobs? <strong>Kushe Bahl:</strong> Reshape. <strong>Moderator:</strong> That’s a very quick answer. You win the rapid fire round. Right. Professor Seth, where does AI struggle more today: with people or with systems? <strong>Professor Seth Bullock:</strong> I mean, I think it struggles with people, but we don’t notice, because it resembles natural language. When I say AI, I’m talking about something like ChatGPT. So I think there’s a disguised problem with people there, because those AIs don’t really mean what they say, they don’t really understand what they say, but it seems very strongly that they do. So I think that’s the problem.
But what’s coming is AI embedded in all of our systems, and then that will create its own set of problems as well. <strong>Moderator:</strong> Mr. Bahl, who benefits more from AI today: companies or employees? <strong>Kushe Bahl:</strong> I would say that right now, no one is benefiting from AI. But if I were to bet, it will be companies who will benefit first, and then employees will benefit. And the whole idea of having sessions like this is that we can get the employees to learn what we talked about, right? Students equipping themselves right from college. Absolutely. <strong>Moderator:</strong> Antaraa, for AI used in public systems, what matters more: transparency or effectiveness? <strong>Antaraa Vasudev:</strong> Transparency, off the bat. It’s the only way that we can actually design AI for public systems. It has to be at the front and center of all of our efforts. <strong>Moderator:</strong> Got it. Before we get into the last question for the entire panel, I do want to get your answer to Antaraa’s statement, if that’s fine. The question that I had asked.
Does AI in governance shift power to citizens or to institutions? <strong>Professor Manjunath:</strong> Absolutely to the institutions. They have the money to invest and discover what’s going on. There is no way citizens can beat that so easily. It requires a different... whatever. I’m not allowed to say anything. <strong>Moderator:</strong> My last question for all the panelists before we open the floor for audience questions. If we get AI right, what is one everyday improvement people in this room would actually feel within the next five years? <strong>Professor Seth Bullock:</strong> So there’s a thread that runs through this, or there’s supposed to be, and I think one thing that AI could give us is a greater sense that we are properly connected with each other and learning from each other. The possibility for AI to break down barriers between people, barriers of language and expertise and distance, I think is huge. The kind of traditional collective intelligence interaction that we’re used to, where we put an X in a box when we vote for someone, is very, very simple, right? We can’t each write an essay, like the users of Antaraa’s system, and send an essay to the government about what we want, because there are so many people; we can’t read all of those essays. But AI can enable that kind of rich interaction. It’s an example of one of the things that Kushe is talking about: AI delivering something that is impossible for humans to do, not just replacing something that humans are already doing. So a future in which we all feel like we have a voice, and AI is helping us mediate between each other, I think is something that is technically possible. There’s a whole bunch of political and social barriers that could prevent that from happening. But I think five years is a timeline during which we could see the start of those sorts of systems. <strong>Kushe Bahl:</strong> I can talk about what I’d like to see if we get AI right.
We talk a lot about institutions, we talk about companies, we talk about individuals. But not enough talk happens specifically about small businesses. India is a country of self-employed people and small enterprise. I think there are about 150 million self-employed people. If each of those people could somehow earn 600 rupees more because of AI, and I’ll talk about how, that’s a unicorn. So 600 rupees more for each of these 150 million people: I mean, there are a lot of large numbers in India, but it’s true, right? It’s a unicorn. So when we think of the next 50 unicorns, we may not think of 50 companies each worth a billion dollars; we may think of 50 innovations that put 600 rupees more in the pockets of 150 million people. And how does one do that? If you look at all the important things all of us use today, ride hailing, e-commerce, restaurant ordering, food ordering, all of these were created by an institution: they make an app, and then they spend money on marketing and so on. Today, you have AI systems that are incredibly low cost. Fifty cab drivers can get organized; an AI agent can do the scheduling and so on. You have a WhatsApp chat with them and you can just find the driver, right? There’s no reason why we can’t have innovation like this. Very low cost. The cost of the tokens can be funded within that ride; that’s all there is to running it. It’s an autonomous system which just runs off publicly available infrastructure. I think that, to me, is the real unlock that we can see. And those same systems can then serve anyone in the world. So you can do this for taxi drivers. You can do this for lawyers. And those lawyers can then serve anyone anywhere in the world. So I think that’s the real, real unlock that we are waiting for. These systems are very low cost to build. They can be built by anybody. They can be self-built by people.
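As an editorial aside, the "unicorn" claim holds up as simple arithmetic. The rupee-dollar rate below is an assumption for illustration, not a figure from the panel:

```python
# Back-of-the-envelope check: Rs 600 extra for 150 million self-employed people.
self_employed = 150_000_000     # ~150 million self-employed people, as quoted
extra_income_inr = 600          # Rs 600 more per person, as in the example
inr_per_usd = 85                # assumed exchange rate, not from the panel

total_inr = self_employed * extra_income_inr    # 90 billion rupees in aggregate
total_usd_bn = total_inr / inr_per_usd / 1e9    # just over a billion dollars
print(f"Rs {total_inr:,} in aggregate, about ${total_usd_bn:.2f}B of value")
```

At that assumed rate, ₹90 billion comes to a little over $1 billion, i.e. unicorn-scale value distributed across 150 million pockets.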
And it just takes a group of a few of these self-employed people to get together, and then, you know, suddenly this can go viral. So I would love to see that type of innovation coming, rather than necessarily, you know, the stuff that we know the companies will do, or the things that we’ll all play around with on our LLMs ourselves. <strong>Moderator:</strong> Great. Antaraa? <strong>Antaraa Vasudev:</strong> Thank you. Building on what Seth and Mr. Bahl just said, there are two things that I see happening. One is the disaggregation of systems and a lot of decentralized control mechanisms, right? When that happens, you have very fragmented channels to actually engage with institutions, to Seth’s point about building new ways of collective intelligence. What I want to see happening for all of us in the room is greater access and connectivity to public institutions, which actually fuels us to get easier access to the entitlements and benefits that the state is supposed to provide to us. If AI can get that right, if we can solve for that, I think there is a long and big argument to be made about that being the sort of rising tide that lifts all boats. <strong>Professor Nirav Ajmeri:</strong> Building on what people have been saying, and lastly on Antaraa’s point about collectives: we can build systems which work for individuals, but each individual has different preferences. How do we take into account different people’s preferences? How do we aggregate people’s preferences and then come up with a collective decision? If we are coming up with a collective decision, how does that decision affect various other people? How do we explain that decision to other people, that, hey, we have taken into account your preferences in this particular way?
So we need to get that part of AI right, to make sure that people have buy-in, that people trust the systems we are designing. That is what I would want to see, and I think we are moving forward with that. We are thinking about fairness, we are thinking about transparency, we are thinking about accountability, and so on and so forth. <strong>Professor Manjunath:</strong> Yeah, I can probably say what I already see. The homework that my students submit is perfect. The essays are spectacularly written. The presentations are beautiful. The only hope that I have is that they actually understand what they say. If that happens, I will be very happy. I think the output is perfect; the understanding behind that output, I hope, will get better and better. That’s my wish. <strong>Moderator:</strong> I’m going to open up the floor for audience questions. <strong>Audience Member 1:</strong> My question is, sir, I want to understand what kind of impact AI will be having on management consultants and the business. <strong>Kushe Bahl:</strong> I have no idea. I have no idea, really. It’s very hard to say; every industry is going to evolve. Obviously, management consultants, like everybody else, are using AI for every possible thing that they can do with it. So they’re also trying to become more efficient, more productive with it. We don’t know what that means in terms of reshaping of the business. If you look at past tech innovations, which have also had a very big impact on productivity in many sectors, it’s not that entire sectors have disappeared, but things have got reshaped significantly. That has happened a lot. So take the job that consultants do: today when we do research, you don’t wait for one week for somebody to go and find things from everywhere; it comes in a few minutes. Unfortunately, I have also seen a lot of the output, like Professor Manjunath said.
I find two issues right now with the current versions of AI. When it writes, it has no soul. It’s correct, but it has no soul. And when it prepares a presentation or a piece of communication, it’s not inspiring. It is correct, but it’s not inspiring. So the consultants will spend more time on actually communicating in a way that’s inspiring, while the basic desk work will be done for you. You spend time doing more, I would say, human tasks. And that’s going to happen in a lot of other service jobs too, right? You’re going to spend time doing what humans are truly supposed to do and are really good at, which the AI models are not able to do. <strong>Audience Member 2:</strong> Okay, thanks. So my question is for everyone. I have a younger cousin who is in high school, and her entire life is on ChatGPT at this point. She shares everything, relationship issues, family issues, and it knows more about her than I do. And I kind of worry when I see the younger generation getting on these AI platforms. So what is your take on this? What is the impact of this technology on young minds? <strong>Professor Seth Bullock:</strong> So.
I share your concern. I have slightly older kids. I think we have to trust that we’ve been through these technological shifts before. My parents, when they looked at me watching television, had similar worries; they told me that my eyes would become square because I watched too much television. Actually, my generation became much more sophisticated consumers of television and much more savvy about TV ads than my parents’ generation. So I think we have to listen to our children about the way that they’re using these technologies. They’re natives in this new world. I’m calibrated for a world where AI doesn’t work, where AI is not rolled out across the whole world, so I’m the wrong person, really, to ask about how AI is going to change people. We should ask young people how they’re using it, and engage with them, before they start to use their AI in secret, in ways that we don’t understand. <strong>Kushe Bahl:</strong> I have a funny answer and a short answer. I think the real danger actually is not with the ChatGPTs of the world, but with the earlier addictive systems like the Instagrams of the world, right? Because they are genuinely playing on our brain’s dopamine circuits and are genuinely addictive, and can therefore be harmful. With ChatGPT, the only thing I would say is that it makes one actually question where we are as individuals, as parents, as family, that our children prefer to communicate with a relatively soulless communication device which answers everything like an American therapist textbook would, right? That they prefer to talk to that than to us. It shows what a distance we have created with each other, right? And that may be a good reminder to us as individuals of the task that we have, to rebuild bonds with each other.
<strong>Antaraa Vasudev:</strong> On a very similar note, actually, to what Kushe just said: there have been studies from Youth Ki Awaaz and a number of other global youth-based organizations which have been looking at why exactly we turn to AI. And the phraseology there is very interesting, because it indicates that turning to AI is something that you can also turn away from. I think the questions really come up, exactly as was just mentioned, around understanding what kinds of tactile family bonds, what kinds of lived-experience-based interactions we can keep having with the younger generation, to show that AI is a part of their life, but it’s not the only part of their life. And I think that’s my hypothesis on where we’re headed there. <strong>Audience Member 3:</strong> I have a quick follow-up, and you can connect it with the previous question also. Many countries right now are trying to ban the new AI. Clearly there is evidence that harm is coming; you mentioned Instagram, among others. AI is an amplifier. So unless we design something, whether it’s regulation or guardrails or whatever, what is our hope, and what is the hope for a society, not to get more amplified harm than what they have already experienced, especially for the younger generation? Shall we start with you, sir? <strong>Speaker 3:</strong> Well, basically what I wanted to say was that Spain and Australia are two examples of countries where severe restrictions have been put on social media companies, at least with regard to access for children. And that’s an interesting experiment. One has to see what will happen, because it’s not an easy thing to do. I mean, technologically it’s not easy, and legally I’m sure there are a lot of loopholes in all of this. We have to see how that evolves, and potentially apply a similar kind of guardrails with respect to AI.
That’s the view, at least, that I have on that matter. <strong>Professor Manjunath:</strong> No, it has to start somewhere. This exactly goes to the point that I made earlier: generalists in government cannot handle the pace at which technology can move. You cannot put guardrails on that at the beginning. The moment you know something is happening, you have to get into the act as quickly as possible. Somebody is making an attempt, so let’s understand what’s going on. What will happen is something that we have to see. What was interesting, at least in that attempt, was the way in which the social media companies reacted to both the Australian and the Spanish bans. To me, the most interesting part was that they all said it was too fast, that things had not been thought through. And then I remembered what Facebook’s slogan was: move fast and break things. They are allowed to move fast, but the legal system is not allowed to experiment. That seemed like an interesting contradiction for me to study. <strong>Professor Seth Bullock:</strong> Relatedly, the first AI summit in London was very closed, right? Politicians and the leaders of big tech firms. And the idea that, a couple of years later, governments would actually be legislating in ways that limited, in this case, social media companies is very good news. After London, you could imagine that regulatory capture had happened, right? Governments were not going to be able to resist these big companies and their multinational power. So those first couple of steps of regulating social media for under-16s, even if it doesn’t quite work, even if it’s not exactly right, is at least a step of introducing regulations, and it will make AI companies at least aware that that is a possibility. Because they have to take that responsibility, I think. <strong>Moderator:</strong> Professor Nirav, do you have any other input on that as well?
<strong>Professor Nirav Ajmeri:</strong> I agree with the points that have been made. I think there could be different ways to think about a blanket ban, for instance. If you try to restrict something, people may have more curiosity about why it is being banned. So we have to be thinking about that as well. But it is a step. There will have to be some regulations that come into place; what those regulations should be, we need to be thinking about. A lot of times the worry is that people keep scrolling, and then, the way the algorithms work, and Professor Manjunath knows this better, recommender systems would put you in a rabbit hole. You keep going in one direction, and echo chambers could get formed. The younger population is more vulnerable there, and that is where a ban or restricted access possibly helps. We have to be thinking about how we can do this. Say, on YouTube there is YouTube Kids, and children only see kids’ content, but then there are malicious actors who post content which is targeted towards kids but is not actually kids’ content. Somebody could come up with a new social media platform for kids. I am not very sure what it would look like, but there will be new technology that comes, and it needs some guardrails to be put in place. What kind of guardrails? Research and legislation will have to be thinking about it. <strong>Moderator:</strong> Sure. I think we have time for one last question. Can we give it to somebody at the back? Yeah, the jean jacket. Go for it. Can we pass the mic to the back, please? <strong>Audience Member 4:</strong> So, definitely AI has enabled things in the education and medical domains. But do we think that it has influenced, breached, or violated the consent of creators as well? There are singers who no longer exist, and we are getting to hear their songs in the new generation.
The ones who are alive definitely have a way to improve. But for those who are no longer with us, it is a breach of consent; of course, it falls under the domain of ethical AI, but I just wanted to know your thoughts. <strong>Moderator:</strong> Is this question directed at someone in particular, or is it open to all? Ethics. Okay, whoever would like to take it. <strong>Professor Seth Bullock:</strong> I think it’s a completely legitimate concern. And it’s difficult to understand where we go from here, because the cat is already out of the bag, right? The models are already trained on everyone’s data without our consent, and how do we put that back in the box? I’m not sure that we can. There are currently legal cases going through the courts about the IP claims of musicians and artists, and it will be very interesting to see what the courts decide. The kinds of systems I’m interested in are systems built on consent. So, a population of people who all have diabetes sign up for an app that tracks their disease, and they gain by being part of a community where information is shared to help people manage their diabetes. That’s a much more consenting model. It’s not about stealing people’s writing and art and music from the Internet. But that activity is already underway, and I don’t see a way of really putting it back in the box. <strong>Moderator:</strong> Let’s do one last question. <strong>Audience Member 5:</strong> On the topic of education, one thing we have observed is that with instant feedback from AI tools, students do not go through the whole step-by-step process of building foundations.
So suppose your courses or your tools work in a way that makes the student learn step by step, instead of giving instant gratification with the output. The question is this: has any of the professors on the panel been approached for this kind of thing, for modeling the process of education, or of learning especially? And the other thing: could we see a collaboration in that regard, where we try to create regulations or guidelines for how AI tools should be constructed to impart education step by step, so that gratification is structured? Thank you. <strong>Professor Manjunath:</strong> The short answer is no, nobody is thinking along those lines. And handling AI in a classroom has been quite painful. To give you one example, I asked a student to write a certain program to perform a certain task, and I gave the data. The student went to ChatGPT to understand what the question was about, created her own data, and did not know how to use the data that I was giving. So the point you are making is extremely valid. If you want to think about legislation or any other guardrails, I’m happy to discuss that with you offline; I can only give a very brief answer today. More generally, I think every university is struggling with that question, and I’m hoping that there are lots of bright people and we will start to see some answers. But it’s not easy. <strong>Moderator:</strong> Well, a big thank you to all the panelists, and a big thank you to all the audience members for being such a great and engaged audience.
We have a token of appreciation from the University of Bristol for all the panelists. Thank you very much.

Professor Seth Bullock
Speech speed: 165 words per minute | Speech length: 1587 words | Speech time: 575 seconds

Professor Nirav Ajmeri
Speech speed: 148 words per minute | Speech length: 695 words | Speech time: 280 seconds

Professor Manjunath
Speech speed: 169 words per minute | Speech length: 1529 words | Speech time: 540 seconds

Antaraa Vasudev
Speech speed: 170 words per minute | Speech length: 883 words | Speech time: 310 seconds

Kushe Bahl
Speech speed: 188 words per minute | Speech length: 1945 words | Speech time: 620 seconds

Moderator
Speech speed: 147 words per minute | Speech length: 1619 words | Speech time: 659 seconds

Audience Member 1
Speech speed: 100 words per minute | Speech length: 22 words | Speech time: 13 seconds

Audience Member 2
Speech speed: 136 words per minute | Speech length: 81 words | Speech time: 35 seconds

Audience Member 3
Speech speed: 142 words per minute | Speech length: 94 words | Speech time: 39 seconds

Audience Member 4
Speech speed: 127 words per minute | Speech length: 91 words | Speech time: 42 seconds

Audience Member 5
Speech speed: 186 words per minute | Speech length: 196 words | Speech time: 62 seconds

Speaker 3
Speech speed: 179 words per minute | Speech length: 117 words | Speech time: 39 seconds

Agreements

Agreement points

AI should enable collective intelligence and coordination rather than just individual interactions

Speakers

– Professor Seth Bullock
– Professor Nirav Ajmeri

Arguments

Coordination is intelligence – AI should support populations rather than just individuals


Multi-agent systems can optimize for global welfare rather than individual local maxima


Summary

Both professors agree that AI’s true potential lies in supporting groups and populations collectively, moving beyond individual AI interactions to systems that coordinate multiple users and optimize for collective benefit rather than individual local maxima.


Topics

Artificial intelligence | Social and economic development


Regulation and guardrails are necessary for AI, especially to protect youth

Speakers

– Professor Seth Bullock
– Professor Nirav Ajmeri
– Professor Manjunath
– Audience Member 3

Arguments

Early regulatory steps against social media companies signal that governments can resist big tech influence


Blanket bans may increase curiosity, but guardrails are needed to prevent echo chambers and protect vulnerable youth


Regulation is necessary but governments struggle with the pace of technological change


Countries attempting to ban AI reflects evidence of harm, and society needs guardrails to prevent amplified damage


Summary

There is strong consensus that some form of regulation is needed for AI systems, particularly to protect vulnerable populations like youth from harmful effects such as echo chambers and amplified damage, though the implementation challenges are acknowledged.


Topics

The enabling environment for digital development | Human rights and the ethical dimensions of the information society | Building confidence and security in the use of ICTs


AI is significantly impacting education by providing instant results without foundational learning

Speakers

– Professor Manjunath
– Audience Member 5

Arguments

AI tools provide instant gratification that prevents students from learning foundational step-by-step processes


AI tools providing instant feedback prevent students from learning foundational step-by-step processes


Summary

Both speakers agree that AI tools in education are creating problems by giving students instant results and perfect outputs without requiring them to go through the foundational learning process, potentially undermining deep understanding.


Topics

Capacity development | Social and economic development | Artificial intelligence


Young people’s extensive AI usage raises concerns about human relationships and development

Speakers

– Kushe Bahl
– Antaraa Vasudev
– Audience Member 2

Arguments

Young people preferring AI communication over family reflects the distance we’ve created with each other


AI should be part of life but not the only part, requiring maintained human connections


Extensive AI usage by young people for personal and emotional matters may have concerning impacts on developing minds


Summary

There is consensus that while AI can be part of young people’s lives, excessive reliance on AI for emotional and personal matters indicates problems in human relationships and may negatively impact youth development.


Topics

Human rights and the ethical dimensions of the information society | Capacity development


AI training on data without consent raises legitimate intellectual property concerns

Speakers

– Professor Seth Bullock
– Audience Member 4

Arguments

AI training on data without consent raises legitimate IP concerns that are difficult to reverse


AI’s use of deceased artists’ work without consent violates intellectual property rights and raises ethical concerns


Summary

Both speakers acknowledge that AI systems being trained on copyrighted material without consent creates legitimate intellectual property violations, particularly concerning deceased artists’ work, though solutions are challenging to implement.


Topics

Artificial intelligence | Human rights and the ethical dimensions of the information society | Data governance


Similar viewpoints

Both speakers emphasize the importance of engaging with and understanding young people’s relationship with AI rather than simply restricting it, while acknowledging that AI usage patterns may reflect broader social issues in human relationships.

Speakers

– Professor Seth Bullock
– Kushe Bahl

Arguments

We should trust young people as natives in the AI world while engaging with them about their usage


Young people preferring AI communication over family reflects the distance we’ve created with each other


Topics

Human rights and the ethical dimensions of the information society | Capacity development


While they disagree on the direction, both speakers acknowledge that AI fundamentally shifts power dynamics in society, with the outcome depending on how it’s implemented and who controls the resources and access.

Speakers

– Professor Manjunath
– Antaraa Vasudev

Arguments

AI shifts power towards institutions because they have resources to invest and understand the technology


AI shifts power towards citizens by addressing information asymmetry


Topics

Artificial intelligence | Social and economic development | Closing all digital divides


Both professors believe governments have an important role in AI governance but should focus on enabling and regulating rather than directing technological development, learning from both historical failures and recent regulatory successes.

Speakers

– Professor Seth Bullock
– Professor Manjunath

Arguments

Governments should enable rather than micromanage AI development, learning from historical failures like Japan’s fifth generation computing


Early regulatory steps against social media companies signal that governments can resist big tech influence


Topics

The enabling environment for digital development | Artificial intelligence


Unexpected consensus

AI’s real value comes from doing what humans cannot do, not replacing human tasks

Speakers

– Kushe Bahl
– Professor Seth Bullock

Arguments

Simple AI replacement of humans doesn’t create sustainable value – real value comes from AI doing what humans cannot do


AI can enable rich collective interactions like detailed citizen feedback to governments


Explanation

It’s unexpected that both a business practitioner and an academic researcher independently arrived at the same conclusion that AI’s true value lies not in simple human replacement but in enabling capabilities that are impossible for humans to achieve at scale.


Topics

Artificial intelligence | The digital economy | Social and economic development


The need for transparency in AI systems

Speakers

– Antaraa Vasudev
– Professor Nirav Ajmeri

Arguments

Transparency matters more than effectiveness for AI in public systems


AI systems must account for different preferences and explain collective decisions to build trust


Explanation

The strong consensus on prioritizing transparency over effectiveness is unexpected, as it goes against typical efficiency-focused approaches in technology deployment, suggesting a mature understanding of AI’s social implications.


Topics

Artificial intelligence | Human rights and the ethical dimensions of the information society | Data governance


AI as an amplifier of existing problems rather than a solution

Speakers

– Professor Manjunath
– Audience Member 3

Arguments

Recommendation systems significantly shape human preferences through learning algorithms and utility functions


Countries attempting to ban AI reflects evidence of harm, and society needs guardrails to prevent amplified damage


Explanation

The consensus that AI amplifies existing societal problems rather than solving them is unexpected, as it challenges the common narrative of AI as primarily beneficial technology, showing sophisticated understanding of AI’s systemic effects.


Topics

Artificial intelligence | Human rights and the ethical dimensions of the information society | Building confidence and security in the use of ICTs


Overall assessment

Summary

The speakers showed remarkable consensus on several key issues: the need for AI to serve collective rather than individual interests, the importance of regulation and guardrails (especially for youth protection), concerns about AI’s impact on education and human relationships, and the need for transparency in AI systems. There was also agreement on intellectual property concerns and the view that AI’s real value comes from enabling impossible-at-scale human capabilities rather than simple task replacement.


Consensus level

High level of consensus across diverse perspectives (academic researchers, practitioners, civil society, and audience members) suggests these are fundamental concerns that transcend disciplinary boundaries. The implications are significant: there appears to be broad agreement that AI development should prioritize collective benefit, transparency, and human welfare over pure efficiency or profit, and that proactive governance measures are essential rather than optional.


Differences

Different viewpoints

Whether AI shifts power towards citizens or institutions

Speakers

– Antaraa Vasudev
– Professor Manjunath

Arguments

AI shifts power towards citizens by addressing information asymmetry


AI shifts power towards institutions because they have resources to invest and understand the technology


Summary

Antaraa believes AI empowers citizens by reducing information gaps, while Professor Manjunath argues institutions benefit more due to their financial resources and technical understanding.


Topics

Artificial intelligence | Human rights and the ethical dimensions of the information society | Closing all digital divides


Government’s role in AI development and regulation

Speakers

– Professor Manjunath
– Professor Seth Bullock

Arguments

Governments should enable rather than micromanage AI development, learning from historical failures like Japan’s fifth generation computing


Early regulatory steps against social media companies signal that governments can resist big tech influence


Summary

Professor Manjunath advocates for minimal government intervention based on historical failures, while Professor Bullock sees recent regulatory actions as positive signs of government capability to regulate tech companies


Topics

The enabling environment for digital development | Artificial intelligence


Whether algorithms reduce or hide bias

Speakers

– Professor Manjunath
– Moderator

Arguments

Algorithms are more likely to hide bias rather than reduce it


Are algorithms today more likely to reduce bias or hide bias better?


Summary

Professor Manjunath definitively states that algorithms hide bias and that bias will increase, disagreeing with the moderator’s framing that suggested algorithms might reduce bias


Topics

Artificial intelligence | Human rights and the ethical dimensions of the information society


Unexpected differences

The fundamental nature of AI’s impact on power distribution

Speakers

– Antaraa Vasudev
– Professor Manjunath

Arguments

AI shifts power towards citizens by addressing information asymmetry


AI shifts power towards institutions because they have resources to invest and understand the technology


Explanation

This disagreement is unexpected because both speakers work on AI applications for social good, yet they have fundamentally opposite views on whether AI democratizes or concentrates power. This suggests deep philosophical differences about technology’s role in society


Topics

Artificial intelligence | Human rights and the ethical dimensions of the information society | Closing all digital divides


The effectiveness of government regulation in fast-moving technology

Speakers

– Professor Manjunath
– Professor Seth Bullock

Arguments

Governments should enable rather than micromanage AI development, learning from historical failures like Japan’s fifth generation computing


Early regulatory steps against social media companies signal that governments can resist big tech influence


Explanation

This disagreement is unexpected because both are academics who might be expected to have similar views on regulation. However, they represent different perspectives – one emphasizing historical failures of government intervention, the other seeing recent regulatory successes


Topics

The enabling environment for digital development | Artificial intelligence


Overall assessment

Summary

The main areas of disagreement center on power dynamics (whether AI empowers citizens or institutions), the appropriate role of government in AI regulation, and the nature of algorithmic bias. There were also nuanced differences on youth protection approaches and the balance between AI benefits and human connection.


Disagreement level

Moderate to high disagreement level with significant implications. The fundamental disagreement about whether AI democratizes or concentrates power suggests different underlying philosophies about technology’s role in society. The disagreement on government regulation approaches has direct policy implications for how AI should be governed. These disagreements reflect broader tensions in the AI governance community between techno-optimists and techno-skeptics, and between those favoring market-led versus government-led approaches.


Partial agreements

Both agree that regulation is needed and that recent government actions against social media companies are positive steps, but they disagree on the extent of government involvement – Bullock sees regulatory capability while Manjunath emphasizes government limitations with fast-moving technology

Speakers

– Professor Seth Bullock
– Professor Manjunath

Arguments

Early regulatory steps against social media companies signal that governments can resist big tech influence


Regulation is necessary but governments struggle with the pace of technological change


Topics

The enabling environment for digital development | Artificial intelligence


Both recognize the importance of maintaining human connections alongside AI usage, but Bahl focuses on rebuilding family bonds as a response to AI preference, while Antaraa emphasizes balance and ensuring AI doesn’t become the only part of life

Speakers

– Kushe Bahl
– Antaraa Vasudev

Arguments

Young people preferring AI communication over family reflects the distance we’ve created with each other


AI should be part of life but not the only part, requiring maintained human connections


Topics

Human rights and the ethical dimensions of the information society | Capacity development


Both agree on the need to protect youth while respecting their technological sophistication, but Bullock emphasizes trusting and engaging with young people, while Ajmeri focuses more on the need for protective guardrails and regulations

Speakers

– Professor Seth Bullock
– Professor Nirav Ajmeri

Arguments

We should trust young people as natives in the AI world while engaging with them about their usage


Blanket bans may increase curiosity, but guardrails are needed to prevent echo chambers and protect vulnerable youth


Topics

Human rights and the ethical dimensions of the information society | Building confidence and security in the use of ICTs



Takeaways

Key takeaways

AI should focus on collective intelligence and coordination rather than just individual assistance, enabling populations to work together more effectively


Recommendation systems significantly shape human preferences and behavior, often hiding rather than reducing bias


AI’s impact on governance is contested – it may empower citizens through better information access or strengthen institutions that have resources to leverage the technology


Government’s role should be enabling AI development rather than micromanaging it, based on historical lessons of technological failures


AI will reshape rather than simply replace jobs, with real value coming from AI doing tasks humans cannot perform rather than basic substitution


Simple AI replacement of human tasks often doesn’t create sustainable value – the biggest opportunities lie in personalized systems that generate new revenue rather than just cost savings


Students across all disciplines need to learn how to leverage AI tools effectively rather than focusing on which fields are ‘safe’ from AI disruption


Young people’s preference for AI communication over human interaction reflects broader social disconnection that needs addressing


AI regulation attempts, while imperfect, represent necessary experimentation and signal that governments can resist big tech influence


Educational institutions are struggling with AI integration as students produce perfect outputs without understanding underlying concepts


Resolutions and action items

Students should equip themselves with AI skills across all disciplines to leverage AI for enhanced performance in their fields


Families and communities need to rebuild human connections to compete with AI’s appeal to young people


Universities need to develop new approaches to education that ensure foundational learning despite AI assistance


Researchers, companies, non-profits and governments should partner to develop AI systems designed for populations rather than individuals


AI systems need to embed social responsibility and appreciation for impact on other users to prevent resource conflicts


Unresolved issues

How to reverse AI training on data without consent – ‘the cat is already out of the bag’


Whether AI will ultimately increase or decrease employment – ‘nobody actually knows’ the final math


How to effectively regulate AI given the pace of technological change versus government capabilities


How to design educational systems that maintain step-by-step learning processes while incorporating AI tools


Whether current regulatory attempts (like youth social media bans) will be technically and legally effective


How to balance AI optimization for individual preferences versus collective social welfare


How to ensure AI systems account for diverse preferences while building trust through explainable collective decisions


Suggested compromises

AI development should focus on consent-based models where users voluntarily participate and benefit from shared intelligence (like diabetes management communities)


Governments should act as enablers and monitors of AI development rather than directors, learning from successful examples like NPCI in India


AI regulation should start with experimental steps (like social media restrictions) even if imperfect, to establish precedent for future governance


Educational institutions should focus on teaching students to use AI for enhanced human capabilities rather than replacement


AI systems should be designed as ‘part of life but not the only part’ requiring maintained human connections and interactions


Small business and self-employed workers should organize in groups to leverage low-cost AI systems for collective benefit rather than competing with large institutions


Thought provoking comments

So instead of AI answering individual questions, AI can help coordinate those people, share intelligence, share their knowledge, and achieve better outcomes. And I think that’s quite a different way of framing AI than many of the systems that we’re hearing about and requires different technologies and different ways of delivering that to people, different ways of engaging with populations.

Speaker

Professor Seth Bullock


Reason

This comment reframes AI from an individual tool to a collective coordination mechanism, challenging the dominant narrative of AI as personal assistants. It introduces the concept of ‘AI for populations’ rather than ‘AI for individuals,’ which is a fundamentally different paradigm.


Impact

This set the foundational tone for the entire discussion, shifting focus from individual AI interactions to systemic, community-based applications. It influenced subsequent speakers to think about collective benefits and multi-agent systems, establishing the panel’s central theme of AI for collective good.


So there is definitely a huge impact on the population’s preferences by the recommendation systems… The difference between advertising advertisements that you see on the street and the advertisement corresponding to a recommendation engine is that you are significantly more receptive. You are looking to do something. And when you are trying to look for something to do, if the recommendation pushes you in a certain direction, you are naturally going to go there.

Speaker

Professor Manjunath


Reason

This insight reveals the subtle but profound way AI shapes human behavior by exploiting moments of receptivity. It goes beyond simple bias to explain how recommendation systems fundamentally alter human preferences over time, making the influence more insidious than traditional advertising.


Impact

This comment created a sobering moment in the discussion, prompting the moderator’s personal reflection (‘I really wonder how much my personality is my own at this point’) and shifting the conversation toward questions of agency and authentic choice. It added psychological depth to the technical discussion.


The real value unlock, which is sustaining, is actually when you get AI to do something which humans can’t do or are not able to do because it’s so time consuming and so difficult… That genuinely has value. It creates huge value unlock.

Speaker

Kushe Bahl


Reason

This comment challenges the prevalent focus on AI as job replacement and reframes it as capability augmentation. It provides concrete economic reasoning for why AI should complement rather than simply substitute human work, offering a more nuanced view of AI’s economic impact.


Impact

This shifted the job displacement anxiety in the room toward a more constructive discussion about AI creating new forms of value. It directly influenced the student-focused questions that followed and provided a framework for thinking about career preparation in an AI world.


Absolutely to the institutions. They have the money to invest and discover what’s going on. There is no way citizens can beat that so easily.

Speaker

Professor Manjunath


Reason

This blunt contradiction to Antaraa’s optimistic view about AI empowering citizens cuts through idealistic rhetoric to highlight power dynamics and resource asymmetries. It forces acknowledgment of how economic realities shape AI’s actual impact on power distribution.


Impact

This created a moment of tension that revealed the complexity of AI’s social impact. It prevented the discussion from becoming overly optimistic and introduced a critical perspective on whether AI truly democratizes power or concentrates it further.


I think there are about 150 million self-employed people. If each of those people could somehow earn 600 rupees more because of AI… that’s a unicorn. So I think we think of the next 50 unicorns. We may not think of like 50 companies worth a billion dollars, but we may think of 50 innovations that puts 600 rupees more in the pockets of 150 million people.

Speaker

Kushe Bahl


Reason

This reframes innovation success metrics from creating billion-dollar companies to distributed economic impact across millions of small-scale workers. It’s a profound shift in how we measure AI’s societal value, particularly relevant for India’s economic structure.


Impact

This comment redirected the conversation from abstract discussions about AI’s potential to concrete, human-scale impacts. It grounded the discussion in India’s economic reality and influenced other panelists to think about decentralized, grassroots applications of AI.


I think the only thing I would say is, I think it makes one actually question where we are as individuals, as parents, as family, that our children prefer to communicate to a relatively soulless communication device… That they prefer to talk to that than to us. It shows what a distance we have created.

Speaker

Kushe Bahl


Reason

This turns a technology concern into a mirror for examining human relationships and social bonds. Rather than blaming AI, it challenges the audience to consider what gaps in human connection AI is filling, making it a deeply personal and introspective observation.


Impact

This comment transformed a typical ‘AI is dangerous for kids’ discussion into a more nuanced examination of human relationships and social responsibility. It prompted other panelists to focus on maintaining human connections rather than simply restricting technology.


Generalists in government cannot handle the space at which technology can move. You cannot put guardrails on that at the beginning… They are brilliant people. They know society. They understand administration. But they don’t understand technology. Especially a technology that is moving too damn fast has a very large surface area and they cannot control it.

Speaker

Professor Manjunath


Reason

This comment articulates a fundamental mismatch between the pace of technological development and the capacity of democratic institutions to govern it. It highlights a structural problem in how societies manage rapid technological change.


Impact

This provided a framework for understanding regulatory challenges that influenced the subsequent discussion about AI bans and government intervention. It helped explain why simple regulatory approaches might fail and why enabling rather than controlling might be more effective.


Overall assessment

These key comments collectively transformed what could have been a superficial discussion about AI applications into a nuanced examination of power, agency, and social responsibility. The most impactful insights challenged conventional wisdom—reframing AI from individual tools to collective systems, questioning whether AI empowers or concentrates power, and shifting focus from job replacement to capability augmentation. The discussion evolved from technical possibilities to fundamental questions about human relationships, democratic governance, and economic justice. The tension between optimistic and critical perspectives, particularly around power dynamics, prevented the conversation from becoming either utopian or dystopian, instead fostering a more realistic and actionable dialogue about AI’s role in society.


Follow-up questions

How can we design AI systems that embed social responsibility and appreciation for how agent behavior impacts other agents in multi-agent systems?

Speaker

Professor Seth Bullock


Explanation

This is crucial as AI systems scale up: a single simple request could create cascading effects that consume resources and disadvantage others, so social responsibility must be embedded in the agents themselves


What are the right utility functions for recommendation systems, and how can we prevent them from dramatically altering user preferences over time?

Speaker

Professor Manjunath


Explanation

Understanding how recommendation systems shape preferences is critical because these systems have spectacularly large effects on population-level preferences, and there is no consensus on what the ‘right’ utility function should be


How can we build frameworks for government AI use that are transparent, accessible, and have equity built in before racing to deploy new models?

Speaker

Antaraa Vasudev


Explanation

This is essential for ensuring AI in governance serves citizens rather than just optimizing for efficiency, requiring careful framework design before implementation


How can we develop AI systems that enable richer collective intelligence interactions beyond simple voting mechanisms?

Speaker

Professor Seth Bullock


Explanation

This could transform democratic participation by allowing citizens to provide detailed input that AI can process and synthesize, moving beyond traditional limited interaction methods


How can small businesses and self-employed individuals organize to create AI-powered systems that benefit them directly rather than relying on institutional solutions?

Speaker

Kushe Bahl


Explanation

This could unlock significant economic value for 150 million self-employed people in India, creating decentralized alternatives to institution-controlled platforms


How do we aggregate different people’s preferences and explain collective AI decisions in a way that ensures buy-in and trust?

Speaker

Professor Nirav Ajmeri


Explanation

This is fundamental to building AI systems that work for collectives while maintaining transparency, fairness, and accountability


How can we ensure students understand the content behind AI-generated outputs rather than just producing perfect-looking work?

Speaker

Professor Manjunath


Explanation

This addresses a critical educational challenge where AI tools may be undermining genuine learning and understanding


What are effective regulatory approaches for AI that can keep pace with rapid technological development without stifling innovation?

Speaker

Professor Manjunath and Professor Seth Bullock


Explanation

This explores the tension between necessary oversight and the speed of AI development, learning from early social media regulation attempts


How can AI educational tools be designed to promote step-by-step learning rather than instant gratification?

Speaker

Audience Member 5


Explanation

This addresses concerns about AI tools undermining foundational learning processes by providing immediate answers without building understanding


What guidelines should govern the construction of AI tools for education to ensure structured learning with appropriate gratification?

Speaker

Audience Member 5


Explanation

This seeks to establish standards for educational AI that support proper learning progression rather than shortcuts


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.