Artificial General Intelligence and the Future of Responsible Governance

20 Feb 2026 11:00h - 12:00h

Artificial General Intelligence and the Future of Responsible Governance

Session at a glance: summary, keypoints, and speakers overview

Summary

The panel opened by noting the rapid acceleration of AI since 2020 and the emerging public debate about artificial general intelligence (AGI), warning that societies that ignore these trends may miss the chance to shape future governance [1-8]. Participants agreed that AGI is envisioned as a form of AI that can reason, learn, adapt, transfer knowledge and operate beyond narrow domains, unlike current systems that excel only in specific tasks [11-18].


Simonas Satunas offered a pragmatic definition of AGI as an entity capable of performing any human professional task with comparable accuracy, estimating a 3-to-7-year horizon based on growing public trust in generative AI tools [21-24]. Alexandra Bech Gjørv cautioned against fixing a timeline, emphasizing that progress depends on sustained investment and advanced low-latency hardware (neuromorphic and edge computing), and that privacy constraints on personal data may limit situational awareness [23-30][35-37]. Kenny Kesar highlighted that achieving a "five-nines" level of accuracy will be essential for moving from probabilistic to more deterministic AI behavior, noting that improving from 90% to 99% accuracy took five to ten years and that each additional nine adds one to two more [41-48].


Simonas Cerniauskas warned that current massive compute spending may be a bubble and that algorithmic efficiency could curb over-investment, while Simonas Satunas stressed that compute is only one of several critical components, including energy, data, and especially human critical-thinking capacity [70-71][72-90]. He also mapped AI risks into four layers (traditional privacy and security, mental-health impacts, social effects such as empathy erosion, and macro-level threats to democracy), calling for coordinated national and international strategies to mitigate them [131-139].


Alexandra argued that democratic access to compute must be paired with education of policymakers, noting that human oversight is often inadequate in ethical dilemmas and that algorithmic monitoring can reduce bias, as illustrated by a sports-analytics case where video review eliminated discriminatory decisions [96-101]. Kenny proposed institutionalizing “AI operating procedures” (AOP) analogous to current SOPs, training models to avoid bias and establishing external audits to ensure ethical compliance as AI approaches general intelligence [191-199].


The panelists concurred that early "anchor controls" such as robust labeling, regulatory measures, and resilience planning (including rollback mechanisms and diversified energy sources) are needed to limit harmful outcomes while enabling innovation [173][187-189]. They also agreed that collaboration among industry, academia, and governments is vital to embed egalitarian values and prevent profit-driven bias, citing examples like the amplification of violent content in Myanmar through platform algorithms [174-180]. In closing, Vinayak summarized that understanding AGI’s implications for security, privacy, and ethics requires immediate action, and announced the launch of an AI Cyber Security Terminal to support these efforts [202-207]. Overall, the discussion underscored a consensus that multidisciplinary governance, education, and measured investment are essential to steer AGI development responsibly [1-8].


Keypoints


Major discussion points


Defining AGI and estimating its arrival – The panel opened by questioning what “Artificial General Intelligence” actually means and how soon it might appear. Vinayak highlighted the surge in AI breakthroughs since 2020 and the growing talk of AGI [1-4]. Simonas Cerniauskas noted common traits of AGI such as reasoning, learning, adaptation and knowledge transfer [12-18]. Simonas Satunas offered a concrete (though simplified) definition – an AI that can perform any human professional task at human-level accuracy – and projected a 3-to-7-year horizon [21].


Compute, hardware, and investment as enablers (and possible bottlenecks) – Several speakers stressed that massive compute power, new architectures and funding are critical to reaching AGI. Cerniauskas described the current “super-high-cycle” of investment and the risk of a bubble [70-71]. Simonas Satunas used a 19th-century transport metaphor to argue that compute is only one element among energy, data, implementation and human skills [72-85][86-87]. Alexandra added that low-latency, energy-efficient neuromorphic and edge hardware are required for human-like situational awareness [31-34].


Security, privacy and ethical threats of powerful AI – The conversation turned to the dangers that more capable models pose. Kenny warned that the same AI that creates content can also generate sophisticated attacks, and that an AGI could impersonate humans such as CEOs [105-108]. Simonas Satunas broke down AI risks into four layers – classic cyber-security/privacy, mental-health impacts, social-cohesion, and macro-societal threats to democracy [131-138]. Alexandra highlighted the need for human oversight and the difficulty of making ethical decisions in autonomous systems [96-102].


Human factors: critical thinking, education and regulation – Multiple panelists argued that technology alone will not solve the challenges; societies must boost critical thinking and regulatory capacity. Simonas Satunas stressed that raising public critical thinking is as important as investing in compute [88-92]. Kenny pointed out that over-reliance on AI-generated content could erode our own reasoning muscles, creating a “vicious cycle” [164-170]. Alexandra called for educating politicians and the public so that ethical choices are made before machines dominate [96-102][187-190].


Early-stage governance and “anchor-control” concepts – The moderator asked for concrete steps that can be taken now. Cerniauskas suggested technical measures such as watermarking/labeling and hinted at regulatory actions [173-176]. Simonas Satunas advocated for global coordination, industry-academia collaboration, and embedding egalitarian values into AI design [174-180]. Alexandra proposed resilience measures, robust rollback mechanisms, and scenario planning for infrastructure loss [187-190]. Kenny introduced the idea of an “AI Operating Procedure” (AOP) to embed bias-checks, ethical reviews and continuous monitoring into AI deployments [191-199].


Overall purpose / goal of the discussion


The panel aimed to demystify AGI, clarifying its definition, likely timeline, and technical prerequisites, while simultaneously surfacing the security, privacy, ethical, and societal risks that accompany rapid AI advancement. By juxtaposing technical optimism with cautionary perspectives, the participants sought to identify practical “anchor-control” measures and governance frameworks that can be instituted today to steer the emergence of AGI responsibly.


Overall tone and its evolution


Opening (0:00-3:30) – Curious and forward-looking, with speakers outlining possibilities and expressing excitement about breakthroughs.


Middle (3:30-15:00) – The tone shifts to a more cautionary stance, emphasizing the gaps between current narrow AI and true AGI, and flagging looming security and ethical threats.


Later (15:00-35:00) – Concern deepens as concrete risks (misinformation, bias, cyber-attacks) are discussed, but a collaborative, problem-solving attitude emerges.


Closing (35:00-end) – The discussion becomes pragmatic and solution-oriented, focusing on governance, resilience, education and concrete early-stage controls.


Thus, the conversation moves from exploratory optimism to measured concern and finally to actionable recommendations.


Speakers

Ms. Alexandra Bech Gjørv – Head of Sintef, Norway’s largest research institute; expertise in AI research, neuromorphic and edge computing, and AI governance.


Mr. Vinayak Godse – Moderator/host of the panel discussion on AGI; involved in AI policy and security discussions.


Mr. Simonas Satunas – Speaker on AGI, provides definitions and timelines; background in AI development and public engagement (Israel).


Mr. Kenny Kesar – Speaker on AI accuracy, compute, and market disruption; experience in AI consulting and implementation for clients.


Simonas Cerniauskas – Speaker focusing on AI investment cycles, compute efficiency, and regulatory perspectives.


Additional speakers:


None (all participants in the transcript are covered by the speakers list).


Full session report: comprehensive analysis and detailed insights

The session opened with moderator Vinayak Godse framing the rapid acceleration of artificial-intelligence research that began around 2020 and intensified after the launch of powerful generative models in early 2023, warning that societies that ignore these developments risk missing the chance to shape the governance of the next technological wave, possibly the arrival of artificial general intelligence (AGI) within the next two to ten years [1-8].


Defining AGI – Cerniauskas said most definitions agree that AGI should be able to reason, learn, adapt and transfer knowledge, and that it must be broader than today’s narrow-domain systems such as customer-service bots [12-18]. Building on this, Satunas offered a pragmatic, human-centric formulation: an AI that can perform any professional task with the same accuracy and professionalism as a human expert. He linked this functional view to a growing public trust in generative tools, noting that roughly half of Israeli respondents already trust AI more than their friends, which he interprets as a step toward AGI [21-24].


Timeline and investment uncertainty – Satunas projected a 3-to-7-year horizon, arguing that the convergence of technical capability and societal trust makes the milestone imminent [21-24]. By contrast, Gjørv rejected a fixed schedule, insisting that progress depends on sustained investment, hardware breakthroughs, data-privacy and regulatory challenges, and warned that policy should ensure broad, democratic access to compute resources rather than concentrating power in a few providers [23-26]. Godse echoed this uncertainty, urging societies to prepare now rather than wait for a precise date [1-7]. Cerniauskas described the current “super-high-cycle” of compute spending as potentially speculative, noting industry chatter about a bubble and the possibility of over-capacity persisting for years, as even Mark Zuckerberg has suggested [70-71].


Technical prerequisites – Gjørv highlighted that human-like situational awareness will require ultra-low-latency, energy-efficient hardware such as neuromorphic and edge-computing architectures, together with massive private data streams – a requirement that immediately raises privacy concerns [31-34][35-37]. Vinayak asked about the latency of System 2-type reasoning and the limitations of language-only models; the panel noted that current large language models (LLMs) excel at fast, intuitive (System 1) pattern-matching but struggle with deep, logical (System 2) contextual understanding, exposing a key bottleneck for AGI [95-98]. Kesar framed progress in terms of accuracy, invoking the “five-nines” benchmark: “to get from 90% to 99% accuracy took five to ten years”, and argued that each additional nine adds one to two more years, driving compute demand toward AGI [44-48]. Satunas broadened the picture with a 19th-century transport metaphor, arguing that compute is only one link in a chain that also includes energy, data, implementation, language localisation and, crucially, human critical-thinking capacity [72-90].
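Kesar's "nines" arithmetic can be made concrete with a small back-of-the-envelope sketch (illustrative only; the function names are invented for this example, and the years-per-nine figure is the panel's rough estimate, not measured data):

```python
import math

def nines(accuracy: float) -> float:
    """Number of 'nines' in an accuracy figure, e.g. 0.999 -> 3 nines."""
    return -math.log10(1.0 - accuracy)

def extra_years(target_nines: int, years_per_nine: float = 1.5) -> float:
    """Rough years beyond the 99% (two-nines) baseline, using the
    panel's estimate of one to two years per additional nine."""
    return max(0, target_nines - 2) * years_per_nine

print(nines(0.99))     # ~2 nines (the 99% baseline)
print(nines(0.99999))  # ~5 nines ("five-nines" accuracy)
print(extra_years(5))  # -> 4.5 years beyond 99%, at 1.5 years per nine
```

The logarithmic framing makes the panel's point visible: each additional nine cuts the error rate tenfold, so equal-looking gains in "nines" demand exponentially shrinking error, which is why each one is estimated to cost additional years of progress.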


Security, privacy and risk taxonomy – Kesar warned that the same generative models that create content can also craft sophisticated cyber-attacks and impersonate senior executives, making future threats “real” once AGI can emulate human behaviour [105-108]. Gjørv added that achieving true situational awareness would require access to personal data, but privacy regulations limit such collection, creating a tension between capability and rights [35-37]. Satunas categorised AI risks into four layers: (1) classic privacy, security and fraud; (2) mental-health impacts; (3) social effects such as erosion of empathy and bullying; and (4) macro-level threats to democracy through manipulation and fake-news campaigns [131-138].


Human factors – Satunas argued that without a strong emphasis on critical-thinking education, societies will be unable to recognise AI-generated manipulation; he noted that 30 % of online content is already AI-generated, creating a feedback loop that could stall human intellectual growth [154-155][165-170]. Kesar echoed this, warning that reliance on AI-generated content may erode the “brain-muscle” needed for innovation, leading to a vicious cycle where AI diminishes the very intelligence it seeks to emulate [164-170]. Gjørv reinforced the need for political and public education, pointing out that human oversight often fails in ethical dilemmas and that policymakers must be equipped to make hard choices before machines do [96-102]. Cerniauskas also noted the importance of the human critical-thinking element as part of the broader ecosystem [12-18].


Early-stage “anchor-control” proposals – The panel offered a spectrum of concrete measures:


Technical and regulatory safeguards (e.g., watermarking, output labeling) – Cerniauskas [173-176];


Resilience planning (robust rollback mechanisms, diversified energy sources, scenario-based risk matrices) – Gjørv [187-189];


AI Operating Procedure (AOP) – a procedural framework embedding bias-audits, ethical training and continuous monitoring, analogous to traditional SOPs – Kesar [191-199];


Global regulatory collaboration – especially for smaller nations, to embed egalitarian values and mitigate bias, citing the Myanmar example where platform algorithms amplified violent content – Satunas [174-180].
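The watermarking/output-labeling safeguard above can be sketched as a minimal provenance label. This is a toy scheme for illustration, not a real standard such as C2PA; every field and function name here is an assumption:

```python
import hashlib

def label_output(text: str, generator: str) -> dict:
    """Wrap AI-generated text with a provenance label (toy scheme)."""
    return {
        "content": text,
        "label": {
            "generator": generator,
            "ai_generated": True,
            # The hash binds the label to this exact content.
            "sha256": hashlib.sha256(text.encode("utf-8")).hexdigest(),
        },
    }

def verify_label(record: dict) -> bool:
    """Check that the content still matches the hash in its label."""
    expected = hashlib.sha256(record["content"].encode("utf-8")).hexdigest()
    return record["label"]["sha256"] == expected

record = label_output("Example model output.", "demo-model-1")
print(verify_label(record))   # True: content untouched
record["content"] += " (edited)"
print(verify_label(record))   # False: content no longer matches its label
```

Real labeling schemes would additionally sign the label cryptographically so it cannot simply be regenerated after tampering; the sketch only shows the core idea of binding a machine-readable "AI-generated" marker to specific content.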


Points of agreement – All speakers concurred that education, awareness and critical-thinking skills are essential to counter AI-driven threats [154-155][164-170]; they also agreed on the need for layered risk-management frameworks that combine technical safeguards, resilience planning and procedural oversight [187-189][131-138][191-199]. Both Gjørv and Satunas highlighted privacy as a fundamental constraint on the data required for human-level situational awareness [35-37][131-138]. The panel agreed that the proliferation of AI-generated misinformation poses a serious societal risk [144-149].


Remaining disagreements – The timeline for AGI remained contested (Satunas’ 3-to-7-year estimate vs. Gjørv’s refusal to set a horizon vs. Godse’s call for preparedness). On the primary driver of progress, Kesar foregrounded compute-driven accuracy improvements, while Satunas argued for a holistic ecosystem, and Gjørv emphasised specialised low-latency hardware as the bottleneck. Regarding governance mechanisms, the four speakers advocated different early-stage toolkits, reflecting a lack of consensus on the optimal approach.


Closing remarks and announcement – Godse summarised the collective insight: while the acceleration of AI capabilities creates unprecedented opportunities, immediate action is required to embed security, privacy, safety and ethical safeguards into the emerging paradigm [202-207]. He concluded by announcing the launch of the “AI Cyber Security Terminal” as the session’s final action [208-210].


The panel’s recommendations can be grouped as follows: (i) institute early anchor controls such as output labeling and technical safeguards; (ii) invest in education programmes that foster critical-thinking and AI literacy; (iii) foster cross-sector collaboration to develop global, risk-adaptive regulatory frameworks; (iv) adopt AI Operating Procedures that institutionalise bias-checks and ethical reviews; and (v) design resilience and rollback mechanisms to limit the impact of failures or malicious use. These steps aim to steer the trajectory toward AGI responsibly, balancing compute-driven progress with human-centred governance. [173-176][187-189][191-199][202-207][208-210]


Session transcript: complete transcript of the session
Mr. Vinayak Godse

Pet Summit, and the basic idea and intent behind setting up this session is: while all these things were happening in AI in the period since 2020, a lot of development happening, somehow all that is now leading to the kind of acceleration that we are seeing in the last three years, and especially this year, since January, with all the new launches that we see, we are getting the first signs of a powerful AI, right? And now, because of that, the discussion about AGI seems to be gaining quite significant ground, right? And although people still have a lot of doubt and skepticism about whether it is really a reality or a possibility in the coming future, or what that means, many people are still skeptical.

They are struggling to define what that means for us as an overall society. And I can tell about India: probably we didn’t pay much attention when AI was coming. If we don’t pay attention now to what is coming in the next 2, 3, 5 years of time, or 10 years of time, which is probably the timeline for AGI, then probably we will again miss out on thinking, talking, discussing, governing it better, basically. So this discussion is to help us, and the audience here, understand what we mean by AGI, whether we can really think about it right now, what the different consequences are that we need to think about, and to then try to find the possible meaning for security, privacy and ethics, basically.

So I would like to start with you, Simonas: how do you see this concept of AGI, and fundamentally, how will it be different from what we see today? What is your understanding of the concepts of artificial intelligence and artificial general intelligence?

Simonas Cerniauskas

So, yeah, thank you very much for having us here. And, yeah, like you said, it’s a really nice topic to wrap up the conference. So, well, so, you know, of course, there are kind of different definitions of AGI. And on the same time, most of them agree that it’s, you know, it’s about smarter AI than we have right now. We were joking a bit that, you know, on the way, the traffic is really, you know, exceptional. And, yeah, that’s a sign that maybe we are still not here today. So, but, yeah, but basically kind of among those common agreements that, let’s say, the smarter AI should reason. It should learn. It should adapt. And also it should transfer knowledge.

And also it shouldn’t be, you know, very. narrow, like, you know, of course, right now we have great, let’s say, areas where AI is really helping a lot, like co -development, customer service, and et cetera, but, you know, it should be much broader. So, and, you know, don’t think that any of us, maybe the colleagues will be able to answer when we will have, you know, and what timing, but definitely, you know, that’s one of the big topics right now.

Mr. Vinayak Godse

Let me come to you. You look at digital initiatives and artificial intelligence as one of the important research areas. So we are grappling with understanding what AI is right now, but can we think about what would happen in the next three to five years of time? And that seems to be the timeline for the AGI era.

Mr. Simonas Satunas

So I’m the one with the date; I’ll do my best. So first of all, my definition of AGI is very simplistic, and I think that we need some simple explanation in this field. My very simple explanation is: AGI will be something that can perform every human task at the level of accuracy and professionality of a human professional. Now, this is not an optimal definition, because people can ask about every task: if a baby is crying, will the AGI help him stop crying? And people can ask what the level of professionality is. But I think that this is something that we can digest. And I think that, for me, I understood that we are getting closer there, not from a technology perspective, but from the perspective of talking with real Israelis about their problems. Five years ago, when I was telling this definition of AGI, people were like, oh, it’ll never happen, not in our lifetime. And right now, when I’m speaking with Israelis and I’m telling them this is AGI, they’re saying, oh, aren’t we there yet? Because I thought that ChatGPT can help me like a lawyer, isn’t it true? Now, I think that we are not there yet, okay? There is a very sharp line between the AI that we are experiencing today and true AGI. But the fact that the audience is already confused, the fact that people give trust to Gen AI tools, 50% of Israelis trust them more than they trust their friends, many trust them more than they trust human professionals, this puts us closer to AGI. So I would say that it’s a matter of 3 years to 7 years until we reach that milestone.

Mr. Vinayak Godse

So coming to you, Alexandra, how do you see this as a concept? What is leading to this AGI? What would we do that will impact the future of AI and bring this age of AGI in three or seven years of time?

Ms. Alexandra Bech Gjørv

Well, I’m not necessarily subscribing to the time frame. I think that depends on how much money we throw at it. And then there are other things to throw money at as well. Some of this, for example, we had a discussion with my team, you know, are machines able to make complex decisions as fast as humans? And in some areas, like, you know, many operations demand millisecond response and reflex level. You know, you can see that machines are quite good at detecting fire or doing various instinctive things as fast as we are, but the ability to interpret context, emotions, ambiguity, surroundings, body language, etc., that’s still quite far away. They take too long. And in a dynamic environment, you know, a wrong decision or a late decision is really a wrong decision.

So in order to get there, I, you know, there’s both low latency, energy efficient hardware, neuromorphic and edge computing and architectures beyond auto regression. But I think, you know, the researchers in Sintef, I head up the largest research institute in Norway. They, you know, they point to promising like hierarchical reflex reasoning systems, embodied multimodal learning, et cetera, et cetera. And there’s really no real doubt that you will get there. But there’s, in order to have the situational awareness like a human, you have to study a lot of data that would be considered private, personal. So there’s really limits on privacy. And then it triggers a lot of other questions that I’m sure we’ll get into.

Mr. Vinayak Godse

Yeah, we’ll come to that. So, Mr. Kenny, you must be serving many clients right now on AI, right? And every one of us is getting stunned by the progress and acceleration of the capability that is happening week by week, basically, right? And that also scares us about what is coming next, right? And when it comes to that level, there are two words somebody defines AGI by. One is consistency across domains: it will be so general that it will be consistently performing across domains. And the second part is that it will be reliable as well. Currently, sometimes it doesn’t have anything and it throws output, and that’s why hallucination happens, basically. So consistency and reliability, that’s what AGI will bring to the table, basically. So it will solve a lot of the problems that we see right now. We have also been getting stunned by the things that it can do, basically. So there are routes which will lead us to AGI, basically. So what is your perspective on the journey that will probably take us there?

Mr. Kenny Kesar

So, you know, I agree with the panel on a couple of things we talked about in terms of where we’re getting to, models evolving. But you bring up another component of accuracy. I’ll talk about accuracy first, and then I’ll come back to the disruption which is happening in the market. Now, the epitome of accuracy is five nines. So for AI to get from 90% to 99%, it took five to ten years. Now, every nine that you add is another year or two years, to the point where you get to 99.99 and more nines. So every nine that you’re adding has a time frame to it. And with the number of nines that you add, you get closer to general intelligence, because that’s what is going to look like the human brain.

I’ll take the topic of autoregression that you talked about. AI is right now built on regression. It’s built on learnings of the neural network, the neural network maturing on information that it sees. But the human brain is also inventing. It’s researching. So when AI really gets to the point of being able to research and bring new ideas to life like a human brain does, you’re getting closer to intelligence. Now, the disruption in the market that you’ve seen with announcements across the different players which dominate the AI market is creating a disruption in the industry, and I think it’s the right disruption. It’s the disruption that the word processor did to the typewriter, what computers did to the word processor, and what cloud did to the data center.

This is another such disruption, but it’s much faster, because it’s more pervasive and it impacts everybody in life. So the fact is, people are talking about how it translates to them. When I say it translates to me, it’s about how we structure processes. Everybody agrees, and I agree, accuracy is a work in progress. And since accuracy is a work in progress, we have to be really mature about the use cases that we put onto it. We have to look at the human pyramid, what components of the pyramid you’re going to look at. So the way we are advising our clients, and what we’re doing ourselves, is maker jobs, which is basically repetitive jobs with little context.

AI does very well, but create a controller for these autonomous systems. So a combination of probabilistic and deterministic is what’s going to be the near future, as we get to more and more deterministic when we get to general intelligence, because from a human perspective, it’s mostly deterministic.

Mr. Vinayak Godse

Right. Yeah. So thank you all for putting some level of clarity into what this means. So at the end of the day, AGI is, like they say, attention, right? The ability to give attention to all possible things that people, millions and billions of people, are asking questions about. But as you rightly say, the context matters. So it’s not only attention; it should be contextual to your requirements and the things that you do, right? And the third important part is reasoning, and the last six months have been great months for the reasoning it brings to the table, basically. So my question is, and any of you can answer this: for achieving all of these things, why does compute become so important? Why do you need this much compute? Why are there trillions of dollars being invested to make sure that it gives attention to each and every problem better, that it is contextual, that it reasons, and, at the same time, the latency, as I talked about? So the role of compute: what is the role of compute in this? Any of you.

Simonas Cerniauskas

Yeah, so, you know, of course, if I may start, and of course please come in. So currently we are at a super-high cycle, let’s say, of those investments, and most of us are also wondering, is it a bubble, or when will it blow a bit, etc. Is it really, in some cases, sustainable? Every one of us most likely has our own opinion. But still, this race to be, let’s say, number one, this belief that if you are number one you will remain number one, and this momentum, I think, plus huge appetite, all this hype, definitely brings much, much more money to the table than we could ever imagine. And, you know, at the same time, it depends a lot, of course, on the algorithms, how efficient they will be. All of us most likely remember last year’s DeepSeek moment, and there are also other models which are much more efficient. So, you know, at some point we might understand that it’s overestimated, overinvested.

At the same time, I remember one of Zuckerberg’s quotes that said, okay, in the worst-case scenario, I will, you know, have overcapacity for a couple more years and then I will use it.

Mr. Simonas Satunas

So my humble opinion is that compute is one element in a chain of elements and that sometimes we treat this element as the only one. Let’s explore a metaphor. Let’s imagine that we are in the 19th century and a prophet arrives and he tells us, okay, in five years, a new technology will emerge that will enable you to arrive from Delhi to Bangkok in less than an hour. But I don’t know what the technology is. Maybe it’s a ship, maybe it’s a car, maybe it’s a train, maybe it’s an airplane, but we must be prepared. So everyone is trying to be prepared and to build the right infrastructure. So let’s look at the structure. The problem is everyone thinks about it as something else.

So one will build an airport and the other one will build rails and the other one will build boats. I think that we are in this moment. We know that AGI will arrive. We know that it is soon and we know that we must be prepared. Compute is one of the elements that is necessary, but energy is also important, and heating and cooling are also important. Data is extremely important. Implementation is important. Language is important, in India as well. I think that one of the elements that we are not investing enough in is the human element. Think about critical thinking, for example. I don’t know when AGI will arrive, but I know that already now it is very important for us to raise critical thinking among the public.

When you hear something in the news, when you see something, was it made by AI? What is the manipulation that is being forced upon me? So I think that investing in education is not less critical than investing in computing.

Mr. Vinayak Godse

And then another element I want to come to you on, that you talked about: there is a very interesting discussion about System 1 and System 2 thinking. Human System 1 is more intuitive in terms of response, and System 2 is more logical, and AI is probably helping with that, basically. But there is latency; that is an important area, and that’s why they are putting a lot of effort into improving the compute, such that the latency of System 2 thinking is also less, so that your intuitive thinking can improve with that, basically. But it’s not only the compute: the perception, the ambient, the senses, the emotions, all that also matters a lot, and that’s where the limitations of language-based models are getting exposed, basically. And you did talk about that in your initial remarks. Can you just throw light on that?

On the language? On the different type of the models right? Ambient, compute for that matter, world model that people talk about so…

Ms. Alexandra Bech Gjørv

Well, I just wanted to first agree with the previous speaker that, you know, if you are a government, democratic access to compute is a big topic. I think you can really get lost in just investing in compute power, so investing in skills and leading-edge technology understanding in your own country, and participating in the regulatory approach, matters too. Because one of the things that I care about is that everybody says there should be human oversight, but you know that once you get into these dilemma situations, like what should happen in a car accident, humans are not very good at understanding risks, and humans are not very good at really making ethical decisions. They tend to, you know, do their best and then let moral luck decide who gets lost. But in machine-driven systems, you actually have to make decisions about those things. So I think educating also our politicians to know that you have to make the hard choices, because otherwise the machines will make them for you, and they will continue our biases and, you know, it will not end well.

But then I just wanted to share a little story that I heard. Michael Lewis, the guy behind Moneyball and everything, has this anecdote that when the basketball association in the States started video surveillance, the coaches were all making racist decisions and home-team decisions. And by showing the videos and showing the statistics, the next season they couldn’t find any bias at all. So I think that’s a good example of how the machines make people better, where we are not able to better ourselves over time. I just thought this was a nice anecdote for this.

Mr. Vinayak Godse

Thank you. And I’ll come to Kenny. As we are trying to solve the problems of security and privacy in the current generation of AI capabilities, we are still struggling to understand what it means for security and what it means for privacy, and suddenly there is a significant acceleration happening. So what should we be doing for security and privacy right now that could help us graduate to the point when more and more powerful models come in? Can you just help us with that?

Mr. Kenny Kesar

Yeah, I think security, as we evolve, and we talked about compute: compute gets bigger, context gets bigger, we get smarter in terms of what AI can do, and definitely the same AI that can generate can pose more sophisticated attacks. And when we get to AGI, the biggest thing is that AI could be emulating a human. Let’s say in a company, it could emulate a CEO and make a decision, because it is getting so close to being natural. The threat is real. Now, even today, without AI, you need to be just a step ahead of the bad actors, the people who are into cybercrime. You just have to be a step ahead. And similarly, we were mentioning the human portion, right?

The human portion needs to get more educated, and there will be sets of humans who use the same AI to build better agents to fight the attackers. So now it’s a question of the tooling you have at hand. Even today it’s the tools; it’s a human who builds the tools that fight your cyber threats. In the next era, it only gets close to science fiction when agents try locking humans out, and that is, I would say, still science fiction. But the fact is, as we evolve, we need to right-size the solution, and that is also how we will manage compute. You don’t use an i7 processor to do the simple calculator task of adding two numbers, right?

You use a calculator. So in that context, we are going to have SLMs, small language models, that will do the smaller things so that we can manage compute, and you have the bigger models that will tackle the big problems, world hunger even, with different levels of machines and processing. I think there will be tiering. Right now it’s a fight over who’s first, and with the fight to be first, it’s bigger, better, more elaborate. But as it evolves, you’ll get the right-sized fit, and only then will it be commercially viable. AI is not commercially viable today; the costs outweigh the ROI.

Mr. Vinayak Godse

Yeah, the current cost is significantly higher. You can do a POC, but once you put it into a production environment, the token cost is too high relative to the ROI. So, Nir, I want to come to you. There is an established understanding of security, privacy, safety and ethics, and that is the paradigm we at least try to work within right now. But would AGI be an altogether different paradigm, where the concepts of security and privacy are foundationally very different from what we have discussed so far?

Mr. Simonas Satunas

So, as I see it, when we try to deal with the risks that AI poses, we distinguish between four different levels. The first level is the classical risks, like privacy, security, cyber fraud; for every technology we have had since the 90s, we need to explain how it meets the current risks, and AI is much more powerful and poses many more risks, but these are the kinds of risks that we know how to deal with when we design products. Above that there is the level of human health and mental health, and we are finding out that AI solutions can be quite problematic for mental health and can cause a lot of damage in some cases, and this is something that is not yet well understood and investigated. Above that there is a social level.

What does it do to the empathy between people? Normally people say, oh, I see that it’s bad for my kids, they are experiencing bullying or addiction; usually what’s bad for your kids is also bad for you, and we understand that these are complications we didn’t think about when we wrote the code. And the highest level is the macro level: what does it do to society, what does it do to democracy? I think several countries are now experiencing foreign manipulation, and it is very easy to run campaigns built out of fake news, and we see that such manipulation can become very problematic. So I think that a strategy, a national strategy and an international strategy, should address all these levels, and all these levels have mitigations, but they are costly and they need collaboration.

So we need to be in close collaboration in order to mitigate these risks.

Mr. Vinayak Godse

It’s good, the way you put that structure, right? The things it will do to us, to our brains, the things that will impact us individually; we discussed that in one of the sessions we hosted on neuroscience and AI. What does it mean for the brain development process if we are using AI for every small thing we want to do, does the brain development process plateau, what will it mean for society, and then what kind of macro impact will it have? Do you want to add something on that?

Ms. Alexandra Bech Gjørv

Yeah, sorry, I just want to build on that. It’s not just targeted manipulation, or the things that we see in our kids, or somebody walking around with a button called Friend and that’s the only friend you need. It is also, well structured in the geopolitical context, the ability to create completely different information universes. You don’t need to be neurologically strange; you just see a completely different view of the world. We just published a paper in Science on these agent swarms, and I am reading a book about the Ukraine and Russia war going on now and how large populations are overpowered by totally different images of the world from ours. Obviously your defence systems need to be hardened against those kinds of manipulations, but it is also, you know, actually an offensive strategy to find good bots that enter those universes.

It’s an actual battleground in and of itself, and it’s very strange to think about the world in that way, but I think you’re very naive if you don’t start systematically working on how you make your conviction of what the world is like also part of the people that you need to, hopefully not defeat, but relate to and convince that things can be better. So it’s not just a technological challenge; I would say it’s a huge mental leap for most of us.

Mr. Vinayak Godse

So, Simonas, the question is: the more we use AI, the more we become dependent on AI systems, right? And people’s ability to think critically will go down. The speed will increase the dependence, and then AI becomes even more powerful. So, beyond what we see in terms of misinformation, disinformation and deepfakes, there will probably be a different kind of cognitive warfare. How do you see such challenges for society? You talked about society and the individual; what kind of implications will it have for the individual, for society, and overall for the way the world is organised?

Simonas Cerniauskas

Yeah, absolutely. So basically all those layers and all the dependencies, as you rightly stated. Critical thinking of course is one, but also awareness, education and, you know, the skills and abilities for people to understand these things. For this audience, more or less everything is self-evident, but when you start talking to people in the street, or from different backgrounds, you realise that what is self-evident for you might be completely different for another person. Finding ways to educate them, to basically help them identify the threats, is one of the key priorities and also, I would say, an obligation on our side.

Mr. Vinayak Godse

One of the important challenges of critical thinking that I come across is this: critical thinking is nothing but your ability to give attention to various different dimensions, nuances, perspectives and views, right? And it takes a tremendous amount of effort for me to become a critical thinker. AI solves that quite easily for me: it can bring all the attention, all the dimensions, all the nuances, all the viewpoints, and I can quickly get access to them, right? So even for critical thinking, Kenny, this question is for you: we will be depending too much on AI there as well, right? So we need to know the distinction. Critical thinking is not just getting information and giving attention; what, then, is critical thinking?

That is probably a very important question to ask.

Mr. Kenny Kesar

Critical thinking is very necessary for us to innovate further. The biggest issue the AI world is facing is that 30% of the content AI is consuming is already AI-generated. So basically you are feeding it back and it is learning on the same model, when originally it was learning on artifacts built through different thinking processes. So I would say it is both a boon and a risk: a boon because it gets work done, but over time a risk that we will stop evolving, because if we don’t exercise the brain as a muscle, if we don’t exercise it and don’t build those neurons which really drive critical thinking, it will actually be a very big loss to society.

So, general intelligence, everybody is asking for it. Now, how do we make sure that as AI and computers gain general intelligence, we are not losing our own intelligence, the intelligence needed to create that general intelligence in the first place? It’s a vicious cycle. It’s a question we are debating and trying to answer ourselves; everybody has perspectives, but it’s something I think about. Do I have an answer? No. But I feel that critical thinking, on both sides, is something we really need to critically think about.

Mr. Vinayak Godse

Yeah, so for everything you think of as a solution, there is always this challenge of what it means in this new paradigm, and that is important. So now, for the concluding part of this discussion, this is a question for each of you; briefly, we can discuss it. We have been doing security, privacy and safety in a particular way, right? But as this paradigm is new, can we think about some anchor control right now that we should be mindful of? When AI was getting built, only after three years did we start talking about AI governance and all these things. So is there a way for us to think now about some kind of anchor control, some idea, some concept, that could help us navigate the challenges AGI could throw at us? I can start with you, and each of you can comment briefly.

Simonas Cerniauskas

Yeah, well, of course there are some technical things, like, you know, watermarks, labelling and other technical features that could help us a bit to identify at least some of the threats. Then we can also talk about regulatory measures, but that’s a broader topic for further discussion; especially here in Europe we tend to regulate and over-regulate everything. But in a way I think at least some measures here can also be really viable and really reasonable.

Mr. Simonas Satunas

Well, I come from a very small country; Israel is so small that it is like a pin on the map, and therefore our regulatory approach is that we are unable to determine the global regulation, and in this AI race I think what matters more is the global regulation. So since we are a very tiny country, we must work with positive tools and say, okay, we cannot affect the regulation, but how can we work together with the AI developers in order to make the personality of the AI more moral, more ethical? How can we bring egalitarianism and equality into the consideration? How can we avoid bias? And I think that makes us work together with industry and with academia in order to find out about new consequences.

I think that in many cases the giants, the big tech companies, do not aim towards unethical outcomes, but they work towards financial incentives that make AI behave in a very immoral way. If I take, for example, the conflict in Myanmar, in Burma: we saw that Meta was not actively promoting violence in Myanmar, but Meta’s algorithm was designed to attract attention in a way that made the more violent posts much more viral and made violence flourish. So if we are able to promote a dialogue, and if we are able to be together with industry in the development of new AI, sometimes we will be able to make AI more ethical.

Mr. Vinayak Godse

So, Alexandra, your view. One part is the anchor control idea or concept, but the second part is: how do you get into the game early? When AI happened, it is only now, in ’25, ’26, that we are discussing responsibility, alignment, adoption and governance, right? So in the AGI discussion, what are the anchor controls, and what are the ideas and ways for us to get into that discussion early?

Ms. Alexandra Bech Gjørv

Well, I think at least you need to work on resilience and robust rollback mechanisms. A little bit like what we are experiencing now in Europe, where we all have to practise living without electricity. You know it is a realistic option that somebody sabotages your electricity, and then you look at how dependent we really are and what the alternatives are, and you plan from a point of view where you not only work to reduce risk but really work to reduce the consequences of those risks occurring. If you work on the traditional risk matrix, it is always about avoiding bad outcomes; but making the bad outcomes less bad is something that, at least for us, the new realities are propelling, and I think that kind of thinking is important.

Mr. Vinayak Godse

Kenny, your voice on this.

Mr. Kenny Kesar

Sure. Actually, the way we look at it, in terms of AI, from ethical AI to biases to data privacy, it is very similar, akin to what a human would do even today. Today we have standard operating procedures that we review for biases and review for content; in our organisations we have teams that manage this. And the other thing is we train people on ethical practices, on avoiding bias and things like that. So ultimately AI is very similar to that, where, for lack of a better word, I call it AOP instead of SOP, an agent operating procedure or AI operating procedure, where we have to train AI not to be biased.

So I feel there is a big industry in the offing, which is going to manage and create models, LLMs, to validate that the responses from your common models are ethically right and non-biased. Because today, as organisations, we invite experts from outside to come and see our practices, whether we are ethical, whether we are transparent, a number of those things. Very similarly, as we mature towards more general intelligence and more ways of working, I feel these control structures will come in: in cyber security, in the ethical use of AI, in the unbiased use of AI. So ultimately it will be a system of checks and balances, and we will see innovation in these areas.

That is how we see it. It is an evolving area; let’s see how it happens.

Mr. Vinayak Godse

Thank you, all of you, for really helping us understand the meaning of this concept of AGI, how it will pan out from now, and what kind of challenges it will throw at us. There are definitely opportunities, which we did not have time to discuss, in terms of what it will bring to us. But then, what could we start doing right now? This was definitely one of the important conversations, and I hope it helped you understand what we mean when we talk about AGI today. Join me in giving a big hand to my co-panelists for helping us understand. Thank you, Simonas. Thank you, Nir.

Thank you. We have a photo shoot; Alexandra, we need you to come here for the photo shoot. I also request the fireside panelists, Hendrikus sir and Narendra sir, to please join us for the photo shoot. Before we commence the Fireside session, I would like to announce the launch of the AI Cyber Security Terminal, which is published today. Thank you.

Related Resources — Knowledge base sources related to the discussion topics (33)
Factual Notes — Claims verified against the Diplo knowledge base (5)
Confirmed (high)

“Kesar introduced the concept of accuracy progression through “five nines,” explaining that AI evolved from 90% to 99% accuracy over several years and each additional nine requires increasingly longer timeframes.”

The knowledge base explicitly describes Kesar’s “five-nines” accuracy benchmark and the increasing time required for each additional nine of accuracy [S1].

Additional Context (medium)

“Progress toward AGI depends on sustained investment, hardware breakthroughs, data‑privacy and regulatory challenges.”

Long-term sustained investment is highlighted as essential for fundamental research breakthroughs, providing context for the claim about investment dependence [S92].

Additional Context (medium)

“Industry chatter about a speculative bubble in compute spending, with concerns that over‑capacity may persist for years.”

An Alibaba Group chairman argued that AI investment is not a speculative bubble, offering a contrasting perspective that adds nuance to the bubble discussion [S99].

Additional Context (low)

“Mark Zuckerberg has suggested concerns about the compute spending cycle and over‑capacity.”

Zuckerberg publicly stated Meta’s long-term vision is to develop AGI and make it open source, confirming his active involvement in AGI discourse, though the source does not mention a bubble comment [S31].

Confirmed (medium)

“The moderator framed AI research as accelerating rapidly since around 2020 and intensifying after early‑2023 generative model releases.”

The knowledge base notes that artificial intelligence is advancing at a rapid pace, supporting the moderator’s framing of accelerated AI progress [S82].

External Sources (99)
S1
Artificial General Intelligence and the Future of Responsible Governance — – Mr. Kenny Kesar- Ms. Alexandra Bech Gjørv – Mr. Simonas Satunas- Ms. Alexandra Bech Gjørv – Ms. Alexandra Bech Gjørv…
S2
Artificial General Intelligence and the Future of Responsible Governance — -Mr. Vinayak Godse- Moderator/Host of the panel discussion on AGI (Artificial General Intelligence)
S3
Subrata K. Mitra Jivanta Schottli Markus Pauli — Gandhi was vehemently opposed to Partition, an outcome which other senior Congress leaders like Jawaharlal …
S4
Artificial General Intelligence and the Future of Responsible Governance — – Ms. Alexandra Bech Gjørv- Mr. Simonas Satunas – Simonas Cerniauskas- Mr. Simonas Satunas
S5
Artificial General Intelligence and the Future of Responsible Governance — – Mr. Kenny Kesar- Ms. Alexandra Bech Gjørv – Ms. Alexandra Bech Gjørv- Mr. Kenny Kesar
S6
Artificial General Intelligence and the Future of Responsible Governance — – Simonas Cerniauskas- Mr. Simonas Satunas- Mr. Kenny Kesar – Simonas Cerniauskas- Mr. Simonas Satunas- Ms. Alexandra B…
S7
National Disaster Management Authority — “One is the infrastructure layer”[9]. “Second is the operating system layer which runs on top of infrastructure”[62]. “f…
S8
https://dig.watch/event/india-ai-impact-summit-2026/artificial-general-intelligence-and-the-future-of-responsible-governance — It’s an actual battleground in and of itself, and it’s very strange to think about the world in that way, but I think yo…
S9
Expert workshop on the right to privacy in the digital age — Ms Fanny Hidvégi, European policy manager at Access Now, Brussels, highlighted the actions taken by states. She started …
S10
High-Level session: Building and Financing Resilient and Sustainable Global Supply chains and the Role of the Private Sector — Such an assembly of varied views yields a well-rounded array of approaches, potentially leading to more nuanced and robu…
S11
Launch / Award Event #52 Intelligent Society Development & Governance Research — AI changes the way in which knowledge is created, transmitted, and verified. Misinformation and disinformation will beco…
S12
Breaking the Fake in the AI World: Staying Smart in the Age of Misinformation, Disinformation, Hate, and Deepfake — ## Government Perspectives – **Carol Constantine** – Human resources technology company representative AHM Bazlur Rahm…
S13
Parallel Session A9: Climate Change Adaptation, Resilience-Building and DRR for Ports (continued) — In summary, the positive sentiment surrounding the shared experiences and strategies represents a constructive, forward-…
S14
AI for Democracy_ Reimagining Governance in the Age of Intelligence — Nor do we hold identical views on democratic institutions. Thank you. We face a choice, either we step back or allow the…
S15
https://dig.watch/event/india-ai-impact-summit-2026/ai-safety-at-the-global-level-insights-from-digital-ministers-of — continue rapidly for policymakers across the globe to rely on an independent scientific assessment of what AI can do and…
S16
Revisiting 10 AI and digital forecasts for 2025: Predictions and Reality — Demis Hassabis on AGI Development:Demis Hassabis, CEO of Google DeepMind, predicts that Artificial General Intelligence …
S17
Indias Roadmap to an AGI-Enabled Future — Absolutely. In power sector, we use a lot of electronics. For example, I gave you a small example of IGBT. IGBT is again…
S18
World Economic Forum Panel Discussion: Global Economic Growth in the Age of AI — Professional experience analyzing various risks including cyber, environmental, and health risks, with observation that …
S19
Folding Science / DAVOS 2025 — Artificial General Intelligence (AGI) Development Hassabis believes that one or two major breakthroughs are still neede…
S20
Keynote-António Guterres — “Our target is 3 billion US dollars.”[29]”That is why, encouraged by the General Assembly of the United Nations, I am ca…
S21
Keynote-Sundar Pichai — Or in India, where a work -together is helping farmers. protect their livelihoods in the face of monsoons. Last summer, …
S22
Ethics and AI | Part 4 — Damage to information integrity (mis/disinformation, impersonation) Human rights violations Violation of intellectual …
S23
9821st meeting — The UK highlights the potential risks associated with AI, particularly in the areas of autonomous weapons and cyber atta…
S24
WS #123 Responsible AI in Security Governance Risks and Innovation — Cybersecurity | Network security Technical Challenges and Risks
S25
Rethinking learning: Hope, solutions, and wisdom with AI in the classroom — Suppose AI (as with previous technologies) frees educators from focusing solely on repetitive memorisation and routine p…
S26
Education meets AI — It was acknowledged that critical thinking enables individuals to analyse information critically, question assumptions, …
S27
Smart Regulation Rightsizing Governance for the AI Revolution — The discussion began with a notably realistic and somewhat pessimistic assessment of global cooperation challenges, but …
S28
Global AI Policy Framework: International Cooperation and Historical Perspectives — So global principles are very important, but implementation must account for national contexts and capacities, as you we…
S29
What is it about AI that we need to regulate? — A recurring theme was the need for shared principles rather than uniform solutions.Paula Gori articulated this approach:…
S30
Indias Roadmap to an AGI-Enabled Future — And at a certain volume of production that it has to be done. So, which means that resources have to be deployed in a ma…
S31
Meta joins the tech giants’ race for AGI — Meta, the parent company of Facebook, has entered the race for Artificial General Intelligence (AGI).Meta CEO Mark Zucke…
S32
Artificial General Intelligence and the Future of Responsible Governance — Massive compute investment is driven by the race to be first, though efficiency improvements may reduce requirements Sp…
S33
Presentation of outcomes to the plenary — This aligns with SDGs 13 and 14, which call for climate action and the conservation of marine life. Overall, the compreh…
S34
TECHNICAL SPECIFICATION — This Technical Specification examines electronic patient record systems at the clinical point of care that are also inte…
S35
EU Digital Diplomacy: Geopolitical shift from focus on values to economic security  — ‘Human‑centric’ language still appears, but under resilience. Explicit human rights advocacy, such as protections for di…
S36
Crypto hiring snaps back as AI cools — Tech firms led crypto’s hiring rebound, adding over 12,000 roles since late 2022, according toA16z’s State of Crypto 202…
S37
Wrap up — These key comments fundamentally reframed the discussion from typical technology policy debates to deeper philosophical …
S38
INCREASING ACCESS TO DATA ACROSS THE ECONOMY — Estimating the economic activity potentially in scope allows us to rank the levers according to their potential impact. …
S39
Leveraging AI to Support Gender Inclusivity | IGF 2023 WS #235 — The analysis of the speeches reveals several significant findings. Firstly, it highlights that AI can eliminate unintent…
S40
Artificial intelligence (AI) – UN Security Council — In addition, there is a call forcontinuous education and awareness raisingabout AI’s capabilities and limitations. Educa…
S41
Emerging Shadows: Unmasking Cyber Threats of Generative AI — Furthermore, influence operations have been conducted to spread discord and misinformation. The rapid evolution of techn…
S42
Launch / Award Event #52 Intelligent Society Development & Governance Research — AI changes the way in which knowledge is created, transmitted, and verified. Misinformation and disinformation will beco…
S43
AI: The Great Equaliser? — Another key point highlighted is the need for good governance to effectively manage the risks associated with AI. The ri…
S44
Folding Science / DAVOS 2025 — Mentions that AGI development may take a five-year timescale rather than the one or two years some are predicting. Time…
S45
Comprehensive Discussion Report: The Future of Artificial General Intelligence — The session examined critical questions surrounding the timeline for achieving Artificial General Intelligence (AGI) and…
S46
Practical Toolkits for AI Risk Mitigation for Businesses — In conclusion, the analysis recognizes the immense potential of AI technology but stresses the need to govern and regula…
S47
Education meets AI — In addition to the above topics, the significance of critical information and critical thinking in education was also di…
S48
Artificial intelligence (AI) and cyber diplomacy — The conversation expanded to highlight the universal need for digital literacy and capacity building in AI, urging gover…
S49
Revisiting 10 AI and digital forecasts for 2025: Predictions and Reality — Reality check for the artificial general intelligence (AGI) narrative:Since the launch of ChatGPT in November 2022, ther…
S50
Artificial General Intelligence and the Future of Responsible Governance — Mr. Kenny Kesar introduced the concept of accuracy progression through “five nines,” explaining that while AI evolved fr…
S51
The Dawn of Artificial General Intelligence? / DAVOS 2025 — In summary, the discussion emphasized the complex challenges and opportunities presented by AGI development, with no cle…
S52
Keynote-Jeet Adani — Adani announced that “earlier this week, the chairman of the Adani Group made one of the most transformative announcemen…
S53
Driving Indias AI Future Growth Innovation and Impact — Energy infrastructure investment critical for compute infrastructure development
S54
Ethics in the Age of AI — The ethical concerns raised by AI technology are diverse and far-reaching. The four main concerns discussed in the provi…
S55
Ethics and AI | Part 4 — Damage to information integrity (mis/disinformation, impersonation) Human rights violations Violation of intellectual …
S57
Rethinking learning: Hope, solutions, and wisdom with AI in the classroom — Suppose AI (as with previous technologies) frees educators from focusing solely on repetitive memorisation and routine p…
S58
WSIS Action Line C6: Digital Ecosystem Builders in action: Redefining the role of ICT regulators — This comment provides a crucial balance to the technology-focused discussion by emphasizing that human elements remain c…
S59
Smart Regulation Rightsizing Governance for the AI Revolution — The discussion began with a notably realistic and somewhat pessimistic assessment of global cooperation challenges, but …
S60
WS #98 Towards a global, risk-adaptive AI governance framework — During the Q&A session, the importance of standards in AI governance was discussed. Speakers highlighted the need for te…
S61
Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law (Explanatory Report) — 59.          The provision also provides for measures with regards to the identification of AI-generated content in orde…
S62
What is it about AI that we need to regulate? — A recurring theme was the need for shared principles rather than uniform solutions.Paula Gori articulated this approach:…
S63
Opening of the session — Canada: Thank you, Chair. We thank you for your efforts in seeking to devote tomorrow to the discussions that are necess…
S64
Opening of the session — – Ensuring the mechanism is action-oriented and needs-driven – Focusing on policy-oriented and cross-cutting thematic g…
S65
Opening remarks — Good morning, esteemed guests and participants. Today, we are gathered at the NET Mundial Plus 10 event to celebrate the…
S66
Any other business /Adoption of the report/ Closure of the session — The statement offers a sense of success and a forward-looking optimism, referencing a soon-to-occur resumed session. Thi…
S67
Opening of the session — Convergence necessary for progress with limited time. In summary, the analysis distils into a narrative that intertwine…
S68
AI and Digital Developments Forecast for 2026 — The tone begins as analytical and educational but becomes increasingly cautionary and urgent throughout the conversation…
S69
Conversational AI in low income & resource settings | IGF 2023 — Sameer Pujari:Thank you, Rajendra. And thanks for sitting on this forum. I think it’s a very interesting discussion, esp…
S70
Workshop 8: How AI impacts society and security: opportunities and vulnerabilities — Remote moderator: We actually have two questions online. The first one is from Antonina Cherevko. But security essential…
S71
Comprehensive Summary: AI Governance and Societal Transformation – A Keynote Discussion — The tone begins confrontational and personal as Hunter-Torricke distances himself from his tech industry past, then shif…
S72
WS #187 Bridging Internet AI Governance From Theory to Practice — The discussion maintained a thoughtful but increasingly cautious tone throughout. It began optimistically, with speakers…
S73
Agenda item 5: discussions on substantive issues contained in paragraph 1 of General Assembly resolution 75/240 (continued)/3/OEWG 2025 — North Macedonia: Distinguished Chair, esteemed delegates, North Macedonia aligns itself with the statement of European…
S74
Agenda item 5: Day 1 Afternoon session — A victim-focused framework will highlight the humanitarian impact of cyberattacks, fostering a more empathetic and compr…
S75
Agenda item 5: Day 2 Morning session — Belarus pledged steadfast backing for the Group’s initiatives and lauded the leadership’s competency in guiding the Grou…
S76
Agenda item 5: discussions on substantive issues contained in paragraph 1 of General Assembly resolution 75/240 part 5 — Pakistan: Thank you, Chair. Let me take this opportunity to commend the work done by you and your team in confidence-…
S77
Wrap up — High level of consensus on core principles with nuanced understanding of implementation challenges. The agreement spans …
S78
Transforming Agriculture_ AI for Resilient and Inclusive Food Systems — The tone was consistently optimistic yet pragmatic throughout the conversation. Speakers maintained an encouraging outlo…
S79
Parliamentary Closing Closing Remarks and Key Messages From the Parliamentary Track — High level of consensus with constructive engagement. While there were some specific reservations raised (particularly a…
S80
How to Project Europe’s Power / Davos 2025 — The tone was largely pragmatic and solution-oriented, with speakers acknowledging challenges but focusing on concrete st…
S81
Swiss AI Initiatives and Policy Implementation Discussion — The discussion maintained a professional, collaborative tone throughout, with speakers presenting both opportunities and…
S82
Open Forum: A Primer on AI — Artificial Intelligence is advancing at a rapid pace
S83
Keynote-Rishad Premji — “And they are the pioneers and the thought leaders of artificial intelligence.”[13] Artificial intelligence Opening fr…
S84
Opening of the session/OEWG 2025 — El Salvador: Thank you, Chairman. In line with the opening words, El Salvador hopes to provide comments to all the dif…
S85
A Global Human Rights Approach to Responsible AI Governance | IGF 2023 WS #288 — Pratek Sibal: Thanks Ian. How much time do I have? You have five to six minutes, but there’s no rush. I wanna hear your c…
S86
Skilling and Education in AI — The tone was cautiously optimistic throughout. Speakers acknowledged both the tremendous opportunities AI presents for I…
S87
Scaling Trusted AI_ How France and India Are Building Industrial & Innovation Bridges — And it’s very useful. It’s used to benchmark applications and performance on quantum computers and using AI techniques a…
S88
Scaling AI for Billions_ Building Digital Public Infrastructure — “Because trust is starting to become measurable, right, through provenance, through authenticity, as well as verificatio…
S89
Election integrity in the digital age: insights from IGF 2024 — Election integrity and disinformation have been closely followed topics during the session ‘Internet governance and elec…
S90
Democratizing AI Building Trustworthy Systems for Everyone — And so there are different in quotes, markets here at UL. People who can pay at different levels. Even within a country …
S91
ETHIO PA 2025 — How likely is it that jobs will be lost to automation in the manufacturing sector in the Fourth Industri…
S92
Science as a Growth Engine: Navigating the Funding and Translation Challenge — Long-term sustained investment is essential for fundamental research breakthroughs
S93
HIGH LEVEL LEADERS SESSION I — Microsoft’s deep investment in this area demonstrates the company’s commitment to harnessing the power of data for posit…
S94
Main Topic 2 –  European approach on data governance  — Emphasising data’s critical role as the lifeblood of the digital economy, the speaker cautioned about the risks associat…
S95
Main Session on Sustainability & Environment | IGF 2023 — Maike Lukien: So policymakers, same as us, can never have too much information to base evidence-based decisions on. The o…
S96
Pre 10: Regulation of Autonomous Weapon Systems: Navigating the Legal and Ethical Imperative — This comment transformed the tone of the entire discussion, legitimizing disagreement and uncertainty as valuable rather…
S97
Workshop 3: Quantum Computing: Global Challenges and Security Opportunities — Mattingley-Scott stresses the urgency of taking action immediately, even though the exact timeline for when quantum thre…
S98
HUMANITARIAN NEGOTIATION — Some societies tolerate higher levels of ambiguity and uncertainty than others. In negotiations, this means that, while …
S99
AI investment shows strong momentum beyond bubble fears — AI investment is not showing signs of a speculative bubble, according to the Alibaba Group chairman. Instead, he argued at t…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Ms. Alexandra Bech Gjørv
5 arguments · 148 words per minute · 942 words · 380 seconds
Argument 1
Need for massive, low‑latency, energy‑efficient hardware (neuromorphic, edge) to achieve human‑like situational awareness
EXPLANATION
She argues that achieving AGI requires specialized hardware that can process information with very low latency and high energy efficiency, such as neuromorphic and edge computing architectures, to match human reflexes and situational awareness.
EVIDENCE
She describes that many operations demand millisecond-level response, noting machines can already detect fire quickly but still lack the ability to interpret context, emotions, and body language, which requires low-latency, energy-efficient hardware like neuromorphic and edge computing [26-33].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Technical requirements for low-latency, energy-efficient hardware are highlighted in [S1]; infrastructure-layer considerations for such hardware are discussed in [S7].
MAJOR DISCUSSION POINT
Hardware requirements for AGI
DISAGREED WITH
Mr. Kenny Kesar, Mr. Simonas Satunas
Argument 2
Access to personal data needed for true situational awareness creates privacy limits
EXPLANATION
She points out that to give AI human‑like situational awareness, massive amounts of personal and private data must be collected, which raises significant privacy concerns and limits.
EVIDENCE
She states that achieving situational awareness requires studying a lot of data that would be considered private, personal, and that this creates real limits on privacy [35-37].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The tension between extensive data collection for AGI and privacy constraints is described in [S1]; concrete legislative examples concerning privacy and backdoors are provided in [S9].
MAJOR DISCUSSION POINT
Privacy constraints on data collection for AI
AGREED WITH
Mr. Simonas Satunas
Argument 3
Emphasis on robust rollback mechanisms and system resilience to mitigate failures
EXPLANATION
She emphasizes the need for resilience strategies such as rollback mechanisms and risk‑matrix planning to reduce the impact of AI failures, drawing an analogy to living without electricity as a test of system robustness.
EVIDENCE
She suggests working on resilience and robust rollback mechanisms, likening it to practicing living without electricity to understand dependence and planning for alternative solutions, thereby reducing the severity of bad outcomes [187-189].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Resilience and rollback mechanisms are emphasized in [S1]; parallels to broader infrastructure resilience are drawn in [S13]; a holistic approach to supply-chain and system resilience is outlined in [S10].
MAJOR DISCUSSION POINT
Resilience and rollback in AI governance
AGREED WITH
Mr. Simonas Satunas, Mr. Kenny Kesar
DISAGREED WITH
Mr. Simonas Cerniauskas, Mr. Kenny Kesar, Mr. Simonas Satunas
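The risk-matrix thinking she describes can be sketched in code. This is a minimal illustration only: the 1-5 likelihood and severity scales and the example register entries are our assumptions for demonstration, not anything the panel specified.

```python
from dataclasses import dataclass

@dataclass
class Risk:
    name: str
    likelihood: int  # 1 (rare) .. 5 (almost certain) -- illustrative scale
    severity: int    # 1 (minor) .. 5 (catastrophic) -- illustrative scale

    @property
    def score(self) -> int:
        # Classic risk-matrix score: likelihood x severity
        return self.likelihood * self.severity

def prioritise(risks: list) -> list:
    """Order risks so mitigation effort (e.g. rollback plans) targets the worst first."""
    return sorted(risks, key=lambda r: r.score, reverse=True)

# Hypothetical register entries, for illustration only
register = [
    Risk("model outage with no rollback path", likelihood=3, severity=5),
    Risk("biased output in a low-stakes tool", likelihood=4, severity=2),
    Risk("training-data leak via logs", likelihood=2, severity=4),
]

for r in prioritise(register):
    print(f"score {r.score:2d}: {r.name}")
```

Her point is the second half of the matrix: beyond ranking likelihood, rollback planning lowers the severity column, so even the risks that do materialise land less hard.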
Argument 4
AI‑driven misinformation, cognitive warfare, and creation of divergent information universes threaten societal cohesion
EXPLANATION
She warns that AI can be used to generate large‑scale misinformation campaigns that create separate reality bubbles, which can be weaponised geopolitically and affect public perception.
EVIDENCE
She references a paper on agent swarms that shows how AI can create completely different information universes, citing the Ukraine-Russia war as an example of populations being overpowered by divergent narratives, and notes the need for defensive measures against such manipulation [144-149].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
AI-enabled misinformation and the creation of separate information universes are discussed in [S11]; agent-swarm information warfare is detailed in [S12]; these concerns are also referenced in [S1].
MAJOR DISCUSSION POINT
Misinformation and information warfare via AI
AGREED WITH
Mr. Simonas Satunas, Mr. Kenny Kesar
Argument 5
Building resilience through risk‑matrix planning, rollback strategies, and reducing consequence severity is a proactive governance approach
EXPLANATION
She reiterates that proactive governance should focus on minimizing the impact of risks by planning for contingencies, using risk matrices, and ensuring that any adverse outcomes are less severe.
EVIDENCE
She repeats the importance of a risk-matrix approach that not only avoids bad outcomes but also makes the consequences of any failures less severe, describing this as a new reality-driven way of thinking [187-189].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The risk-matrix planning approach is presented in [S1]; systemic resilience strategies are explored in [S10]; practical resilience planning examples are given in [S13].
MAJOR DISCUSSION POINT
Proactive risk management for AI systems
Mr. Vinayak Godse
1 argument · 104 words per minute · 1988 words · 1138 seconds
Argument 1
Uncertainty and need for societal preparedness
EXPLANATION
He stresses that while AI advancements have accelerated, there remains uncertainty about when AGI will arrive, and societies must prepare now to avoid missing the opportunity to govern it effectively.
EVIDENCE
He notes the rapid AI developments since 2020, the growing discussion around AGI, and warns that failing to pay attention now could cause us to miss the chance to discuss, govern, and manage AGI over the next 2-10 years [1-7].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The call for proactive, science-based assessment and governance of emerging AI capabilities is made in [S15]; broader governance imperatives are echoed in [S14].
MAJOR DISCUSSION POINT
Preparedness for AGI
DISAGREED WITH
Mr. Simonas Satunas, Ms. Alexandra Bech Gjørv
Mr. Simonas Satunas
7 arguments · 161 words per minute · 1149 words · 426 seconds
Argument 1
Simple functional definition: AI that can perform any human task at professional level
EXPLANATION
He defines AGI as an AI system capable of executing every human task with the accuracy and professionalism of a human expert.
EVIDENCE
He states that AGI would be able to perform every human task at a professional level, acknowledging that the definition is not optimal but serves as a digestible baseline [21].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
A concise functional definition of AGI matching this description is provided in [S1].
MAJOR DISCUSSION POINT
Definition of AGI
Argument 2
Timeline estimate: AGI could emerge within 3–7 years
EXPLANATION
He predicts that AGI may be achieved in a timeframe of three to seven years based on current trends and public perception of generative AI.
EVIDENCE
He mentions that many Israelis now trust generative AI more than friends, indicating a shift toward AGI, and estimates a 3-7 year horizon for reaching the milestone [21].
MAJOR DISCUSSION POINT
Projected timeline for AGI
DISAGREED WITH
Ms. Alexandra Bech Gjørv, Mr. Vinayak Godse
Argument 3
Compute is one element among many (data, energy, human skills) in the AGI supply chain
EXPLANATION
He argues that while compute is essential, other factors such as data, energy, and especially human critical‑thinking skills are equally important for achieving AGI.
EVIDENCE
He uses a 19th-century metaphor about preparing infrastructure for an unknown technology, then lists compute, energy, data, implementation, language, and the under-invested human element like critical thinking as crucial components [72-90].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
A holistic view of the AGI supply chain that includes compute, data, energy, and human critical-thinking skills is outlined in [S1].
MAJOR DISCUSSION POINT
Holistic view of AGI requirements
AGREED WITH
Mr. Kenny Kesar
DISAGREED WITH
Mr. Kenny Kesar, Ms. Alexandra Bech Gjørv
Argument 4
Classical risks (privacy, cyber‑fraud) plus higher‑level risks to mental health, social cohesion, and democracy must be addressed
EXPLANATION
He categorises AI risks into four layers: traditional security and privacy concerns, mental‑health impacts, social‑level effects on empathy and bullying, and macro‑level threats to democracy and manipulation.
EVIDENCE
He outlines four risk levels: classical (privacy, cyber-fraud), mental health, social (empathy, bullying), and macro (democracy, foreign manipulation), and calls for national and international strategies to mitigate them [131-138].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
A four-layer risk taxonomy covering privacy, mental-health, social, and macro-societal threats is described in [S1] and reinforced by the risk-level discussion in [S18].
MAJOR DISCUSSION POINT
Multi‑layered AI risk taxonomy
AGREED WITH
Ms. Alexandra Bech Gjørv, Mr. Kenny Kesar
Argument 5
High public trust in generative AI may erode critical thinking and mental health
EXPLANATION
He observes that a large proportion of people trust generative AI more than human peers, which could diminish critical thinking abilities and affect mental well‑being.
EVIDENCE
He cites that 50 % of Israelis trust generative AI tools more than their friends, suggesting a shift that brings society closer to AGI but may also reduce critical thinking [21].
MAJOR DISCUSSION POINT
Impact of AI trust on cognition
Argument 6
Small nations should pursue global regulation and collaborate with industry to embed ethics, equality, and bias mitigation
EXPLANATION
He argues that tiny countries like Israel cannot dictate global AI rules alone, so they must work with industry and academia to promote ethical, egalitarian AI development and avoid bias.
EVIDENCE
He explains Israel’s limited regulatory power, the need for global regulation, and the importance of collaborating with AI developers to embed morality, equality, and bias mitigation, giving the Myanmar example where Meta’s algorithm amplified violent content [174-180].
MAJOR DISCUSSION POINT
Role of small states in AI governance
Argument 7
Education, awareness, and critical‑thinking skills are essential to recognise and counter AI‑induced threats
EXPLANATION
He stresses that educating the public and raising awareness are vital for people to identify AI‑driven threats and develop critical‑thinking capabilities.
EVIDENCE
He notes the need to educate people to identify threats, emphasizing that what is self-obvious to one may be unknown to another, and calls for education as a key priority [154-155].
MAJOR DISCUSSION POINT
Importance of AI literacy
AGREED WITH
Mr. Kenny Kesar, Mr. Simonas Cerniauskas
Mr. Kenny Kesar
4 arguments · 156 words per minute · 1299 words · 497 seconds
Argument 1
Accuracy progression (from 90 % to the five-nines level) drives compute growth and moves toward AGI
EXPLANATION
He explains that improving AI accuracy from 90 % to the five-nines level requires incremental compute investment, and that each additional ‘nine’ adds years, bringing AI closer to human-like intelligence.
EVIDENCE
He describes the five-nines accuracy goal, noting that moving from 90 % to 99 % took five to ten years and that each additional nine adds another one to two years, linking higher accuracy to progress toward AGI [44-48].
MAJOR DISCUSSION POINT
Accuracy as a driver for compute and AGI
AGREED WITH
Mr. Simonas Satunas
DISAGREED WITH
Mr. Simonas Satunas, Ms. Alexandra Bech Gjørv
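The ‘nines’ ladder he refers to can be made concrete with a little arithmetic: each additional nine cuts the permitted error rate by a factor of ten. A minimal sketch (the errors-per-million framing is our illustration, not the speaker’s):

```python
def nines_to_error_rate(nines: int) -> float:
    """Accuracy with `nines` nines (e.g. 2 -> 99 %) expressed as an error rate."""
    return 10.0 ** (-nines)

for n in range(1, 6):
    accuracy = 100.0 * (1.0 - nines_to_error_rate(n))
    errors_per_million = 1_000_000 * nines_to_error_rate(n)
    print(f"{n} nine(s): {accuracy:.4f} % accurate, "
          f"{errors_per_million:,.0f} errors per million operations")
```

Going from two nines (99 %) to five nines (99.999 %) shrinks the tolerated error budget a thousandfold, which is why, on his account, each extra nine has historically cost years of engineering effort.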
Argument 2
AI can generate sophisticated attacks and impersonate humans, raising new security threats
EXPLANATION
He warns that as AI becomes more capable, it can be used to launch advanced cyber‑attacks and mimic human decision‑makers, creating serious security challenges.
EVIDENCE
He states that AI capable of generating content can also produce sophisticated attacks and emulate a CEO’s decisions, highlighting the real threat of AI-driven impersonation [105-108].
MAJOR DISCUSSION POINT
Emerging AI‑enabled security threats
Argument 3
Education, awareness, and critical‑thinking skills are essential to recognise and counter AI‑induced threats
EXPLANATION
He argues that critical thinking is necessary to avoid a feedback loop where AI‑generated content dominates, which could stall human cognitive development.
EVIDENCE
He notes that 30 % of content is already AI-generated, creating a risk of a vicious cycle that hampers human critical thinking and innovation, and calls for education to maintain human intelligence alongside AI [164-170].
MAJOR DISCUSSION POINT
Critical thinking as a safeguard against AI over‑reliance
AGREED WITH
Mr. Simonas Satunas, Mr. Simonas Cerniauskas
Argument 4
Development of AI Operating Procedures (AOP) analogous to SOPs, including bias audits and ethical training, will become standard practice
EXPLANATION
He proposes that organisations will adopt AI‑specific operating procedures—AOPs—to systematically audit bias, ensure ethical use, and manage AI lifecycle similarly to traditional SOPs.
EVIDENCE
He describes current SOP-like reviews for bias and content, the training of staff on ethical practices, and envisions future AOPs that validate AI responses for ethics and bias, predicting an emerging industry around such controls [191-197].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The move toward standardized AI governance frameworks, including bias audits and ethical training, aligns with emerging ethical AI guidelines discussed in [S14].
MAJOR DISCUSSION POINT
Institutionalizing AI governance through AOPs
AGREED WITH
Ms. Alexandra Bech Gjørv, Mr. Simonas Satunas
DISAGREED WITH
Mr. Simonas Cerniauskas, Ms. Alexandra Bech Gjørv, Mr. Simonas Satunas
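What an AOP check of the kind he envisions might look like can be sketched as a simple checklist runner, by analogy to an SOP. Everything here (the function names, the banned-terms list, the labeling rule) is a hypothetical illustration of the idea, not an implementation of any existing standard.

```python
from typing import Callable

Check = Callable[[str], bool]

def no_banned_claims(text: str) -> bool:
    # Hypothetical audit list of over-claims a bias/content review might flag
    banned = ("guaranteed cure", "risk-free")
    return not any(term in text.lower() for term in banned)

def has_ai_disclosure(text: str) -> bool:
    # Hypothetical labeling rule: outputs must disclose they are AI-generated
    return "ai-generated" in text.lower()

AOP_CHECKS: dict[str, Check] = {
    "bias/claims audit": no_banned_claims,
    "labeling requirement": has_ai_disclosure,
}

def run_aop(response: str) -> dict[str, bool]:
    """Run every AOP check against a model response; all must pass before release."""
    return {name: check(response) for name, check in AOP_CHECKS.items()}

print(run_aop("This summary is AI-generated and was reviewed by staff."))
```

The design mirrors an SOP sign-off sheet: each check is a named, auditable gate, and new gates (bias audits, ethics reviews) can be added to the registry without touching the release logic.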
Mr. Simonas Cerniauskas
4 arguments · 132 words per minute · 632 words · 286 seconds
Argument 1
Broad definition: smarter AI that reasons, learns, adapts and transfers knowledge, not narrow
EXPLANATION
He outlines a common agreement that AGI must be a more general form of AI capable of reasoning, learning, adapting, and transferring knowledge across domains, unlike today’s narrow AI applications.
EVIDENCE
He lists reasoning, learning, adaptation, knowledge transfer, and breadth beyond narrow domains as the core attributes of AGI [12-18].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Core AGI characteristics (reasoning, learning, adaptation, and knowledge transfer) are enumerated in [S1].
MAJOR DISCUSSION POINT
Core characteristics of AGI
Argument 2
Current investment surge may be over‑invested; risk of a bubble
EXPLANATION
He observes that massive funding into AI may be unsustainable, questioning whether the hype will lead to a bubble or over‑investment.
EVIDENCE
He remarks that we are in a super-high investment cycle, that many wonder whether it is a bubble, and that past over-capacity (citing Zuckerberg) may fuel concerns of over-investment [70-71].
MAJOR DISCUSSION POINT
Potential AI investment bubble
Argument 3
Early “anchor controls” such as labeling, technical safeguards, and regulatory frameworks are needed to guide AI development
EXPLANATION
He suggests that initial control mechanisms—like labeling AI outputs and establishing regulatory measures—are essential to identify threats and steer AI development responsibly.
EVIDENCE
He mentions technical tools such as labeling and other safeguards, and notes that European regulatory approaches could serve as viable examples for early controls [173-176].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Early control mechanisms like labeling and regulatory safeguards are advocated in [S1]; European regulatory approaches exemplify such early controls in [S11].
MAJOR DISCUSSION POINT
Pre‑emptive AI governance tools
DISAGREED WITH
Mr. Simonas Cerniauskas, Ms. Alexandra Bech Gjørv, Mr. Kenny Kesar, Mr. Simonas Satunas
Argument 4
Education, awareness, and critical‑thinking skills are essential to recognise and counter AI‑induced threats
EXPLANATION
He emphasizes that educating the public and raising awareness are crucial for people to detect AI‑generated threats and develop critical‑thinking abilities.
EVIDENCE
He stresses the need to educate people to identify threats, pointing out that what is self-obvious to one may be unknown to another, and calls for education as a priority [154-155].
MAJOR DISCUSSION POINT
AI literacy as a defensive measure
AGREED WITH
Mr. Simonas Satunas, Mr. Kenny Kesar, Mr. Simonas Cerniauskas
Agreements
Agreement Points
Education, awareness and critical‑thinking skills are essential to recognise and counter AI‑induced threats
Speakers: Mr. Simonas Satunas, Mr. Kenny Kesar, Mr. Simonas Cerniauskas
Education, awareness, and critical‑thinking skills are essential to recognise and counter AI‑induced threats
All three panelists stress that building AI literacy, raising public awareness and fostering critical thinking are prerequisite measures to identify and mitigate AI-driven threats, from misinformation to security risks [154-155][164-170].
POLICY CONTEXT (KNOWLEDGE BASE)
The UN Security Council emphasizes continuous AI education and awareness-raising to empower stakeholders [S40]; IGF-related discussions highlight the need for critical-thinking curricula in schools [S47]; and cyber-diplomacy forums call for digital-literacy programmes to build societal resilience to AI threats [S48].
Structured risk management, resilience and rollback mechanisms are needed to mitigate AI‑related harms
Speakers: Ms. Alexandra Bech Gjørv, Mr. Simonas Satunas, Mr. Kenny Kesar
Emphasis on robust rollback mechanisms and system resilience to mitigate failures
Classical risks (privacy, cyber‑fraud) plus higher‑level risks to mental health, social cohesion, and democracy must be addressed
Development of AI Operating Procedures (AOP) analogous to SOPs, including bias audits and ethical training, will become standard practice
The speakers converge on the need for formal risk-management frameworks – from resilience and rollback planning (Alexandra) to a layered risk taxonomy (Satunas) and institutionalised AI Operating Procedures (Kesar) – to keep AI systems safe and accountable [187-189][131-138][191-197].
POLICY CONTEXT (KNOWLEDGE BASE)
Policy briefs on AI governance stress the importance of resilience and rollback capabilities as part of risk-management frameworks [S33]; AI governance reports call for robust structures to address misinformation and surveillance risks [S43]; and practical toolkits for businesses recommend regulatory safeguards and contingency plans [S46].
Privacy constraints limit the data needed for true situational awareness in AGI
Speakers: Ms. Alexandra Bech Gjørv, Mr. Simonas Satunas
Access to personal data needed for true situational awareness creates privacy limits
Classical risks (privacy, cyber‑fraud) plus higher‑level risks to mental health, social cohesion, and democracy must be addressed
Both panelists highlight privacy as a fundamental barrier to collecting the personal data required for human-like situational awareness, framing it as a classic AI risk that must be managed [35-37][131].
POLICY CONTEXT (KNOWLEDGE BASE)
Technical specifications for health data underline privacy-by-design limits on data sharing that affect situational awareness [S34]; broader analyses of data-access levers note that privacy regulations can restrict the flow of high-quality data needed for AGI development [S38]; EU digital diplomacy documents also discuss the tension between privacy and security objectives [S35].
Compute power is crucial for AI progress but must be complemented by data, energy and human skills
Speakers: Mr. Simonas Satunas, Mr. Kenny Kesar
Compute is one element among many (data, energy, human skills) in the AGI supply chain
Accuracy progression (from 90 % to the five-nines level) drives compute growth and moves toward AGI
Both agree that while increasing compute is a driver of higher AI accuracy and a step toward AGI, it is only one piece of a broader ecosystem that includes data, energy and human expertise [72-90][44-48].
POLICY CONTEXT (KNOWLEDGE BASE)
Experts in responsible AI governance argue that while massive compute drives progress, a holistic mix of energy efficiency, data quality and skilled personnel is essential [S32]; India’s AGI roadmap stresses coordinated resource deployment beyond raw compute [S30]; and economic analyses of data levers highlight the complementary role of data and energy resources [S38].
AI‑generated misinformation and manipulation pose serious societal risks
Speakers: Ms. Alexandra Bech Gjørv, Mr. Simonas Satunas, Mr. Kenny Kesar
AI‑driven misinformation, cognitive warfare, and creation of divergent information universes threaten societal cohesion
Classical risks (privacy, cyber‑fraud) plus higher‑level risks to mental health, social cohesion, and democracy must be addressed
30 % of the content is already AI‑generated … risk of a vicious cycle that hampers human critical thinking
All three point to the danger that generative AI can flood the information ecosystem with false or biased content, eroding trust, mental health and democratic processes [144-149][131-138][165-170].
POLICY CONTEXT (KNOWLEDGE BASE)
Security analyses identify generative AI as a vector for influence operations and misinformation campaigns [S41]; policy discussions on AI’s impact on knowledge ecosystems flag disinformation as a primary societal challenge [S42]; and governance frameworks call for measures to curb AI-driven misinformation and protect vulnerable groups [S43].
Similar Viewpoints
Both the moderator and the panelist call for the introduction of early‑stage governance tools – like output labeling and regulatory safeguards – to steer AI development before AGI arrives [172][173-176].
Speakers: Mr. Vinayak Godse, Mr. Simonas Cerniauskas
Early “anchor controls” such as labeling, technical safeguards, and regulatory frameworks are needed to guide AI development
Unexpected Consensus
AI can be leveraged to improve human decision‑making and reduce bias
Speakers: Ms. Alexandra Bech Gjørv, Mr. Simonas Satunas
AI‑driven misinformation, cognitive warfare, and creation of divergent information universes threaten societal cohesion
Small nations should pursue global regulation and collaborate with industry to embed ethics, equality, and bias mitigation
While Alexandra focuses on hardware and privacy, she also shares an anecdote showing AI reducing human bias in sports officiating; Simonas Satunas argues that collaboration with industry can embed ethics and curb bias. The convergence on AI as a tool for bias reduction is not obvious given their differing primary concerns [99-101][174-180].
POLICY CONTEXT (KNOWLEDGE BASE)
Studies on gender-inclusive AI demonstrate that algorithmic systems can mitigate unconscious human bias and promote fairer outcomes [S39]; governance literature also notes AI’s potential to support unbiased decision-making when coupled with proper oversight [S43].
Overall Assessment

The panel shows strong convergence on four pillars: (1) education and critical‑thinking as a defence against AI misuse; (2) comprehensive risk‑management frameworks including resilience, rollback and procedural safeguards; (3) recognition of privacy as a limiting factor for data‑intensive AGI; (4) acknowledgement that compute is essential but must be balanced with data, energy and human expertise. There is also broad agreement that AI‑generated misinformation threatens societal cohesion.

High consensus on governance, risk management and capacity‑building measures, moderate consensus on technical pathways (compute, hardware). This suggests that future policy discussions can build on a shared foundation of education, risk controls and privacy safeguards while still debating timelines and specific technical solutions.

Differences
Different Viewpoints
Timeline for achieving AGI
Speakers: Mr. Simonas Satunas, Ms. Alexandra Bech Gjørv, Mr. Vinayak Godse
Timeline estimate: AGI could emerge within 3–7 years
“I’m not necessarily subscribing to the time frame. I think that depends on how much money we throw at it.”
Uncertainty and need for societal preparedness
Satunas predicts a concrete 3-7-year horizon for AGI based on current public trust in generative AI [21]. Alexandra rejects a fixed timeline, arguing that progress depends on funding and other factors [23-25]. Godse stresses that the exact arrival date is unknown and urges societies to prepare now to avoid missing governance opportunities [1-7].
POLICY CONTEXT (KNOWLEDGE BASE)
Recent Davos discussions suggest a five-year horizon for AGI rather than the shorter timelines some predict [S44]; a comprehensive report on AGI futures also documents divergent views on expected timelines and their societal implications [S45].
What factor is the primary driver for reaching AGI – compute accuracy, specialised hardware, or a broader mix of resources
Speakers: Mr. Kenny Kesar, Mr. Simonas Satunas, Ms. Alexandra Bech Gjørv
Accuracy progression (from 90 % to the five-nines level) drives compute growth and moves toward AGI
Compute is one element among many (data, energy, human skills) in the AGI supply chain
Need for massive, low‑latency, energy‑efficient hardware (neuromorphic, edge) to achieve human‑like situational awareness
Kesar links higher accuracy (five-nines) directly to increased compute and treats this as the main path toward AGI [44-48]. Satunas argues that compute is only one piece of a larger puzzle that also includes data, energy and critical-thinking skills [72-90]. Alexandra focuses on the necessity of specialised low-latency, energy-efficient hardware to replicate human reflexes and situational awareness [26-33]. The three speakers therefore disagree on which element should be prioritised.
POLICY CONTEXT (KNOWLEDGE BASE)
Responsible-governance panels highlight that compute alone is insufficient and that energy, hardware efficiency, data and human factors jointly drive AGI progress [S32]; India’s roadmap similarly stresses a balanced resource mix for goal-directed research [S30].
Preferred early‑stage governance mechanisms (anchor controls, resilience/rollback, AI‑operating‑procedures, education/global regulation)
Speakers: Mr. Simonas Cerniauskas, Ms. Alexandra Bech Gjørv, Mr. Kenny Kesar, Mr. Simonas Satunas
Early “anchor controls” such as labeling, technical safeguards, and regulatory frameworks are needed to guide AI development
Emphasis on robust rollback mechanisms and system resilience to mitigate failures
Development of AI Operating Procedures (AOP) analogous to SOPs, including bias audits and ethical training, will become standard practice
Education, awareness, and critical‑thinking skills are essential to recognise and counter AI‑induced threats; small nations should pursue global regulation and collaborate with industry
Cerniauskas proposes technical anchor controls like labeling and early regulation as the first line of defence [173-176]. Alexandra stresses building system resilience through rollback plans and risk-matrix thinking [187-189]. Kenny envisions institutionalised AI Operating Procedures (AOP) that embed bias checks and ethical training as the core governance tool [191-197]. Satunas highlights education, public awareness and the need for global regulatory cooperation, especially for small states, as the key response [174-180]. These differing prescriptions reveal a lack of consensus on the most effective early-stage control strategy.
POLICY CONTEXT (KNOWLEDGE BASE)
Toolkits for AI risk mitigation propose anchor controls, operational procedures and global regulatory coordination as early-stage safeguards [S46]; governance reports also prioritize resilience and rollback mechanisms alongside education initiatives [S33]; EU digital diplomacy notes the shift toward procedural and ethical controls in AI strategy [S35].
Unexpected Differences
Compute as the central lever versus a broader resource mix
Speakers: Mr. Kenny Kesar, Mr. Simonas Satunas
Accuracy progression (from 90 % to five‑nines levels) drives compute growth and moves toward AGI. Compute is one element among many (data, energy, human skills) in the AGI supply chain.
Kesar treats compute (and the associated accuracy gains) as the primary engine propelling AI toward AGI, whereas Satunas explicitly downplays compute’s primacy, insisting that data, energy and especially human critical-thinking are equally indispensable. This divergence is surprising given their shared technical background [44-48][72-90].
POLICY CONTEXT (KNOWLEDGE BASE)
Debates in AI governance circles stress that focusing solely on compute overlooks critical dependencies on data, energy and talent, advocating for a broader resource portfolio [S32]; policy analyses from India echo this balanced perspective [S30].
Hardware‑centric resilience versus procedural/ethical controls
Speakers: Ms. Alexandra Bech Gjørv, Mr. Kenny Kesar
Emphasis on robust rollback mechanisms and system resilience to mitigate failures. Development of AI Operating Procedures (AOP) analogous to SOPs, including bias audits and ethical training.
Alexandra focuses on physical and systemic resilience (rollback, risk‑matrix) as the main safeguard, while Kenny proposes a procedural, standards‑based approach (AOP) centred on bias and ethics audits. The contrast between hardware‑focused risk mitigation and process‑focused governance was not anticipated.
POLICY CONTEXT (KNOWLEDGE BASE)
Resilience-focused policy briefs emphasize hardware robustness as a pillar of AI safety [S33]; however, practical governance toolkits and EU strategies highlight procedural and ethical safeguards as complementary or alternative approaches [S46][S35].
Overall Assessment

The panel shows substantial divergence on three core fronts: (1) the expected timeline for AGI, with one speaker offering a short‑term estimate and others rejecting a fixed horizon; (2) the relative importance of compute versus hardware versus a holistic resource mix; (3) the optimal early‑stage governance toolkit, ranging from technical anchor controls to resilience planning, procedural AOPs, and education‑driven regulation. While there is consensus on the need for education, critical thinking and multi‑layered risk awareness, the lack of alignment on strategic priorities could hinder coordinated policy responses and investment decisions.

High – the disagreements touch on fundamental strategic choices (timing, resource allocation, governance architecture) that shape national and international AI policy. Without a shared roadmap, stakeholders may pursue conflicting initiatives, leading to fragmented regulation, duplicated investments, and potential gaps in security and ethical safeguards.

Partial Agreements
All three speakers agree that building AI literacy and critical‑thinking capacity is crucial to mitigate AI‑driven risks, even though they frame it differently (Satunas focuses on public education, Kenny on preventing a feedback loop of AI‑generated content, Cerniauskas on broader awareness) [154-155][164-170].
Speakers: Mr. Simonas Satunas, Mr. Kenny Kesar, Mr. Simonas Cerniauskas
Education, awareness, and critical‑thinking skills are essential to recognise and counter AI‑induced threats. Critical thinking is also necessary for continued innovation.
Both acknowledge that AI introduces new security threats beyond traditional privacy and cyber‑fraud concerns, though Satunas frames them within a layered risk taxonomy while Kenny highlights specific attack vectors such as CEO impersonation [131-138][105-108].
Speakers: Mr. Simonas Satunas, Mr. Kenny Kesar
Classical risks (privacy, cyber‑fraud) plus higher‑level risks to mental health, social cohesion, and democracy must be addressed. AI can generate sophisticated attacks and impersonate humans, raising new security threats.
Takeaways
Key takeaways
AGI is broadly defined as AI that can reason, learn, adapt, transfer knowledge and operate beyond narrow tasks; a functional view describes it as performing any human professional task at comparable accuracy. Panelists estimate AGI could appear within roughly 3–7 years, though there is considerable uncertainty and a need for societal preparedness. Achieving AGI will require massive, low‑latency, energy‑efficient compute hardware (neuromorphic, edge) together with data, energy, and human expertise; compute is only one element of a larger supply chain. Current investment in AI compute is huge and may be over‑invested, raising concerns about a potential bubble. Security and privacy risks will intensify: AI can generate sophisticated attacks, impersonate humans, and exploit personal data needed for true situational awareness. Beyond classical risks, AI poses higher‑level threats to mental health, social cohesion, and democratic processes through misinformation and cognitive warfare. Public trust in generative AI is high, which can erode critical‑thinking skills; education, awareness, and critical‑thinking training are essential safeguards. Governance will need early “anchor controls” such as labeling, technical safeguards, and regulatory frameworks; AI Operating Procedures (AOP) analogous to SOPs are envisioned. Collaboration across nations, industry, and academia is crucial to embed ethics, equality, and bias mitigation, especially for smaller countries lacking global regulatory influence. Resilience measures—robust rollback mechanisms, risk‑matrix planning, and tiered model deployment (small vs large models)—are recommended to limit the impact of failures.
Resolutions and action items
Develop and adopt early anchor controls (e.g., model labeling, technical safeguards) as part of AI governance. Invest in education and critical‑thinking programs to prepare the public for AI‑driven information environments. Encourage collaboration between governments, industry, and academia to shape global regulation and embed ethical principles in AI development. Create AI Operating Procedures (AOP) for bias audits, ethical training, and continuous monitoring of AI systems. Implement resilience strategies such as rollback mechanisms and tiered deployment of small and large language models to manage compute costs and risk.
Unresolved issues
Exact timeline for AGI emergence remains uncertain; no consensus on when it will be realized. How to balance massive compute investment with efficiency and sustainability without creating a bubble. Specific technical pathways to achieve human‑level situational awareness (e.g., multimodal embodied learning) are still open questions. Concrete regulatory frameworks and international agreements for AGI governance have not been defined. Methods to protect privacy while providing the data needed for advanced AI reasoning are not yet resolved. Strategies to prevent erosion of critical thinking and mitigate cognitive warfare lack detailed implementation plans.
Suggested compromises
Adopt a tiered model approach: use small, task‑specific language models for low‑risk functions while reserving large models for high‑value, high‑risk applications. Combine probabilistic AI methods with deterministic controls to improve reliability and move toward AGI without sacrificing safety. Balance heavy compute investment with research into more efficient algorithms and hardware to avoid over‑investment. Blend regulatory oversight with industry self‑governance (e.g., AOPs, bias audits) to create flexible yet accountable AI ecosystems.
Thought Provoking Comments
AGI will be something that can perform every human task at the level of accuracy and professionalism of a human professional. 50 % of Israelis trust generative AI tools more than they trust their friends, which brings us closer to AGI.
Provides a concrete, human‑centric definition of AGI and backs it with a sociological metric (trust) that signals a shift in public perception, turning the debate from abstract timelines to observable behavior.
Shifted the conversation from speculative timelines to measurable societal adoption. Prompted other panelists to discuss trust, adoption curves, and the gap between current AI capabilities and true AGI.
Speaker: Simonas Satunas
Machines can already make millisecond‑level decisions (e.g., fire detection), but interpreting context, emotions, ambiguity, and body language remains far away. Achieving human‑like situational awareness will require low‑latency, energy‑efficient neuromorphic and edge hardware, plus massive private data – which raises privacy limits.
Links technical hardware challenges directly to the core AI limitation of contextual understanding, while foregrounding privacy as a fundamental barrier, thus expanding the discussion beyond pure algorithmic progress.
Introduced a new dimension—hardware and privacy constraints—causing the panel to explore compute needs, data governance, and the trade‑off between performance and personal data protection.
Speaker: Alexandra Bech Gjørv
The epitome of accuracy is five‑nines. Moving from 90 % to 99 % accuracy took 5‑10 years; each additional nine adds another 1‑2 years. True AGI will require AI that can not only learn from data but also invent new ideas, similar to the human brain.
Offers a quantitative framework (five‑nines) to gauge progress and reframes AGI as a transition from regression‑based learning to genuine invention, adding a measurable benchmark to an otherwise vague concept.
Guided the discussion toward concrete performance targets and the notion of AI as a creative agent, influencing later remarks about the need for deterministic models and the timeline for achieving AGI.
Speaker: Kenny Kesar
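Kesar’s benchmark can be turned into back-of-the-envelope arithmetic. The sketch below is an illustration only, built on the figures he cites (5–10 years for the jump from 90 % to 99 %, then 1–2 years per additional nine); the function name and the way the ranges are combined are assumptions, not something stated in the session.

```python
# Back-of-the-envelope timeline for reaching "five-nines" (99.999%)
# accuracy, using the ranges Kesar cites: 5-10 years to move from 90%
# to 99%, then 1-2 years for each additional nine. Illustrative only.

def years_to_five_nines(first_jump=(5, 10), per_extra_nine=(1, 2)):
    """Return (min_years, max_years) to go from one nine (90%) to five nines."""
    extra_nines = 3  # 99% -> 99.9% -> 99.99% -> 99.999%
    lo = first_jump[0] + extra_nines * per_extra_nine[0]
    hi = first_jump[1] + extra_nines * per_extra_nine[1]
    return lo, hi

lo, hi = years_to_five_nines()
print(f"Estimated path from 90% to 99.999% accuracy: {lo}-{hi} years")
# -> Estimated path from 90% to 99.999% accuracy: 8-16 years
```

Under these assumptions the full progression spans roughly 8–16 years, which helps explain why the panel treats five-nines as a long-horizon benchmark rather than an imminent milestone.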
Compute is just one element in a chain; we also need energy, cooling, data, implementation, language, and especially the human element—critical thinking education—so people can recognise AI‑generated manipulation.
Broadens the focus from a compute‑centric race to a holistic ecosystem, emphasizing education and human cognition as equally vital for AGI readiness.
Redirected the panel from a purely technical race to a societal preparedness narrative, prompting others (e.g., Simonas Cerniauskas) to stress education and awareness as part of risk mitigation.
Speaker: Simonas Satunas
We can categorize AI risks into four levels: (1) classic privacy, security, fraud; (2) human mental health; (3) social impacts such as empathy erosion and bullying; (4) macro‑level effects on democracy and societal manipulation. Each level needs its own mitigation and international collaboration.
Provides a clear, layered risk taxonomy that moves the conversation from generic ‘risk’ talk to a structured, actionable framework.
Served as a turning point that organized subsequent dialogue around specific domains (security, mental health, societal manipulation), leading to concrete suggestions on regulation and collaboration.
Speaker: Simonas Satunas
When video surveillance was introduced in basketball, coaches’ racist decisions vanished because the data made bias visible. This shows machines can make people better, not just introduce new biases.
Counters the common narrative that AI inevitably amplifies bias, offering a concrete example where technology corrected human prejudice, thereby enriching the ethical debate.
Encouraged a more nuanced view of AI ethics, influencing later comments about the role of oversight and the potential for AI to improve human decision‑making.
Speaker: Alexandra Bech Gjørv
AI will create a tiered ecosystem: tiny, efficient models for simple tasks and massive models for complex challenges like world hunger. Right‑sizing models will make AI commercially viable and curb the current cost‑to‑ROI imbalance.
Introduces a pragmatic solution to the scalability and cost problem, framing the future AI market as a spectrum rather than a monolithic race for the biggest model.
Shifted the discussion from a “bigger is better” mindset to strategic deployment, prompting considerations of sustainability, compute allocation, and business models.
Speaker: Kenny Kesar
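Kesar’s tiered-ecosystem idea can be sketched as a simple router that sends low-complexity requests to a small, cheap model and escalates the rest to a large one. Everything here is a hypothetical illustration (the complexity heuristic, the threshold, and the tier labels are invented for this sketch, not an API discussed in the session).

```python
# Hypothetical sketch of tiered model deployment: route cheap, simple
# tasks to a small model and reserve the large model for complex ones.
# The complexity heuristic, threshold, and tier labels are illustrative.

def score_complexity(prompt: str) -> float:
    """Crude proxy for task complexity: longer, question-dense prompts score higher."""
    return min(1.0, len(prompt) / 500 + prompt.count("?") * 0.1)

def route(prompt: str, threshold: float = 0.5) -> str:
    """Pick a model tier for the prompt; returns the chosen tier's label."""
    return "large-model" if score_complexity(prompt) > threshold else "small-model"

print(route("Translate 'hello' to French."))  # simple task -> small-model
print(route("Design a multi-year compute and governance plan?" * 20))  # -> large-model
```

The design point is the one the panel raises: right-sizing which tier handles which request is what curbs the cost-to-ROI imbalance, whatever real routing signal (latency budget, task type, risk class) replaces this toy heuristic.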
The current hype may be a bubble; we risk over‑investing in compute that could become overcapacity. Even Zuckerberg admits we might have excess compute for years.
Provides a critical market perspective that questions the sustainability of the current investment frenzy, adding a cautionary note to the optimism.
Tempered the enthusiasm of earlier speakers, leading to a balanced conversation about responsible investment and the need for efficiency improvements.
Speaker: Simonas Cerniauskas
30 % of content online is already AI‑generated, feeding the same models and risking a feedback loop that could stall human intellectual growth. We must preserve critical thinking to avoid a vicious cycle where AI erodes the very intelligence it seeks to emulate.
Highlights a paradox where AI’s own output may diminish the human capacity that fuels future AI development, raising a profound ethical and societal concern.
Deepened the dialogue on long‑term societal effects, prompting further remarks on education, awareness, and the necessity of maintaining human cognitive skills.
Speaker: Kenny Kesar
Overall Assessment

The discussion evolved from a broad framing of AGI’s emergence to a multi‑layered analysis of technical, societal, and economic dimensions. Key comments—especially those that introduced concrete definitions, quantitative benchmarks, risk taxonomies, and real‑world examples—served as turning points that redirected the conversation toward actionable insights. By juxtaposing optimism about rapid progress with cautionary notes on over‑investment, privacy, and human cognition, the panel collectively moved from speculative timelines to a nuanced roadmap that balances compute, hardware, regulation, education, and ethical safeguards. These pivotal remarks shaped the dialogue into a structured, forward‑looking discourse on how to responsibly navigate the path toward AGI.

Follow-up Questions
What is the specific role of compute in achieving AGI, and why is such massive investment in compute resources justified?
Understanding compute’s importance helps allocate resources efficiently and assess whether current spending is sustainable or a bubble.
Speaker: Mr. Vinayak Godse
How can we achieve contextual, low‑latency, and reasoning‑capable AI—specifically regarding language models, ambient computing, and world‑model architectures?
Addressing these technical challenges is crucial for building AI that can operate safely in dynamic, real‑time environments.
Speaker: Mr. Vinayak Godse
What security and privacy measures should be adopted now to prepare for increasingly powerful AI models?
Proactive safeguards are needed to prevent misuse of AI as capabilities grow, especially in the context of AGI‑level threats.
Speaker: Mr. Vinayak Godse
What could serve as an ‘anchor control’—early governance mechanisms or concepts—to steer AGI development responsibly?
Establishing foundational controls early can shape the trajectory of AGI and mitigate future risks.
Speaker: Mr. Vinayak Godse
How will growing dependence on AI affect human critical thinking, and what forms of cognitive warfare might emerge?
If AI erodes critical thinking, societies become vulnerable to misinformation and manipulation at scale.
Speaker: Mr. Vinayak Godse
How can we ensure that reliance on AI does not diminish human intelligence and critical thinking abilities?
Maintaining human cognitive skills is essential for innovation and for preventing a feedback loop where AI trains on AI‑generated content.
Speaker: Mr. Vinayak Godse
What global regulatory frameworks and collaborative approaches are needed to embed ethics, bias mitigation, and moral behavior into AI systems?
AI impacts cross‑border societies; coordinated regulation can address bias, misinformation, and unethical deployments.
Speaker: Mr. Simonas Satunas
What research directions (e.g., hierarchical reflex reasoning, embodied multimodal learning, neuromorphic and edge computing) show the most promise for reaching AGI?
Identifying promising technical pathways guides funding and research priorities toward viable AGI architectures.
Speaker: Ms. Alexandra Bech Gjørv
How can education systems be strengthened to improve public critical‑thinking skills and resilience against AI‑driven manipulation?
An informed populace is a key defense against deception, bias, and loss of agency in an AI‑rich world.
Speaker: Mr. Simonas Satunas
What resilience and rollback mechanisms should be designed to mitigate the impact of AI failures or malicious use?
Preparing for worst‑case scenarios reduces societal disruption and ensures continuity when AI systems malfunction.
Speaker: Ms. Alexandra Bech Gjørv
How should organizations develop AI Operating Procedures (AOP) analogous to traditional SOPs to ensure ethical, unbiased AI deployment?
Standardized operational guidelines can embed ethical checks into AI lifecycles, promoting responsible use.
Speaker: Mr. Kenny Kesar
What are the macro‑level societal impacts of AGI (e.g., on democracy, misinformation, agent‑swarm manipulation), and how can they be studied and mitigated?
Understanding large‑scale effects is vital for national security and for preserving democratic institutions.
Speaker: Ms. Alexandra Bech Gjørv
What are the energy‑efficiency and hardware constraints (e.g., low‑latency, neuromorphic chips) that must be overcome to realize AGI?
Hardware limitations directly affect feasibility, cost, and environmental impact of scaling AI.
Speaker: Ms. Alexandra Bech Gjørv

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.