Artificial General Intelligence and the Future of Responsible Governance

20 Feb 2026 11:00h - 12:00h


Session at a glance: Summary, keypoints, and speakers overview

Summary

The panel opened by noting that rapid advances in AI since 2020, especially the surge of powerful models in 2023-24, have sparked renewed debate about the emergence of artificial general intelligence (AGI) and the risk of missing the opportunity to shape it responsibly [1-4][6]. Participants agreed that while many definitions exist, AGI is generally understood as an AI that can reason, learn, adapt, transfer knowledge and operate beyond narrow, task-specific domains [12-18]. Simonas Satunas offered a concrete, albeit simplified, definition: an AGI would perform any human professional task with comparable accuracy, and he estimated a 3-to-7-year horizon for reaching that milestone based on growing public trust in generative AI tools [21-24].


The discussion highlighted that massive compute investments are driving the current AI boom, but compute is only one element of a broader ecosystem that also requires energy-efficient hardware, data, and especially human capacities such as critical thinking [70-71][72-85][86-90]. Alexandra emphasized that achieving human-like situational awareness will demand access to large amounts of private data, raising privacy concerns and underscoring the need for robust regulatory frameworks [35-37][38-41]. Kenny warned that as AI becomes more capable, it can both generate sophisticated attacks and mimic human decision-making, making security threats more realistic and amplifying the importance of educating defenders [105-108][110-112].


Simonas Satunas outlined a four-tier risk hierarchy, ranging from traditional privacy and cyber-fraud risks through mental-health impacts and the erosion of social empathy to macro-level threats to democracy, and called for coordinated national and international strategies to mitigate these costs [131-138]. The panel concurred that critical thinking and public awareness are essential safeguards, with education needed to help people identify AI-generated misinformation and understand underlying threats [154-155][164-170]. Regarding governance, participants suggested technical measures such as model labeling and broader regulatory actions, noting Europe’s tendency toward over-regulation but recognizing the potential of reasonable standards [173-176].


Alexandra proposed building resilience through rollback mechanisms and contingency planning, likening the approach to preparing for electricity outages to reduce the impact of AI failures [187-190]. Kenny introduced the concept of an “AI Operating Procedure” (AOP) analogous to existing SOPs, which would institutionalize bias reviews, ethical training, and continuous validation of model outputs [191-199]. The discussion concluded that immediate actions should include investing in education, establishing robust risk-mitigation frameworks, and developing early-stage “anchor controls” to guide the safe evolution toward AGI [202-207][173-176].


Overall, the panel stressed that while AGI may be imminent, its safe deployment depends on balanced compute investment, privacy-respecting data practices, security preparedness, and proactive governance structures [70-71][105-108][173-176].


Keypoints


Major discussion points


Defining AGI and estimating its arrival – The panel agreed that AGI means an AI that can reason, learn, adapt, transfer knowledge and operate beyond narrow tasks [12-18]. Simonas Satunas offered a concrete, if simplistic, definition – an AI that can perform any human task with professional-level accuracy – and projected a 3- to 7-year horizon for reaching that milestone [21]. Vinayak opened the session by noting the rapid acceleration of AI since 2020 and the growing debate around AGI’s feasibility [1-2].


Compute, data, and the human factor as essential ingredients – While compute power is often highlighted, Simonas Satunas emphasized that it is only one link in a chain that also includes energy, data, implementation, language and, critically, human education and critical-thinking skills [72-89]. Alexandra added that achieving human-like situational awareness will require low-latency, energy-efficient hardware and massive amounts of private data, which runs up against privacy limits [31-36]. Vinayak later asked why massive compute investments are needed for attention, context, reasoning and low latency [65-69].


Security, privacy and ethical risks of increasingly powerful AI – Kenny Kesar warned that as AI accuracy improves (moving from 90% toward “five nines”), the technology will become capable of both sophisticated attacks and autonomous decision-making, creating new security threats [41-48][105-108]. Simonas Satunas outlined four risk layers, from classic privacy and cyber-fraud risks through mental-health and social-cohesion impacts to macro-level threats to democracy, calling for coordinated national and international strategies [131-138]. Alexandra highlighted the need for human oversight, pointing out how algorithmic bias can be exposed and corrected (e.g., the NBA video-surveillance example) [96-102].


Governance, “anchor-control” mechanisms and early-stage regulation – The moderator asked for concrete control concepts to guide the transition to AGI [172-176]. Simonas Cerniauskas suggested technical safeguards such as model labeling and broader regulatory measures, noting Europe’s tendency to over-regulate but also its potential for viable standards [173]. Simonas Satunas argued that small nations must collaborate globally to embed moral and egalitarian values into AI development, citing the Myanmar-Meta case as an illustration of ethical failure [174-180]. Alexandra proposed building resilience and rollback mechanisms to mitigate the impact of failures, emphasizing a risk-reduction mindset [187-190].


Impact on cognition, critical thinking and societal dependence on AI – Several speakers expressed concern that pervasive AI use could erode individuals’ critical-thinking abilities, creating a feedback loop where AI-generated content dominates training data and stifles human intellectual growth [150-170]. The panel stressed the need for widespread education and awareness to help people identify manipulation, disinformation, and “cognitive warfare” [154-155].


Overall purpose / goal of the discussion


The session was convened to clarify what “Artificial General Intelligence” (AGI) actually means, to assess how close we are to achieving it, and to explore the security, privacy, ethical, and governance challenges that AGI will introduce. Participants aimed to identify early-stage “anchor controls” and practical steps (technical, regulatory, and educational) that societies can adopt now to prepare for the transformative impact of AGI.


Overall tone and its evolution


– The conversation began with an optimistic, exploratory tone, highlighting rapid AI progress and the excitement of defining AGI [1-2].


– It then shifted to a cautious, risk-focused tone, as speakers detailed technical limitations, compute demands, and the widening gap between current narrow AI and true general intelligence [21][31-36].


– Mid-discussion the tone became protective and solution-oriented, emphasizing security threats, ethical pitfalls, and the need for robust governance and resilience [41-48][131-138][187-190].


– Toward the end, the tone turned reflective and advisory, urging education, critical-thinking cultivation, and coordinated global action to mitigate societal dependence on AI [150-170][154-155].


Overall, the panel moved from enthusiasm about AI’s potential to a sober assessment of the safeguards required before AGI can be responsibly deployed.


Speakers


Mr. Vinayak Godse – Moderator/host of the panel discussion on AGI; leads the conversation and poses questions to panelists. [S2]


Mr. Simonas Satunas – Panelist; provides a simplified definition of AGI and discusses timelines and societal impact. [S1]


Simonas Cerniauskas – Panelist; contributes perspectives on AGI definitions, investment cycles, and the broader AI ecosystem. [S5]


Mr. Kenny Kesar – Panelist; consultant advising AI clients, focuses on accuracy, compute, market disruption, and ethical/operational procedures for AI. [S6]


Ms. Alexandra Bech Gjørv – Panelist; head of SINTEF, Norway’s largest research institute; discusses hardware, neuromorphic computing, privacy, governance, and societal implications of AGI. [S7]


Additional speakers:


None. All speakers appearing in the transcript are accounted for in the list above.


Full session report: Comprehensive analysis and detailed insights

The panel opened with Vinayak Godse framing the rapid expansion of artificial-intelligence research since 2020 and the surge of powerful models launched from early 2023 as a catalyst for renewed debate over artificial general intelligence (AGI) and the risk of missing the chance to shape it responsibly [1-4][6]. He warned that societies that do not begin to understand what AGI could mean for the next three to ten years will fall behind in governance and policy [5-7].


A broad consensus emerged that AGI must transcend today’s narrow, task-specific systems. Speakers agreed that a true AGI should be able to reason, learn, adapt, transfer knowledge and operate across domains rather than being confined to a single function [12-18]. Simonas Satunas offered a concrete, if simplistic, definition: an AGI would perform any professional human task with comparable accuracy and professionalism [21-23]. Citing a poll in which roughly 50 % of Israelis said they trust generative-AI tools more than friends, he projected a three-to-seven-year horizon for reaching that milestone [24-25][21].


Technical foundations and compute – Kenny noted that moving model accuracy from the current 90 % toward “five-nines” (99.999 %) historically required five to ten years for the first extra nine, and each subsequent nine adds roughly one to two years [41-48]. Cerniauskas warned that the industry may be heading toward an “over-capacity” situation for a couple of years, quoting Zuckerberg’s comment about excess compute resources [80-82]. Satunas used a 19th-century transport-infrastructure metaphor to argue that compute is only one link in a chain that also includes energy-efficient hardware, vast data, implementation expertise, language resources and, crucially, human critical-thinking capacity [72-90]. Alexandra added that achieving human-like situational awareness will require low-latency, neuromorphic or edge-computing architectures and access to large amounts of private data, which in turn raises serious privacy constraints [31-36][35-37].
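To make that accuracy arithmetic concrete, the following back-of-the-envelope sketch projects the panel's stated ranges: five to ten years for the first jump from 90% to 99%, then one to two years per additional nine. The figures are the panel's rough estimates, not measurements, and the code is purely illustrative.

```python
# Back-of-the-envelope projection of the "five nines" accuracy timeline
# described on the panel: 5-10 years from 90% to 99%, then roughly
# 1-2 years for each additional nine. Figures are illustrative only.

targets = ["99%", "99.9%", "99.99%", "99.999%"]  # one to four extra nines

def cumulative_years(first_jump, per_nine):
    """Cumulative years to reach each target after the 90% baseline."""
    years = []
    total = first_jump
    for i, label in enumerate(targets):
        if i > 0:
            total += per_nine
        years.append((label, total))
    return years

for label, low in cumulative_years(5, 1):
    high = dict(cumulative_years(10, 2))[label]
    print(f"{label}: ~{low}-{high} years after today's ~90% baseline")
```

Under these assumptions, "five nines" (99.999%) lands roughly 8 to 16 years out, which helps explain why the panel treated accuracy as a long-term work in process rather than an imminent milestone.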


System 1 / System 2 thinking and latency – Vinayak highlighted the distinction between intuitive “System 1” and logical “System 2” thinking, noting that the latency of purely language-based models limits System 2 performance. He observed that heavy compute investment aims to shrink that latency, while perception, ambient sensing and emotion remain limitations that expose the boundaries of language-based models [65-69].
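As a minimal, hypothetical illustration of this latency trade-off (the function names and the 0.5-second reasoning cost below are assumptions for the sketch, not anything the panel specified), a router might send a query down a fast intuitive path unless the task genuinely needs deliberation and the caller can afford the wait:

```python
import time

# Minimal illustration of the System 1 / System 2 latency trade-off the
# panel described: route a query to a fast, shallow path when it fits a
# latency budget, fall back to a slower deliberate path otherwise.
# Both model functions are hypothetical stand-ins, not a real API.

def fast_intuitive_answer(query: str) -> str:
    return f"pattern-matched reply to: {query}"       # "System 1" stub

def slow_deliberate_answer(query: str) -> str:
    time.sleep(0.5)                                    # simulate reasoning cost
    return f"step-by-step reply to: {query}"           # "System 2" stub

def route(query: str, needs_reasoning: bool, latency_budget_s: float) -> str:
    # Deliberate reasoning is only worth its latency when the task needs it
    # and the caller can afford to wait; otherwise stay on the fast path.
    if needs_reasoning and latency_budget_s >= 0.5:
        return slow_deliberate_answer(query)
    return fast_intuitive_answer(query)

print(route("2 + 2?", needs_reasoning=False, latency_budget_s=0.1))
print(route("plan a supply route", needs_reasoning=True, latency_budget_s=2.0))
```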


Security, privacy and risk taxonomy – Kenny emphasized that more accurate models will be able to launch sophisticated cyber-attacks and could emulate a CEO to make decisions, making AI-driven deception a concrete threat [105-108]. Satunas presented a four-level risk taxonomy: (1) classical privacy, security and fraud risks; (2) human health and mental-health impacts; (3) social effects such as erosion of empathy, bullying and addiction; and (4) macro-level threats to democracy and foreign manipulation [131-138]. He stressed that mitigation at each level will require costly, coordinated national and international strategies [131-138]. Alexandra reiterated that privacy limits on personal data impede the development of the deep situational awareness required for AGI, underscoring the tension between data needs and privacy protection [35-37].


Ethics, governance and “anchor-control” proposals


* Technical labeling and European-style regulation were advocated by Cerniauskas as an immediate lever [173-176].


* Satunas called for a global, multi-stakeholder regulatory framework that embeds egalitarian values into AI design, citing the Meta algorithm that amplified violent content in Myanmar as a cautionary example [174-180].


* Alexandra proposed resilience and rollback mechanisms, analogous to planning for electricity outages, to limit the impact of AI failures [187-190].


* Kenny introduced AI Operating Procedures (AOP), formal SOP-like processes that embed bias reviews, ethical training and continuous validation into organisational practice [191-199]; a sketch of the idea follows this list.
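One way such an AOP might be institutionalised in code, offered here purely as a hypothetical sketch (the control names, thresholds and release fields are illustrative assumptions, not a standard the panel endorsed), is a runnable checklist applied to every model release:

```python
from dataclasses import dataclass, field
from typing import Callable

# Hypothetical sketch of an "AI Operating Procedure" (AOP) as a runnable
# checklist: each control is a named check applied to a model release.
# Control names and checks are illustrative assumptions, not a standard.

@dataclass
class AOP:
    name: str
    controls: list = field(default_factory=list)  # (label, check) pairs

    def add_control(self, label: str, check: Callable[[dict], bool]):
        self.controls.append((label, check))

    def review(self, release: dict) -> bool:
        passed = True
        for label, check in self.controls:
            ok = check(release)
            print(f"[{'PASS' if ok else 'FAIL'}] {label}")
            passed = passed and ok
        return passed

aop = AOP("customer-service model v2")
aop.add_control("bias review completed", lambda r: r.get("bias_reviewed", False))
aop.add_control("ethics training logged", lambda r: r.get("ethics_training", False))
aop.add_control("output validation in place", lambda r: r.get("validation_rate", 0) >= 0.95)

release = {"bias_reviewed": True, "ethics_training": True, "validation_rate": 0.97}
print("release approved:", aop.review(release))
```

The point of the pattern is that bias review and output validation become blocking release gates rather than optional after-the-fact audits.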


Critical-thinking concerns – Vinayak warned that AI’s ability to provide rapid, multi-dimensional attention may erode human critical thinking, which he defined as “the ability to give attention to various dimensions” [156-163]. Kenny quantified the problem, noting that roughly 30 % of online content is already AI-generated, creating a feedback loop that could stall the evolution of human intellect if people stop exercising their “brain muscles” [164-170]. Satunas echoed this, urging investment in education that cultivates critical-thinking skills to prepare society for AGI [87-90][154-155].
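A toy simulation makes the feedback-loop concern tangible. The 30% starting point comes from the discussion; the assumption that the corpus doubles each model generation, with models producing 70% of the new material, is purely illustrative:

```python
# Toy model of the feedback loop Kenny described: roughly 30% of online
# content is AI-generated today, and models train on whatever mix exists.
# Assume each "generation" the corpus doubles, with models producing a
# fixed share of the new material (the 0.70 below is purely illustrative).

ai_share = 0.30            # panel's figure for today's corpus
model_output_share = 0.70  # assumed fraction of NEW content that is AI-made

for gen in range(1, 6):
    # New corpus = old corpus + an equal volume of new content.
    ai_share = (ai_share + model_output_share) / 2
    print(f"generation {gen}: AI-generated share of corpus = {ai_share:.2f}")

# The share converges toward the model-production rate: once models write
# most new content, future models mostly learn from earlier model output.
```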


Commercial viability – Kenny observed that AI is not commercially viable today because the costs outweigh the ROI [200].


Closing remarks and concrete outcome – After summarising the discussion, Vinayak thanked the participants, announced the launch of the “AI Cyber Security Terminal”, and noted the upcoming photo-shoot [210].


In conclusion, the panel agreed that AGI is likely to arrive within a near-term horizon, but its safe realisation depends on balanced investment in compute, energy-efficient hardware, high-quality data, and, crucially, human critical-thinking capacities. Immediate actions include developing AI Operating Procedures, establishing technical safeguards such as model labeling, investing in education to preserve critical thinking, pursuing tiered model strategies to manage compute costs, and creating resilience and rollback plans for AI failures. Unresolved issues remain around the exact timeline for AGI, reconciling privacy with the data needs of situational awareness, defining globally acceptable governance structures, and preventing the erosion of human cognition through AI-generated feedback loops. Addressing these challenges will require coordinated effort across industry, academia and governments, both nationally and internationally, to embed ethical, transparent and robust controls before AGI becomes a pervasive reality.


Session transcript: Complete transcript of the session
Mr. Vinayak Godse

…the Summit. And the basic idea and intent behind setting up this session is this: while all these things were happening in AI in the period up to 2020, a lot of development was happening, and somehow all of that is now leading to the kind of acceleration that we have seen in the last three years, and especially this year, since January, with all the new launches that we see, we are getting the first signs of a powerful AI, right? And because of that, the discussion about AGI seems to be gaining quite significant ground, right? Although people still have a lot of doubt and skepticism about whether it is really a reality or a possibility in the coming future, or what that means, many people are still skeptical.

They are struggling to define what that means for us as an overall society. And I can tell you about India: probably we didn’t pay much attention when AI was coming. If we don’t pay attention now to what is coming in the next two, three, five years of time, or ten years of time, which is probably the timeline for AGI, then probably we will again miss out on thinking, talking, discussing and governing it better, basically. So this discussion is to help us, and the audience here, understand what we mean by AGI, whether we can really think about that right now, and what the different concerns are that we need to consider, and to then try to find the possible meaning for security, privacy and ethics. Welcome to the panel.

So I would like to start with you, Simonas. How do you see this concept of AGI, and foundationally, how will that be different from what we see today? What is your understanding of the distinction between artificial intelligence and artificial general intelligence?

Simonas Cerniauskas

So, yeah, thank you very much for having us here. And, yeah, like you said, it’s a really nice topic to wrap up the conference. So, well, you know, of course, there are different definitions of AGI. At the same time, most of them agree that it’s about smarter AI than we have right now. We were joking a bit that, you know, on the way here the traffic was really, you know, exceptional, and, yeah, that’s a sign that maybe we are still not there today. But basically, among those common agreements: let’s say the smarter AI should reason. It should learn. It should adapt. And also it should transfer knowledge.

And also it shouldn’t be, you know, very narrow. Of course, right now we have great, let’s say, areas where AI is really helping a lot, like code development, customer service, et cetera, but, you know, it should be much broader. And, you know, I don’t think that any of us, maybe the colleagues, will be able to answer when we will have it and with what timing, but definitely, you know, that’s one of the big topics right now.

Mr. Vinayak Godse

Let me come to you. You look at the digital initiative, with artificial intelligence as one of the important research areas. We are grappling with understanding where we are right now, but can we think about what would happen in the next three to five years of time? That seems to be the timeline for this AGI era.

Mr. Simonas Satunas

So I’m the one with the date, I’ll do my best. So first of all, my definition of AGI is very simplistic, and I think that we need some simple explanation in this field. My very simple explanation is: AGI will be something that can perform every human task at the level of accuracy and professionality of a human professional. Now, this is not an optimal definition, because people can ask about every task: if a baby is crying, will the AGI help him stop crying? And people can ask what the level of professionality is. But I think that this is something that we can digest. And I think that, for me, I understood that we are getting closer there not from a technology perspective, but from the perspective of talking with real Israelis about their problems. Five years ago, when I was telling this definition of AGI, people were like, oh, it’ll never happen, not in our lifetime. And right now, when I’m speaking with Israelis and I’m telling them this is AGI, they’re saying, oh, aren’t we there yet? Because I thought that ChatGPT can help me like a lawyer, isn’t it true? Now, I think that we are not there yet, okay? There is a very sharp line between the AI that we are experiencing today and true AGI. But the fact that the audience is already confusing them, the fact that people give trust to Gen AI tools, 50% of Israelis trust them more than they trust their friends, many trust them more than they trust human professionals, this puts us closer to AGI. So I would say that it’s a matter of 3 years to 7 years until we reach that milestone.

Mr. Vinayak Godse

So, coming to you, Alexandra: how do you see this as a concept? What is leading to this AGI, and what would we do that will impact the future of AI and bring this age of AGI in three or seven years’ time?

Ms. Alexandra Bech Gjørv

Well, I’m not necessarily subscribing to the time frame. I think that depends on how much money we throw at it. And then there are other things to throw money at as well. Some of this, for example, we had a discussion with my team, you know, are machines able to make complex decisions as fast as humans? And in some areas, like, you know, many operations demand millisecond response and reflex level. You know, you can see that machines are quite good at detecting fire or doing various instinctive things as fast as we are, but the ability to interpret context, emotions, ambiguity, surroundings, body language, etc., that’s still quite far away. They take too long. And in a dynamic environment, you know, a wrong decision or a late decision is really a wrong decision.

So in order to get there, you know, there’s both low-latency, energy-efficient hardware, neuromorphic and edge computing, and architectures beyond autoregression. But, you know, the researchers at SINTEF (I head up the largest research institute in Norway) point to promising approaches like hierarchical reflex reasoning systems, embodied multimodal learning, et cetera. And there’s really no real doubt that you will get there. But in order to have situational awareness like a human, you have to study a lot of data that would be considered private, personal. So there are real limits on privacy. And then it triggers a lot of other questions that I’m sure we’ll get into.

Mr. Vinayak Godse

Yeah, we’ll come to that. So, Mr. Kenny, you must be serving many clients right now on AI, right? And every one of us is getting stunned by… the progress and acceleration of capability that is happening week by week, basically, right? And that also scares us about what is coming next, right? And when it comes to that level, there are two words somebody uses to define AGI. One is consistency across domains: it will be so general that it will be consistently performing across domains. And the second part is that it will be reliable as well. Currently, sometimes it doesn’t have an answer and it throws output anyway, and that’s why hallucination happens, basically. So consistency and reliability, that’s what AGI will bring to the table, basically. So it will solve a lot of the problems that we see right now; we have also been getting stunned by the things that it can do. So there are routes to achieve AGI, which will lead us to AGI, basically. So how do you think, from your perspective, the journey will take us there?

Mr. Kenny Kesar

So, you know, I agree with the panel on a couple of things we talked about in terms of where we’re getting to, models evolving. But you bring up another component: accuracy. I’ll talk about accuracy first, and then I’ll come back to the disruption which is happening in the market. Now, the epitome of accuracy is five nines. For AI to get from 90% to 99%, it took five to ten years. Now, every nine that you add is another year or two years, to the point where you get to 99.99 and more nines. So every nine that you’re adding has a time frame to it. And with the number of nines that you add, you get closer to general intelligence, because that’s what is going to look like the human brain.

I’ll take the topic of autoregression that you talked about. Any regression: AI is right now built on regression. It’s built on the learnings of the neural network, the neural network maturing on information that it sees. But the human brain is also inventing. It’s researching. So when AI really gets to the point of being able to research and bring new ideas to life the way a human brain does, you’re getting closer to intelligence. Now, the disruption in the market that you’ve seen, with announcements across the different players which dominate the AI market, is creating a disruption in the industry, and I think it’s the right disruption. It’s the disruption that the word processor did to the typewriter, what computers did to the word processor, and what cloud did to the data center.

This is another such shift, but it’s much faster, because it’s more pervasive and it impacts everybody in life. So the fact is people are talking about how it translates to them. When I say how it translates to me, it’s about how we structure processes. Everybody agrees, and I agree, that accuracy is a work in process. And since accuracy is a work in process, we have to be really mature about… the use cases that we put onto it. We have to look at the human pyramid, what components of the pyramid you’re going to look at. So the way we are advising our clients, and what we’re doing ourselves, is maker jobs, which are basically repetitive jobs with little context.

AI does those very well, but you create a controller for these autonomous ones. So a combination of probabilistic and deterministic is what’s going to be the near future, as we get to more and more deterministic when we get to general intelligence, because from a human perspective, it’s mostly deterministic.

Mr. Vinayak Godse

Right. Yeah. So, thank you all for putting some level of clarity on what this means. So at the end of the day, AGI is, like they say, attention, right? The ability to give attention to all possible things that people, millions and billions of people, are asking questions about. But as you rightly say, the context matters. So it’s not only attention; it should be contextual to your requirements and the things that you do, right? And the third important part is reasoning, and the last six months have been great months for the reasoning that it brings to the table, basically. So my question is, and any of you can answer this: for achieving all of these things, why does compute become so important? Why do you need this much compute? Why are trillions of dollars being invested to make sure that it gives attention to each and every problem better, that it is contextual, that it can reason, and at the same time has low latency, as I talked about? So the role of compute: what is the role of compute in this? Any of you.

Simonas Cerniauskas

Yeah, so, if I may start, and of course please add on. Currently we are at a super high cycle, let’s say, of those investments, and most of us are also wondering: is it a bubble, or when will it deflate a bit, et cetera? Is it really, in some cases, sustainable? Every one of us most likely has our own opinion. But still, this race to be, let’s say, number one, this belief that if you are number one you will remain number one, and this momentum, I think, plus huge appetite, all this hype, definitely brings much, much more money to the table than we could ever imagine. And, you know, at the same time, it depends a lot, of course, on the algorithms, how efficient they will be. All of us most likely remember this DeepSeek moment last year, and there are also other models which are much more efficient. So, you know, at some point we might understand that it’s overestimated, overinvested.

At the same time, I remember one of Zuckerberg’s quotes, where he said: okay, in the worst-case scenario, I will, you know, have overcapacity for a couple more years and then I will use it.

Mr. Simonas Satunas

So my humble opinion is that compute is one element in a chain of elements and that sometimes we treat this element as the only one. Let’s explore a metaphor. Let’s imagine that we are in the 19th century and a prophet arrives and he tells us, okay, in five years, a new technology will emerge that will enable you to arrive from Delhi to Bangkok in less than an hour. But I don’t know what the technology is. Maybe it’s a ship, maybe it’s a car, maybe it’s a train, maybe it’s an airplane, but we must be prepared. So everyone is trying to be prepared and to build the right infrastructure. So let’s look at the structure. The problem is everyone thinks about it as something else.

So one will build an airport, and another will build rails, and another will build boats. I think that we are in this moment. We know that AGI will arrive. We know that it is soon, and we know that we must be prepared. Compute is one of the elements that is necessary, but energy is also important, and heating and cooling are also important. Data is extremely important. Implementation is important. Language is important, in India as well. I think that one of the elements that we are not investing enough in is the human element. Think about critical thinking, for example. I don’t know when AGI will arrive, but I know that already now it is very important for us to raise critical thinking among the public.

When you hear something in the news, when you see something, was it made by AI? What is the manipulation that is being forced upon me? So I think that investing in education is not less critical than investing in computing.

Mr. Vinayak Godse

And then, on another element you talked about, I want to come to you on this. There is a very interesting discussion about system one and system two thinking: humans are more intuitive in terms of response, and system two is more logical, and AI is probably helping with that, basically. But there is latency, and that is an important area, and that’s why they are putting a lot of effort into improving the compute such that the latency of system two thinking is also lower, so that your intuitive thinking can improve with that, basically. But it’s not only the compute; the perception, the ambient, the senses, the emotions, all of that also matters a lot, and that’s where the limitations of language-based models are getting exposed, basically. And you did talk about that in your initial remarks. Can you just throw light on that?

On the language? On the different types of models, right? Ambient, compute for that matter, the world models that people talk about, so…

Ms. Alexandra Bech Gjørv

Well, I just wanted to first agree with the… Nir, sorry. You know, if you are a government, then democratic access to compute is a big topic, and I think you can really get lost in just investing in compute power. So investing in skills, in leading-edge technology understanding in your own country, and participating in the regulatory approach matters. Because one of the things that I care about is that everybody says there should be human oversight, but you know that once you get into these dilemma situations, like what should happen in a car accident, humans are not very good at understanding risks, and humans are not very good at really making ethical decisions. They tend to go as far as, you know, do your best and then let moral luck decide who gets lost. But in machine-driven systems, you actually have to make decisions about those things. So I think educating also our politicians, so they know that you have to make the hard choices, is important, because otherwise the machines will make them for you, and they will continue our biases, and, you know, it will not end well.

But then I just wanted to share a little story that I heard. You know, Michael Lewis, the guy with Moneyball and everything, he has this anecdote that in the Basketball Association in the States, they started video surveillance, and the coaches were all making racist decisions and home-team decisions. And by showing the videos and by showing the statistics, the next season they couldn’t find any bias at all. So I think that’s a good example of how the machines make people better, whereas we’re not able to better ourselves over time. So I just thought this was a nice anecdote for this.

Mr. Vinayak Godse

Thank you. And I’ll come to Kenny. So, as we are trying to solve problems of security and privacy with the current big capability of AI, we are struggling to understand what it means for security, what it means for privacy, and suddenly there is significant acceleration happening. So what are we doing right now for security and privacy that could help us graduate as more and more powerful models come in, or any other things, basically? Can you just help us?

Mr. Kenny Kesar

Yeah, I think, on security, as we evolve (and we talked about compute), compute gets bigger, context gets bigger, we get smarter in terms of what AI can do, and definitely the same AI that can generate can pose more sophisticated attacks. And when we get to AGI, right, the biggest thing is I could be emulating a human. Let’s say in a company, I could emulate a CEO and make a decision, because I’m getting so close to being natural. The threat is real. Now, even today, let’s say without AI, you need to be just a step ahead of the bad actors, the people who are into cybercrime. You just have to be a step ahead. And similarly, we talked about, you know, we mentioned the human portion, right?

That the human portion needs to get more educated, where there is going to be a set of humans who are going to use the same AI to build better agents to fight them. So now it’s a question of the tooling that you have at hand. Even today, it’s the tools; it’s a human who’s building tools to fight your cyber threats. Imagine, in the next era… It’ll become nearly close to science fiction when agents try locking humans out. But that’s, I would say, still science fiction. The fact is, as we evolve, we need to right-size the solution, and that’s how we will manage compute too. You don’t use an i7 computer to do a simple calculator task of adding two numbers, right?

You use a calculator. So in the context of the world, we’re going to have SLMs, small language models, that will do smaller things so that we can manage compute. You have the bigger models that will solve world hunger, in terms of how we work with different levels of machines and processing. I think there will be tiering. Right now, we were talking about it being a fight over who’s first. So with the fight to be first: bigger, better, more elaborate. But as it evolves, you’ll get the right-sized fit for each task. Only then will it be commercially viable. AI is not commercially viable today; the costs outweigh the ROI.

Mr. Vinayak Godse

Yeah, the current cost is quite significantly higher. You can do a POC, but… once you put it into a production environment, the token cost is too high relative to the ROI. So, Nir, I want to come to you. There is an established understanding of security, privacy, safety and ethics, right? And that’s the paradigm that we at least try to understand right now. But would AGI be an altogether different paradigm? Would the concepts of security and privacy be foundationally very different from what we discussed right now?

Mr. Simonas Satunas

So, as I see it, when we try to deal with the risks that AI poses, we distinguish between four different levels. The first level is the classical risks, like privacy, security, cyber fraud: for every technology that we have had since the 90s, we need to explain how it meets the current risks in that matter, and AI is much more powerful and poses a lot more risks. But these are the kinds of risks that, when we design products, we know how to deal with. Above it there is a level of human health and mental health, and we are finding out that AI solutions can be quite problematic for mental health, can cause a lot of damage in some cases, and this is something that is not yet well understood and investigated. Above that, there is a social level.

What does it do to the empathy between people? What does it do… Normally people say, oh, I see that it’s bad for my kids; they are experiencing bullying or addiction. Usually what’s bad for your kids is also bad for you, and we understand that these are complications that we didn’t think about when we code. And the higher level is a macro level: what does it do to society? What does it do to democracy? I think that several countries are now experiencing foreign manipulation, and it is very easy to run campaigns that are built on fake news, and we see that manipulation can become very problematic. So I think that a strategy, a national strategy and an international strategy, should address all these levels, and all these levels have mitigations, but they are costly and they need collaboration.

So we need to be in close collaboration in order to mitigate these risks.

Mr. Vinayak Godse

It’s good, the way you put the structure, right? The things it would do to us, our brains, the things that will impact us individually, we discussed that in one of the sessions that we hosted on neuroscience and AI: what this means for the brain development process if we are using AI for every small thing that we want to do, whether the brain development process plateaus for that matter, what it will mean for society, and then what the macro kind of impact is. Do you want to add something on that?

Ms. Alexandra Bech Gjørv

Yeah, I just, sorry, I just want to build on that. It’s not just targeted manipulation, or the things that we see in our kids, and somebody walking around with a button called Friend, and that’s the only friend that you need. In the well-structured geopolitical context, there is also the ability to create completely different information universes. You don’t need to be neurologically strange; you just see a completely different view. We just published a paper in Science on these agent swarms, and I am just reading a book about the Ukraine and Russia war going on now, and how large populations are overpowered by totally different images of the world from what we have. And at the very least, obviously, your defense systems need to be hardened against those kinds of manipulations, but it’s also, you know, actually an offensive strategy to find good bots that enter those universes.

It’s an actual battleground in and of itself, and it’s very strange to think about the world in that way, but I think you’re very naive if you don’t start systematically working on how you make your conviction of what the world is like also part of the people that you need to somehow, hopefully not defeat, but relate to and convince that things can be better. So it’s not just a technological challenge. I would say it’s a huge mental leap for most of us.

Mr. Vinayak Godse

So, Simonas, the question is: the more we use, the more we become dependent on AI systems, right? And people’s ability to think critically will go down, basically, right? The speed will increase, the dependence will grow, and then AI becomes more powerful, for that matter, right? So with what we see in terms of misinformation, disinformation and deepfakes, there will probably be different kinds of cognitive warfare that may happen. So how do you see such challenges in society? You talked about society, or the individual for that matter. So what kind of implications will it have for individuals, for society, and overall for the way the world is organized?

Simonas Cerniauskas

Yeah, so, absolutely. So basically all those layers and all the dependencies, like you rightly stated: critical thinking, of course, is one, but also awareness, education, and, you know, the skills and abilities for people to understand things. I think for this audience everything is more or less self-obvious, but, you know, when you start talking to people in the streets, or from different backgrounds, then you realize that what is self-obvious for you might be completely different for another person. To find the ways, I would say, to educate them, to basically help them identify the threats, that’s one of the key priorities, and also an obligation, I would say, on our side.

Mr. Vinayak Godse

One of the important challenges of this critical thinking which I come across is this: critical thinking is nothing but your ability to give attention to various different dimensions, nuances, different perspectives, different views, basically, right? Whereas it takes a tremendous amount of effort for me to become a critical thinker, AI solves that quite easily for me. It can bring me all the attention, all the dimensions, all the nuances, all the viewpoints; I can quickly get access to them, right? So even for critical thinking, Kenny, the question for you is: you will be depending too much on AI as well, right? So we need to know the distinction, what do you mean by critical thinking? Critical thinking is not just getting information, giving attention, but critical thinking is what?

So that question is probably a very important question to ask.

Mr. Kenny Kesar

Critical thinking is what is very necessary for us to innovate further. So the biggest issue that the AI world is facing: 30% of the content it is consuming is AI-generated already. So basically you’re feeding back, and it’s learning on the same model, when originally it was learning on artifacts that were built through different thinking processes. So I would say it’s both a risk and a boon. It’s a boon because it gets work done. But over time it’s a risk that we will stop evolving, because if we don’t exercise the brain as a muscle, if we don’t exercise it and don’t build those neurons which really influence critical thinking, it will actually be a very big loss to society.

So I would say: general intelligence, everybody is asking for it. Now, how do we make sure that as AI and computers get general intelligence, we’re not losing our own intelligence, the intelligence needed to create that general intelligence in the first place? It’s a vicious cycle. It’s a question which we’re debating, which we’re trying to answer ourselves. Everybody has perspectives, but it’s something that I think about. Do I have an answer to it? No. But I feel that critical thinking, on both sides, is something that we really need to critically think about.

Mr. Vinayak Godse

Yeah. So with everything that you think of as a solution, there is always this challenge of what it means in this new paradigm, which is important. So now, for the concluding part of this discussion, and this is a question to each of you to discuss briefly: we know we have been doing security, privacy and safety in particular ways, right? But as this paradigm is new, can we think about some anchor control right now that we should be mindful of? Because when AI was getting built, only after three years did we start talking about AI governance and all these things. So is there a way for us to think about some kind of anchor control, some idea, some concept, basically, that could help us navigate the challenges AGI could throw at us? I can start with you, briefly, and each of you can comment on this.

Simonas Cerniauskas

Yeah, well, of course, you know, there are some technical things, like, you know, watermarks or something, labeling and other technical features that could help us a bit to identify at least some threats… Then we can also talk about regulatory measures, but, you know, that’s a broader topic for further discussion. Especially here in Europe we tend to regulate and overregulate everything, but in a way, I think at least some measures here can also be really viable and really reasonable.

Mr. Simonas Satunas

Well, I come from a very small country. Israel is so small that it’s like a pin on the map, and therefore our regulatory approach is that we are unable to determine the global regulation, and in this AI race I think that what is more important is the global regulation. So, since we are a very tiny country, we must work with positive tools and say, okay, we cannot affect the regulation, but how can we work together with the AI developers in order to make the personality of the AI more moral, more ethical? How can we put egalitarianism and equality into the consideration? How can we avoid bias? And I think that it makes us work together with the industry and together with academia in order to find out about new consequences.

I think that in many cases the giants, the big tech, do not aim towards unethical outcomes, but they work towards financial incentives that make AI behave in a very immoral way. If I take, for example, the conflict in Myanmar, in Burma: we saw that Meta was not actively promoting violence in Myanmar, but the algorithm of Meta was designed to attract attention in a way that made the more violent posts much more viral and made violence flourish. So if we are able to promote a dialogue, and if we are able to be together with the industry in the development of new AI, sometimes we will be able to make AI more ethical.

Mr. Vinayak Godse

So, Alexandra, your view. One part is the anchor control, the idea or concept; but the second part is, how do you get in early? How do you get in early in the game, right? When AI happened, we are now, in 2025 and 2026, discussing responsibility and alignment and adoption and governance, basically, right? So in the AGI discussion, the anchor controls are ideas and ways for us to get into that discussion early.

Ms. Alexandra Bech Gjørv

Well, I think at least you need to work on resilience and robust rollback mechanisms. A little bit like what we’re experiencing now in Europe, where we all have to practice living without electricity. You know that it’s a realistic option that somebody sabotages your electricity, and then you look at, well, how dependent are we really, and what are the alternatives, you know? And you plan from a point of view where you not only work to reduce risk, but you really work to reduce the consequences of those risks occurring. If you work on the traditional risk matrix, it’s always, you know, about avoiding bad outcomes; but making the bad outcomes less bad, that’s something that, at least we think, the new realities are propelling, that kind of thinking, and I think that’s important.

Mr. Vinayak Godse

Kenny, your voice on this?

Mr. Kenny Kesar

Sure. Actually, the way we look at it in terms of AI, from ethical AI to biases to data privacy, it’s very akin to what a human would do even today. Today we have a standard operating procedure: we review for biases, we review for content. You know, in our organizations, we have teams that manage this. And the other thing is, we train people on ethical practices, on non-bias and things like that. So ultimately, AI is very similar to that, where we will have, in today’s world, for the lack of a better word, what I call an AOP instead of an SOP, an agent operating procedure or AI operating procedure, where we have to train AI not to be biased.

So I feel that there is a big industry in the offing, which is going to manage and create models, LLMs, to manage or to validate that the responses from, you know, your common models are ethically right, non-biased. Because today, as organizations, we invite experts from outside to come and see our practices: whether we are following ethical practices, whether we are transparent, a number of those things. Very similarly, as we mature towards more general intelligence and more ways of working, I feel that these control structures will come in cybersecurity, will come in the ethical use of AI, the unbiased use of AI. So ultimately it will be a checks-and-balances system, and we will see innovation in these areas.

That is how we feel it. It’s an evolving area. Let’s see how it happens.

Mr. Vinayak Godse

Thank you, all of you, for really helping us understand the meaning of this concept of AGI, how it will pan out from now, and what kind of challenges it will throw at us. There are definitely opportunities, which we don’t have time to discuss, in terms of what it will bring to us. But then, what could we start doing right now? And this was definitely one of the important conversations. Hopefully this helped you understand what we are talking about with AGI today. Thank you. Join me in giving a big hand to my co-panelists for helping us understand. Thank you, Simon. Thank you, Nir.

Thank you. We have a photo shoot. Alexandra, we need you to come here for the photo shoot. I also request the fireside panelists, Hendrikus sir and Narendra sir, to please join us for the photo shoot. Thank you. Before we commence the Fireside session, I would like to announce the launch of the AI Cyber Security Terminal. This is published today. Thank you.

Related Resources: Knowledge base sources related to the discussion topics (30)
Factual Notes: Claims verified against the Diplo knowledge base (4)
Confirmed (high)

“The panel discussion on Artificial General Intelligence included Vinayak Godse and Simonas Cerniauskas among the speakers.”

The knowledge base lists the same panelists for the AGI discussion, confirming their participation [S4] and [S1].

Confirmed (high)

“Roughly 50 % of Israelis said they trust generative‑AI tools more than friends.”

A poll cited in the knowledge base shows that 50 % of respondents consider trust foundational for long-term success of transformative technologies, matching the reported figure [S70].

Additional Context (medium)

“A three‑to‑seven‑year horizon is projected for AGI to perform any professional human task with comparable accuracy.”

Other sources note that many experts expect AGI development to take about five years, and some leaders explicitly favor slower timelines, providing nuance to the 3-7 year estimate [S24] and [S52].

Additional Context (medium)

“The industry may be heading toward an “over‑capacity” situation for a couple of years, with excess compute resources.”

Discussion about a global compute divide highlights that regions lacking compute fall further behind, underscoring concerns about mismatched capacity and potential over-supply in well-resourced areas [S77].

External Sources (77)
S1
Artificial General Intelligence and the Future of Responsible Governance — – Ms. Alexandra Bech Gjørv- Mr. Simonas Satunas – Simonas Cerniauskas- Mr. Simonas Satunas
S2
Artificial General Intelligence and the Future of Responsible Governance — -Mr. Vinayak Godse- Moderator/Host of the panel discussion on AGI (Artificial General Intelligence)
S3
Subrata K. Mitra Jivanta Schottli Markus Pauli — Gandhi was vehemently opposed to Partition, an outcome which other senior Congress leaders like Jawaharlal …
S5
Artificial General Intelligence and the Future of Responsible Governance — – Simonas Cerniauskas- Mr. Simonas Satunas- Mr. Kenny Kesar – Simonas Cerniauskas- Mr. Simonas Satunas- Ms. Alexandra B…
S6
Artificial General Intelligence and the Future of Responsible Governance — – Mr. Kenny Kesar- Ms. Alexandra Bech Gjørv – Ms. Alexandra Bech Gjørv- Mr. Kenny Kesar
S7
Artificial General Intelligence and the Future of Responsible Governance — – Mr. Kenny Kesar- Ms. Alexandra Bech Gjørv – Mr. Simonas Satunas- Ms. Alexandra Bech Gjørv – Ms. Alexandra Bech Gjørv…
S8
https://dig.watch/event/india-ai-impact-summit-2026/artificial-general-intelligence-and-the-future-of-responsible-governance — So my humble opinion is that compute is one element in a chain of elements and that sometimes we treat this element as t…
S9
https://dig.watch/event/india-ai-impact-summit-2026/ai-automation-in-telecom_-ensuring-accountability-and-public-trust-india-ai-impact-summit-2026 — I mean two steps globally. So the proof of concept we are trying to do in Southeast Asia is actually prove that data can…
S10
Searching for Standards: The Global Competition to Govern AI | IGF 2023 — Collaboration with industry was deemed essential in the regulation of AI. Industry was seen as a valuable source of reso…
S11
AGI moves closer to reshaping society — Therewasa time when machines that think like humans existed only in science fiction. But AGI now stands on the edge of b…
S12
https://app.faicon.ai/ai-impact-summit-2026/how-the-global-south-is-accelerating-ai-adoption_-finance-sector-insights — And in terms of regulation, Reserve Bank’s approach has been largely tech neutral. It’s tech agnostic in some sense, bec…
S13
https://dig.watch/event/india-ai-impact-summit-2026/secure-finance-risk-based-ai-policy-for-the-banking-sector — When we dug deeper we came to know that initially it was deployed in 2004 by one entity and then slowly slowly it was th…
S14
The new European toolbox for cybersecurity regulation — Additionally, strategic regulations are needed to reduce dependence on specific manufacturers, particularly from China, …
S15
OPENING SESSION | IGF 2023 — Large language models require significant compute power and data
S16
Ethical principles for the use of AI in cybersecurity | IGF 2023 WS #33 — Dennis Kenji Kipker:Yeah, of course. When developing AI, we have high impact privacy risks. And I think this is quite cl…
S17
Networking Session #60 Risk & impact assessment of AI on human rights & democracy — Matt O’Shaughnessy: Thank you so much, David. And it’s great to be here, even just virtually. So, you asked about the…
S18
WS #31 Cybersecurity in AI: balancing innovation and risks — Even with good data, the human creating the algorithm must ensure fairness. This is a key point in addressing bias and e…
S19
What policy levers can bridge the AI divide? — Lithuania advocated for regulatory sandbox approaches with differentiated regulation based on risk levels, leveraging sm…
S20
Comprehensive Discussion Report: AI’s Existential Challenge to Human Identity and Society — Sociocultural | Human rights Tracey expresses concern that over-reliance on AI for decision-making and problem-solving …
S21
WS #110 AI Innovation Responsible Development Ethical Imperatives — Ricardo Israel Robles Pelayo: Thank you very much. Good afternoon, everyone. It is an honor to be here and share a refle…
S22
Artificial General Intelligence and the Future of Responsible Governance — Satunas argues that while compute gets most attention, achieving AGI requires a comprehensive approach including energy …
S23
Artificial General Intelligence and the Future of Responsible Governance — Compute is just one element; energy, data, implementation, language, and human education are equally critical Speakers …
S24
Folding Science / DAVOS 2025 — Mentions that AGI development may take a five-year timescale rather than the one or two years some are predicting. Time…
S25
HIGH LEVEL LEADERS SESSION I — Another key point highlighted in the discussions was the need for dialogue and consensus on data flow. Data has become t…
S26
WS #103 Aligning strategies, protecting critical infrastructure — How to balance security needs with privacy and human rights concerns in policy approaches
S27
Big data for prevention: Balancing opportunities with challenges — Conflict prevention largely depends on the availability of timely data and information. Whether it concerns the collecti…
S28
Comprehensive Discussion Report: AI’s Transformative Potential for Global Economic Growth — The conversation’s emphasis on democratization and broad participation indicates that successful AI adoption requires en…
S29
Is the AI bubble about to burst? Five causes and five scenarios — An investment in national security and technological sovereignty Risk is shifted from private investors to the public, …
S30
WS #214 AI Readiness in Africa in a Shifting Geopolitical Landscape — Shikoh Gitau: and I’m really glad to be here. Thank you so much for having me. And apologies for joining in late. So, th…
S31
AI investment shows strong momentum beyond bubble fears — AI investmentis not showingsigns of a speculative bubble, according to theAlibaba Groupchairman. Instead, he argued at t…
S32
Open Forum #38 Harnessing AI innovation while respecting privacy rights — Audience: Thank you so much for your presentation. My name is Hasara Tebi. I’m from Mawadda Association for Family Sta…
S33
Ethical principles for the use of AI in cybersecurity | IGF 2023 WS #33 — Noushin Shabab:principles. Yeah. That’s actually a very good question. The most, the two most important principles for m…
S34
Artificial Intelligence & Emerging Tech — Another significant consideration is the protection of data privacy. In an age characterised by concerns about data priv…
S35
Privacy concerns intensify as Big Tech announce new AI-enhanced functionalities — Apple, Microsoft, and Google arespearheadinga technological revolution with their vision of AI smartphones and computers…
S36
Artificial General Intelligence and the Future of Responsible Governance — Mr. Kenny Kesar introduced the concept of accuracy progression through “five nines,” explaining that while AI evolved fr…
S37
Artificial General Intelligence and the Future of Responsible Governance — Satunas provides a simple definition of AGI as systems capable of performing any human task with professional-level accu…
S38
https://app.faicon.ai/ai-impact-summit-2026/artificial-general-intelligence-and-the-future-of-responsible-governance — So my humble opinion is that compute is one element in a chain of elements and that sometimes we treat this element as t…
S39
Ethical principles for the use of AI in cybersecurity | IGF 2023 WS #33 — Dennis Kenji Kipker:Yeah, of course. When developing AI, we have high impact privacy risks. And I think this is quite cl…
S40
Ethics and AI | Part 6 — The EU Act categorizes AI systems into different risk levels—unacceptable, high-risk, and low-risk—each with correspondi…
S41
WS #31 Cybersecurity in AI: balancing innovation and risks — Even with good data, the human creating the algorithm must ensure fairness. This is a key point in addressing bias and e…
S42
Networking Session #60 Risk & impact assessment of AI on human rights & democracy — Matt O’Shaughnessy: Thank you so much, David. And it’s great to be here, even just virtually. So, you asked about the…
S43
Open Forum #38 Harnessing AI innovation while respecting privacy rights — Jimena Viveros: Hello. I don’t know if anyone can hear me. Yes? Okay, great. So it is great to be here, sorry for the …
S44
Smart Regulation Rightsizing Governance for the AI Revolution — The discussion began with a notably realistic and somewhat pessimistic assessment of global cooperation challenges, but …
S45
Defying Cognitive Atrophy in the Age of AI: A World Economic Forum Stakeholder Dialogue — But even those skills can be eroded without regular practice and engagement. Core cognitive capabilities, such as judgme…
S46
Comprehensive Discussion Report: AI’s Existential Challenge to Human Identity and Society — Sociocultural | Human rights: Tracey expresses concern that over-reliance on AI for decision-making and problem-solving …
S47
Panel Discussion AI in Digital Public Infrastructure (DPI) India AI Impact Summit — The tone was consistently optimistic and forward-looking throughout the conversation. Speakers expressed excitement abou…
S48
Need and Impact of Full Stack Sovereign AI by CoRover BharatGPT — Overall Tone:The conversation maintained an optimistic and patriotic tone throughout, with both participants expressing …
S49
Emerging Markets: Resilience, Innovation, and the Future of Global Development — The tone was notably optimistic and forward-looking throughout the conversation. Panelists consistently emphasized oppor…
S50
From India to the Global South: Advancing Social Impact with AI — The discussion maintained an overwhelmingly optimistic and energetic tone throughout. It began with excitement about you…
S51
Comprehensive Discussion Report: AI’s Transformative Potential for Global Economic Growth — The conversation maintains a consistently optimistic and enthusiastic tone throughout. Both speakers demonstrate genuine…
S52
Comprehensive Discussion Report: The Future of Artificial General Intelligence — Both speakers distinguished their positions from extreme “doomerism” while acknowledging serious risks that require care…
S53
Comprehensive Summary: AI Governance and Societal Transformation – A Keynote Discussion — The tone begins confrontational and personal as Hunter-Torricke distances himself from his tech industry past, then shif…
S54
OPENING STATEMENTS FROM STAKEHOLDERS — Discussions on artificial intelligence show that technological development is not without risk.
S55
Engineering Accountable AI Agents in a Global Arms Race: A Panel Discussion Report — The discussion maintained a thoughtful but somewhat cautious tone throughout, with speakers acknowledging both opportuni…
S56
AI and Digital Developments Forecast for 2026 — The tone begins as analytical and educational but becomes increasingly cautionary and urgent throughout the conversation…
S57
Advancing Scientific AI with Safety Ethics and Responsibility — The discussion maintained a collaborative and constructive tone throughout, characterized by technical expertise and pol…
S58
Towards a Resilient Information Ecosystem: Balancing Platform Governance and Technology — The discussion maintained a professional, collaborative tone throughout, characterized by constructive problem-solving r…
S59
Main Session 2: Protecting Internet infrastructure and general access during times of crisis and conflict — The tone of the discussion was largely serious and concerned, given the gravity of the issues being discussed. However, …
S60
Evolving Threat of Poor Governance / DAVOS 2025 — The tone was largely serious and analytical, with panelists offering thoughtful insights on complex governance challenge…
S61
Law, Tech, Humanity, and Trust — The discussion maintained a consistently professional, collaborative, and optimistic tone throughout. The speakers demon…
S62
Safeguarding Children with Responsible AI — The discussion maintained a tone of “measured optimism” throughout. It began with urgency and concern (particularly in B…
S63
AI and Human Connection: Navigating Trust and Reality in a Fragmented World — The tone began optimistically with audience engagement but became increasingly concerned and urgent as panelists reveale…
S64
Transforming Agriculture: AI for Resilient and Inclusive Food Systems — The tone was consistently optimistic yet pragmatic throughout the conversation. Speakers maintained an encouraging outlo…
S65
Impact & the Role of AI How Artificial Intelligence Is Changing Everything — The discussion maintained a cautiously optimistic tone throughout, balancing enthusiasm for AI’s potential with realisti…
S66
AI is here. Are countries ready, or not? | IGF 2023 Open Forum #131 — Robert Opp: So, please feel free to join us at the table. Don’t have to sit in the gallery. This is a round table after a…
S67
Policy Network on Artificial Intelligence | IGF 2023 — Audience: Good morning. Good morning. Jingbo from UN University. Actually, this is much more intimate so we can communica…
S68
Keynote-Demis Hassabis — This discussion features a keynote address by Sir Demis Hassabis, co-founder and CEO of Google DeepMind and Nobel laurea…
S69
The Dawn of Artificial General Intelligence? / DAVOS 2025 — In summary, the discussion emphasized the complex challenges and opportunities presented by AGI development, with no cle…
S70
What Is Sci-Fi, What Is High-Tech? / Davos 2025 — She references a poll showing that 50% of respondents believe trust is foundational for long-term success in introducing…
S71
Workshop 6: Perception of AI Tools in Business Operations: Building Trustworthy and Rights-Respecting Technologies — Katarzyna Ellis from EY Poland presented compelling research data that illustrated the dramatic transformation occurring…
S72
Generative AI and Synthetic Realities: Design and Governance | IGF 2023 Networking Session #153 — Diogo Cortiz: I totally agree with Heloisa about her intervention. So I would like to switch a little bit my comments reg…
S73
Shaping AI’s Story Trust Responsibility & Real-World Outcomes — Well, I think we have to look perhaps further out in five years because we’re building something that should work for so…
S74
Debating Technology / Davos 2025 — Yann LeCun: So my colleagues and I certainly understand where we are going. I can’t claim to understand what other peo…
S75
Building the Next Wave of AI_ Responsible Frameworks & Standards — yeah so I think to the point Ankush was mentioning AI technology is fundamentally designed on probabilistic model and an…
S76
Knowledge Café: WSIS+20 Consultation: Towards a Vision Beyond 2025 — Audience: Oh, thank you. Yeah, just curious to know how many UN agencies are involved in WSIS, and UNGIS, it stands for …
S77
WS #462 Bridging the Compute Divide a Global Alliance for AI — Alisson O’Beirne reinforced this analysis, noting that “as folks are left behind and as there’s a lack in compute capaci…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Mr. Simonas Satunas
5 arguments · 161 words per minute · 1,149 words · 426 seconds
Argument 1
AGI as AI that can perform every human task at professional level
EXPLANATION
Satunas defines AGI as a system capable of carrying out any human task with the same accuracy and professionalism as a qualified human professional. He notes that this definition is deliberately simple to make the concept digestible for a broad audience.
EVIDENCE
In his opening remarks he states, “my definition of AGI is very simplistic … AGI will be something that can perform every human task at the level of accuracy and professionality of a human professional” [21].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The definition is corroborated by the panel report which states that AGI will be able to perform every human task with professional-level accuracy [S1] and by a detailed summary of his remarks confirming this simple definition [S4].
MAJOR DISCUSSION POINT
Definition and Timeline of AGI
Argument 2
Expectation that AGI could appear within 3–7 years
EXPLANATION
Satunas argues that the milestone of achieving AGI is likely to be reached within a three‑to‑seven‑year horizon, based on recent advances and growing public trust in generative AI tools. He frames this as a near‑term prospect rather than a distant future.
EVIDENCE
He says, “I would say that it’s a matter of 3 years to 7 years until we reach that milestone” [21].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Satunas’ timeline is supported by the same report, which records his estimate of a 3-to-7-year horizon for reaching AGI [S1] and by the extended analysis of his statements in the discussion summary [S4].
MAJOR DISCUSSION POINT
Definition and Timeline of AGI
DISAGREED WITH
Mr. Vinayak Godse, Simonas Cerniauskas, Mr. Kenny Kesar
Argument 3
Compute is only one element; energy, data, implementation, and human skills are equally critical
EXPLANATION
Satunas stresses that while compute power is essential, other factors such as energy supply, high‑quality data, implementation frameworks, language considerations, and especially human critical‑thinking skills are equally vital for realizing AGI. He warns against treating compute as the sole bottleneck.
EVIDENCE
He uses a metaphor about different transport technologies and lists “Compute is one of the elements … energy is also important … Data is extremely important … Implementation is important … I think that one of the elements that we are not investing enough is the human element” [72-90].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
He emphasizes a multi-factor view of AGI development, a point echoed in the external commentary that compute is just one link in a chain of necessary elements such as energy and data [S8] and reinforced by the panel’s own synthesis of his framework [S4].
MAJOR DISCUSSION POINT
Technical Foundations – Compute, Hardware, Data, Energy
AGREED WITH
Simonas Cerniauskas, Mr. Kenny Kesar, Ms. Alexandra Bech Gjørv
DISAGREED WITH
Simonas Cerniauskas, Mr. Vinayak Godse
Argument 4
Risks span four levels – classical (privacy, cyber‑fraud), health, social, macro (society, democracy) – requiring coordinated mitigation
EXPLANATION
Satunas categorises AI‑related risks into four layers: traditional security and privacy threats, impacts on physical and mental health, social‑level effects such as empathy erosion, and macro‑level threats to democracy and societal stability. He calls for national and international strategies that address each layer in a coordinated way. (The taxonomy is sketched as a simple data structure after this argument.)
EVIDENCE
He outlines the four levels, stating “classical risks like privacy, security, cyber fraud … human health and mental health … social level … macro level … democracy” and argues for a collaborative mitigation strategy [131-139].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The four-level risk taxonomy is documented in the discussion transcript and highlighted in the external summary of his risk framework [S4] as well as in a separate overview of the panel’s risk categorisation [S1].
MAJOR DISCUSSION POINT
Security and Privacy Challenges
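For readers who want the taxonomy in structural form, a minimal sketch follows. The example risks mirror his four categories, while the mitigation notes are illustrative assumptions layered on top, not positions he stated.

```python
# Satunas's four risk levels as a simple lookup structure.
# The "mitigation" entries are illustrative assumptions, not his words.
RISK_LEVELS = {
    "classical": {
        "examples": ["privacy breaches", "security incidents", "cyber fraud"],
        "mitigation": "security controls and data-protection enforcement",
    },
    "health": {
        "examples": ["mental-health impacts", "physical-health impacts"],
        "mitigation": "usage guidelines informed by clinical research",
    },
    "social": {
        "examples": ["empathy erosion", "bullying"],
        "mitigation": "education and platform design standards",
    },
    "macro": {
        "examples": ["threats to democracy", "societal destabilisation"],
        "mitigation": "coordinated national and international strategy",
    },
}

for level, info in RISK_LEVELS.items():
    print(f"{level}: {', '.join(info['examples'])} -> {info['mitigation']}")
```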
Argument 5
Call for global regulation and industry‑academia collaboration to embed morality and avoid profit‑driven unethical behavior
EXPLANATION
Satunas argues that small nations cannot dictate global AI rules, so they must work with industry and academia to embed ethical principles, egalitarianism, and bias mitigation into AI systems. He cites the Myanmar example where platform algorithms amplified violent content despite the platform’s stated intent.
EVIDENCE
He says, “we must work together with the AI developers … to make the personality of the AI more moral … In Myanmar the algorithm of Meta was designed to attract attention in a way that made the more violent posts much more viral” [174-180].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need for industry-academia partnership in AI governance is affirmed by a separate standards-focused report that stresses collaboration as essential for effective regulation [S10].
MAJOR DISCUSSION POINT
Ethics, Bias, Governance, and Anchor Controls
AGREED WITH
Mr. Vinayak Godse, Simonas Cerniauskas, Mr. Kenny Kesar, Ms. Alexandra Bech Gjørv
DISAGREED WITH
Mr. Vinayak Godse, Simonas Cerniauskas, Ms. Alexandra Bech Gjørv, Mr. Kenny Kesar
Mr. Vinayak Godse
2 arguments · 104 words per minute · 1,988 words · 1,138 seconds
Argument 1
Urgency to understand and define AGI now for societal governance
EXPLANATION
Godse warns that societies, especially India, have lagged behind AI developments and must now define AGI to shape governance, security, privacy and ethics before the technology matures. He frames the discussion as essential for preparing policy frameworks.
EVIDENCE
He notes that “if you don’t pay attention now what is coming … we will miss … governing it better” and asks the panel to help define AGI for security, privacy and ethics [1-7].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Broader analyses of AGI’s transformative potential underline the urgency for policy preparation, noting that AGI could reshape societies as profoundly as electricity or the internet [S11].
MAJOR DISCUSSION POINT
Definition and Timeline of AGI
AGREED WITH
Simonas Cerniauskas, Mr. Kenny Kesar, Ms. Alexandra Bech Gjørv, Mr. Simonas Satunas
DISAGREED WITH
Mr. Simonas Satunas, Simonas Cerniauskas, Mr. Kenny Kesar
Argument 2
Emphasizes need for immediate anchor controls (technical safeguards, regulatory steps) to guide AGI development
EXPLANATION
Godse calls for concrete, early‑stage controls—technical, regulatory, and procedural—to steer AGI development toward safe outcomes. He asks the panel to suggest anchor controls that can be applied now.
EVIDENCE
He explicitly asks for “anchor control” ideas and later repeats the request for early safeguards [7] and again at the end of the discussion [172].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Early-stage regulatory measures are advocated in the global AI standards community, which calls for pre-emptive technical safeguards and stakeholder collaboration [S10]; a contrasting view notes that some regulators adopt a technology-neutral stance, highlighting a debate over the timing of such controls [S12].
MAJOR DISCUSSION POINT
Call for Early Governance and Anchor Controls
DISAGREED WITH
Simonas Cerniauskas, Mr. Simonas Satunas, Ms. Alexandra Bech Gjørv, Mr. Kenny Kesar
Simonas Cerniauskas
3 arguments · 132 words per minute · 632 words · 286 seconds
Argument 1
AGI must reason, learn, adapt, transfer knowledge and be non‑narrow
EXPLANATION
Cerniauskas outlines the core capabilities that most definitions of AGI share: reasoning, learning, adaptation, knowledge transfer, and a breadth that goes beyond narrow, task‑specific AI. He suggests these traits distinguish AGI from current systems.
EVIDENCE
He lists these traits: “the smarter AI should reason … learn … adapt … transfer knowledge … shouldn’t be very narrow” [15-18].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
A recent overview of AGI capabilities describes exactly these traits-reasoning, learning, adaptation, and cross-domain knowledge transfer-as distinguishing AGI from narrow AI [S11].
MAJOR DISCUSSION POINT
Definition and Timeline of AGI
Argument 2
Massive compute investment fuels progress but may be a bubble
EXPLANATION
Cerniauskas observes that the current surge of investment in compute resources is driving rapid AI advances, yet he questions whether this level of spending is sustainable or over‑estimated, hinting at a possible bubble.
EVIDENCE
He remarks that “we are at super high cycle of those investments … we might understand that it’s overestimated, overinvested” and cites Zuckerberg’s comment about overcapacity [70-71].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The panel’s own analysis notes a “super-high cycle” of compute investment that could be over-estimated, aligning with external commentary on the risk of an investment bubble in AI hardware [S4].
MAJOR DISCUSSION POINT
Technical Foundations – Compute, Hardware, Data, Energy
AGREED WITH
Mr. Simonas Satunas, Mr. Kenny Kesar, Ms. Alexandra Bech Gjørv
DISAGREED WITH
Mr. Simonas Satunas, Mr. Vinayak Godse
Argument 3
Technical controls such as labeling, regulatory measures, and European‑style oversight can provide early safeguards
EXPLANATION
Cerniauskas suggests that practical technical measures—like model labeling—and regulatory frameworks, especially those common in Europe, can act as early protective layers while broader governance discussions continue. (A toy labeling sketch follows this argument.)
EVIDENCE
He mentions “technical things like labeling … regulatory measures … Europe tends to overregulate” as possible early safeguards [173-174].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Industry-academia collaboration on standards is highlighted as a pathway to early safeguards, and European regulatory approaches are cited as examples of proactive oversight in AI governance [S10][S14].
MAJOR DISCUSSION POINT
Ethics, Bias, Governance, and Anchor Controls
AGREED WITH
Mr. Vinayak Godse, Mr. Kenny Kesar, Ms. Alexandra Bech Gjørv, Mr. Simonas Satunas
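As a toy illustration of the labeling idea, the snippet below attaches a machine-readable provenance record to a piece of generated content. Real provenance standards (for example C2PA) are far more rigorous; every field name here is an assumption made for illustration only.

```python
import hashlib
import json
from datetime import datetime, timezone

def label(content: str, model_id: str) -> dict:
    """Build a simple provenance record for AI-generated content."""
    return {
        "generator": model_id,  # hypothetical model identifier
        "generated_at": datetime.now(timezone.utc).isoformat(),
        "content_sha256": hashlib.sha256(content.encode()).hexdigest(),
        "ai_generated": True,   # the flag downstream consumers would check
    }

print(json.dumps(label("Example model output.", "example-model-v1"), indent=2))
```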
Mr. Kenny Kesar
5 arguments · 156 words per minute · 1,299 words · 497 seconds
Argument 1
Accuracy improvements (moving from 90 % to 99.999 %) require years; deterministic models will grow with AGI
EXPLANATION
Kesar explains that moving AI accuracy from current levels to near‑perfect performance (five‑nines) is a multi‑year effort, with each additional nine requiring one to two more years. He links higher accuracy to the emergence of more deterministic models that will accompany AGI. (A worked example of the arithmetic follows this argument.)
EVIDENCE
He states “the epitome of accuracy is five nines … for AI to get from 90 % to 99 % it took five to ten years … every nine you add is another year or two” and adds that deterministic models will increase as we approach general intelligence [44-48].
MAJOR DISCUSSION POINT
Technical Foundations – Compute, Hardware, Data, Energy
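To make the heuristic concrete, here is a small worked example under the assumption that each added “nine” of accuracy divides the error rate by ten; the one-to-two-year cost per nine is Kesar’s stated range, and everything else is illustrative.

```python
import math

def nines(accuracy: float) -> float:
    """Count the 'nines' in an accuracy figure, e.g. 0.999 -> 3.0."""
    return -math.log10(1.0 - accuracy)

# Milestones from 90% ("one nine") to the "five nines" epitome. Each step
# cuts the error rate tenfold; by the heuristic, each step costs roughly
# one to two additional years of engineering effort.
for acc in [0.90, 0.99, 0.999, 0.9999, 0.99999]:
    print(f"{acc:.3%} -> {nines(acc):.0f} nine(s), "
          f"about 1 error in {round(1 / (1 - acc)):,} attempts")
```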
Argument 2
Advanced AI can launch sophisticated attacks, impersonate leaders, raising real threats
EXPLANATION
Kesar warns that as AI becomes more capable, it can be used to conduct advanced cyber‑attacks and even impersonate high‑level executives, creating genuine security threats that must be anticipated.
EVIDENCE
He notes “the biggest thing is I could be emulating a human … a CEO and make a decision … the threat is real” [105-107].
MAJOR DISCUSSION POINT
Security and Privacy Challenges
AGREED WITH
Mr. Simonas Satunas, Ms. Alexandra Bech Gjørv, Mr. Vinayak Godse
Argument 3
Tiered model approach (small LLMs for routine tasks, large models for complex problems) to manage compute and risk
EXPLANATION
Kesar proposes a hierarchy of AI models where lightweight language models handle simple, high‑frequency tasks while larger, more powerful models are reserved for complex, high‑impact problems, thereby balancing compute costs and security concerns. (See the routing sketch after this argument.)
EVIDENCE
He describes “small language models that will do smaller things … bigger models that will solve world hunger … I think there will be tiering” [120-126].
MAJOR DISCUSSION POINT
Security and Privacy Challenges
AGREED WITH
Mr. Simonas Satunas, Simonas Cerniauskas, Ms. Alexandra Bech Gjørv
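A minimal sketch of such a router, assuming a crude per-request complexity score. The tier names, thresholds, and relative costs are hypothetical placeholders, not a description of any real deployment.

```python
from dataclasses import dataclass

@dataclass
class Tier:
    name: str
    max_complexity: int   # crude stand-in for a task-difficulty score
    relative_cost: float  # compute cost relative to the frontier model

# Hypothetical tiers: cheap small models first, frontier model last.
TIERS = [
    Tier("small-llm", max_complexity=3, relative_cost=0.01),
    Tier("mid-llm", max_complexity=7, relative_cost=0.10),
    Tier("frontier", max_complexity=10, relative_cost=1.00),
]

def route(complexity: int) -> Tier:
    """Pick the cheapest tier whose capability covers the task."""
    for tier in TIERS:
        if complexity <= tier.max_complexity:
            return tier
    return TIERS[-1]  # fall back to the most capable model

print(route(2).name)  # routine task -> small-llm
print(route(9).name)  # complex, high-impact task -> frontier
```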
Argument 4
Propose AI Operating Procedures (AOP) analogous to SOPs for bias, ethics, and compliance
EXPLANATION
Kesar suggests that organizations should develop dedicated AI Operating Procedures (AOP) similar to traditional SOPs, to systematically audit AI outputs for bias, ethical compliance, and data privacy as AI systems become more autonomous. (A skeletal example follows this argument.)
EVIDENCE
He explains “we will have … AOP … where we have to train AI in terms not to be biased … industry will manage and create models to validate responses” [191-198].
MAJOR DISCUSSION POINT
Ethics, Bias, Governance, and Anchor Controls
AGREED WITH
Mr. Vinayak Godse, Simonas Cerniauskas, Ms. Alexandra Bech Gjørv, Mr. Simonas Satunas
DISAGREED WITH
Mr. Vinayak Godse, Simonas Cerniauskas, Mr. Simonas Satunas, Ms. Alexandra Bech Gjørv
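One way to read the AOP proposal is as an ordered set of gates that every model response must clear before release, much as an SOP enumerates mandatory process steps. The sketch below uses hypothetical stub checks to show the shape of such a procedure; it is not an existing framework.

```python
from typing import Callable

def bias_review(text: str) -> bool:
    # Placeholder: in practice this would run a bias/fairness classifier.
    return True

def privacy_screen(text: str) -> bool:
    # Placeholder: in practice this would scan for personal-data leakage.
    return True

# The AOP as an ordered checklist of gates, each of which must pass.
AOP_CHECKS: list[Callable[[str], bool]] = [bias_review, privacy_screen]

def release(response: str) -> str:
    failed = [check.__name__ for check in AOP_CHECKS if not check(response)]
    if failed:
        raise RuntimeError(f"AOP gate(s) failed: {failed}")
    return response

print(release("example model output"))
```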
Argument 5
Over‑reliance on AI may erode critical thinking, creating feedback loops of AI‑generated content
EXPLANATION
Kesar points out that a growing share of online content is AI‑generated, which can create a feedback loop where AI trains on its own outputs, potentially stalling human critical‑thinking development and innovation. (A toy simulation follows this argument.)
EVIDENCE
He states “30 % of the content we are consuming is AI generated already … we are feeding back and it’s learning on the same model … we will stop evolving because we don’t exercise the brain as a muscle” [165-169].
MAJOR DISCUSSION POINT
Societal Impact – Cognition, Critical Thinking, Misinformation
AGREED WITH
Mr. Simonas Satunas, Mr. Vinayak Godse
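A toy simulation of the loop, under loudly stated assumptions: his roughly 30% figure is treated as the AI share of newly produced content, and the 10% yearly growth in that share is purely illustrative.

```python
ai_share_new = 0.30   # ~30% of new content is AI-generated (Kesar's figure)
growth = 0.10         # assumed yearly growth of that share (illustrative)
human_corpus, ai_corpus = 1.0, 0.0  # arbitrary starting units

for year in range(1, 6):
    ai_corpus += ai_share_new            # this year's AI-generated additions
    human_corpus += 1.0 - ai_share_new   # this year's human-authored additions
    ai_share_new = min(1.0, ai_share_new * (1 + growth))
    total = human_corpus + ai_corpus
    print(f"year {year}: AI share of training corpus = {ai_corpus / total:.1%}")
```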
Ms. Alexandra Bech Gjørv
5 arguments · 148 words per minute · 942 words · 380 seconds
Argument 1
Need low‑latency, energy‑efficient neuromorphic/edge hardware for situational awareness
EXPLANATION
Gjørv argues that achieving human‑like situational awareness requires hardware that can process information with millisecond latency while being energy‑efficient, highlighting neuromorphic and edge computing architectures as essential. (A rough latency budget follows this argument.)
EVIDENCE
She mentions “low latency, energy efficient hardware, neuromorphic and edge computing and architectures beyond auto regression” as necessary for fast, contextual decisions [31-33].
MAJOR DISCUSSION POINT
Technical Foundations – Compute, Hardware, Data, Energy
AGREED WITH
Mr. Simonas Satunas, Simonas Cerniauskas, Mr. Kenny Kesar
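A back-of-envelope budget illustrates why she points to edge placement; every number below is a rough assumption chosen only to show the arithmetic, not a measured figure.

```python
budget_ms = 100           # assumed human-like reaction budget
wan_round_trip_ms = 60    # assumed round trip to a remote data centre
inference_ms = 50         # assumed on-hardware inference time

cloud_total = wan_round_trip_ms + inference_ms  # network hop plus inference
edge_total = inference_ms                       # on-device, no network hop

for path, total in [("cloud", cloud_total), ("edge", edge_total)]:
    verdict = "misses" if total > budget_ms else "meets"
    print(f"{path} path: {total} ms -> {verdict} the {budget_ms} ms budget")
```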
Argument 2
Human oversight and ethical frameworks are essential; machines can inherit and amplify bias
EXPLANATION
Gjørv stresses that human oversight is crucial because AI systems can replicate and magnify existing biases. She illustrates this with a basketball‑referee example where video analytics removed racial bias from decisions.
EVIDENCE
She recounts Michael Lewis’s anecdote about basketball video surveillance eliminating racially biased refereeing decisions, showing how machines can help counteract human bias [96-102].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The importance of human oversight and bias mitigation is reinforced by external discussions on collaborative governance models that stress ethical frameworks and oversight mechanisms [S10].
MAJOR DISCUSSION POINT
Ethics, Bias, Governance, and Anchor Controls
Argument 3
Privacy limits on personal data hinder development of human‑level situational awareness
EXPLANATION
Gjørv notes that building AI with true human‑like contextual understanding requires large amounts of personal data, but privacy regulations and concerns restrict access to such data, slowing progress toward AGI.
EVIDENCE
She says “in order to get there … we have to study a lot of data that would be considered private, personal … so there’s really limits on privacy” [35-37].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Regulatory perspectives note that privacy-centric policies can constrain data availability for advanced AI, offering an alternative viewpoint on the trade-off between privacy and AI progress [S12].
MAJOR DISCUSSION POINT
Security and Privacy Challenges
AGREED WITH
Mr. Simonas Satunas, Mr. Kenny Kesar, Mr. Vinayak Godse
DISAGREED WITH
Mr. Simonas Satunas
Argument 4
Building resilience, robust rollback mechanisms, and reducing consequences of failures are key to future‑proof societies
EXPLANATION
Gjørv advocates for preparing societies to survive disruptions (e.g., electricity outages) by developing resilient systems, rollback capabilities, and contingency plans that limit the impact of AI failures. (A minimal fallback sketch follows this argument.)
EVIDENCE
She draws a parallel to European electricity-outage preparedness, stating “we all have to practice on living without electricity … looking at how dependent we are … planning … making the bad outcomes less bad” [187-189].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
European cybersecurity policy emphasizes resilience, rollback capabilities, and reducing impact of system failures, directly supporting her call for such measures [S14].
MAJOR DISCUSSION POINT
Call for Early Governance and Anchor Controls
AGREED WITH
Mr. Vinayak Godse, Simonas Cerniauskas, Mr. Kenny Kesar, Mr. Simonas Satunas
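A minimal sketch of the “make bad outcomes less bad” idea: keep a rehearsed, non-AI fallback alive and degrade to it when the AI-dependent path fails. The function names are hypothetical.

```python
def ai_dispatch(request: str) -> str:
    # Placeholder for the AI-dependent path; here it simply fails.
    raise TimeoutError("model unavailable")

def manual_fallback(request: str) -> str:
    # Known-good, non-AI procedure that is kept alive and rehearsed.
    return f"queued for human handling: {request}"

def handle(request: str) -> str:
    """Serve the request, rolling back to the manual path on any failure."""
    try:
        return ai_dispatch(request)
    except Exception:
        return manual_fallback(request)

print(handle("approve payment #123"))
```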
Argument 5
AI‑driven manipulation can create divergent information universes, threatening democracy and geopolitics
EXPLANATION
Gjørv describes how AI‑generated content can produce separate, self‑reinforcing information ecosystems that distort public perception, posing risks to democratic processes and international stability.
EVIDENCE
She references a paper on “agent swarms” and the Ukraine-Russia war, noting how large populations can be overpowered by completely different views of reality [146-149].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Analyses of AGI’s societal impact warn that AI-generated content can fragment information ecosystems and pose risks to democratic stability, echoing her concern [S11].
MAJOR DISCUSSION POINT
Societal Impact – Cognition, Critical Thinking, Misinformation
Agreements
Agreement Points
Urgent need for early/anchor controls and governance mechanisms for AGI development
Speakers: Mr. Vinayak Godse, Simonas Cerniauskas, Mr. Kenny Kesar, Ms. Alexandra Bech Gjørv, Mr. Simonas Satunas
Urgency to understand and define AGI now for societal governance
Technical controls such as labeling, regulatory measures, and European‑style oversight can provide early safeguards
Propose AI Operating Procedures (AOP) analogous to SOPs for bias, ethics, and compliance
Building resilience, robust rollback mechanisms, and reducing consequences of failures are key to future‑proof societies
Call for global regulation and industry‑academia collaboration to embed morality and avoid profit‑driven unethical behavior
All panelists stress that, given the rapid progress toward AGI, concrete early-stage safeguards (ranging from technical labeling and regulatory measures to AI-specific operating procedures, resilience planning, and global collaborative regulation) are essential to steer development safely [1][173-174][191-198][187-189][174-180].
Recognition of multi‑layered risks (privacy, security, health, social, macro) and the need for coordinated mitigation
Speakers: Mr. Simonas Satunas, Mr. Kenny Kesar, Ms. Alexandra Bech Gjørv, Mr. Vinayak Godse
Risks span four levels — classical (privacy, cyber‑fraud), health, social, macro (society, democracy) — requiring coordinated mitigation
Advanced AI can launch sophisticated attacks, impersonate leaders, raising real threats
Privacy limits on personal data hinder development of human‑level situational awareness
Urgency to understand and define AGI now for societal governance (including security and privacy)
The speakers converge on a taxonomy of risks, from traditional privacy and cyber-fraud to broader societal and democratic threats, and agree that coordinated, multi-level strategies are required, noting both technical vulnerabilities and privacy constraints [131-139][105-107][35-37][1-7].
POLICY CONTEXT (KNOWLEDGE BASE)
This multi-dimensional risk framing mirrors discussions on data flow governance and the need to balance security with privacy and human rights at international workshops such as WS #103 and IGF sessions, highlighting coordinated mitigation as a policy priority [S25][S26].
Compute is a critical but not sole factor; energy, data, implementation, and human skills are equally vital
Speakers: Mr. Simonas Satunas, Simonas Cerniauskas, Mr. Kenny Kesar, Ms. Alexandra Bech Gjørv
Compute is only one element; energy, data, implementation, and human skills are equally critical
Massive compute investment fuels progress but may be a bubble
Tiered model approach (small LLMs for routine tasks, large models for complex problems) to manage compute and risk
Need low‑latency, energy‑efficient neuromorphic/edge hardware for situational awareness
All agree that while compute power drives AI advances, it must be balanced with energy supply, high-quality data, appropriate hardware architectures, and human critical-thinking capacities; over-investment risks are noted, and tiered model strategies are proposed to optimise compute use [72-90][70-71][120-126][31-33].
POLICY CONTEXT (KNOWLEDGE BASE)
Authoritative analyses stress a holistic approach to AGI, emphasizing that compute must be complemented by energy infrastructure, data quality, implementation strategies, language considerations, and human education rather than being the sole driver [S22][S23].
Potential erosion of human critical thinking due to over‑reliance on AI‑generated content
Speakers: Mr. Simonas Satunas, Mr. Kenny Kesar, Mr. Vinayak Godse
Critical thinking is nothing but your ability to give attention to various dimensions; AI makes this easier but may reduce genuine critical thinking
Over‑reliance on AI may erode critical thinking, creating feedback loops of AI‑generated content
Discussion on how dependence on AI reduces critical thinking and increases misinformation risk
Satunas, Kesar, and Godse all highlight that heavy reliance on AI tools can diminish human critical-thinking skills, leading to feedback loops of AI-generated content and heightened misinformation risks [156-163][165-169][150-153].
Similar Viewpoints
Both argue that compute should be managed strategically—Satunas stresses a multi‑factor ecosystem, while Kesar proposes a tiered model architecture to balance compute demands and risk [72-90][120-126].
Speakers: Mr. Simonas Satunas, Mr. Kenny Kesar
Compute is only one element; energy, data, implementation, and human skills are equally critical
Tiered model approach (small LLMs for routine tasks, large models for complex problems) to manage compute and risk
Both point out that privacy constraints are a major barrier to advancing AI capabilities and must be addressed early in governance frameworks [1-7][35-37].
Speakers: Mr. Vinayak Godse, Ms. Alexandra Bech Gjørv
Urgency to understand and define AGI now for societal governance
Privacy limits on personal data hinder development of human‑level situational awareness
Both recognize that compute investment is driving AI forward but warn against treating it as the sole bottleneck, emphasizing a broader ecosystem of resources [70-71][72-90].
Speakers: Simonas Cerniauskas, Mr. Simonas Satunas
Massive compute investment fuels progress but may be a bubble
Compute is only one element; energy, data, implementation, and human skills are equally critical
Both stress the necessity of human oversight, ethical frameworks, and multi‑stakeholder collaboration to prevent bias and ensure moral AI behavior [174-180][96-102].
Speakers: Mr. Simonas Satunas, Ms. Alexandra Bech Gjørv
Call for global regulation and industry‑academia collaboration to embed morality and avoid profit‑driven unethical behavior
Human oversight and ethical frameworks are essential; machines can inherit and amplify bias
Unexpected Consensus
Privacy as both a barrier to AI progress and a core security concern
Speakers: Mr. Vinayak Godse, Ms. Alexandra Bech Gjørv
Urgency to understand and define AGI now for societal governance (including security and privacy)
Privacy limits on personal data hinder development of human‑level situational awareness
While Godse frames privacy primarily as a governance challenge, Gjørv treats it as a technical limitation to achieving human-like AI, yet both converge on the view that privacy constraints must be tackled early [1-7][35-37].
POLICY CONTEXT (KNOWLEDGE BASE)
Policy debates consistently highlight privacy as a double-edged issue: it can impede AI development while also being essential for security, as reflected in privacy-security balancing frameworks and calls for transparent AI practices [S26][S32][S34][S35].
Critical thinking erosion linked to AI‑generated content feedback loops
Speakers: Mr. Simonas Satunas, Mr. Kenny Kesar
Critical thinking is nothing but your ability to give attention to various dimensions; AI makes this easier but may reduce genuine critical thinking
Over‑reliance on AI may erode critical thinking, creating feedback loops of AI‑generated content
Satunas raises the conceptual risk of diminished critical thinking, while Kesar cites the figure that 30% of online content is already AI-generated, reinforcing the same concern from an unexpectedly different angle [156-163][165-169].
Overall Assessment

The panel shows strong convergence on four core themes: (1) the necessity of early, multi‑layered governance and anchor controls; (2) a shared risk taxonomy spanning privacy, security, health, social and macro dimensions; (3) acknowledgement that compute is vital but must be complemented by energy, data, hardware, and human skills; and (4) concern that AI over‑reliance could erode human critical thinking. These agreements cut across AI technical development, security, human rights, and broader socio‑economic impacts.

High consensus – most speakers articulate overlapping viewpoints on governance, risk management, and the broader ecosystem needed for safe AGI development. The alignment suggests that future policy and research agendas can build on these common foundations, though divergence remains on precise timelines and the scale of investment.

Differences
Different Viewpoints
Timeline for achieving AGI
Speakers: Mr. Simonas Satunas, Mr. Vinayak Godse, Simonas Cerniauskas, Mr. Kenny Kesar
Expectation that AGI could appear within 3–7 years
Urgency to understand and define AGI now for societal governance
Massive compute investment fuels progress but may be a bubble
No explicit timeline given; focus on accuracy improvements over many years
Satunas states that AGI is likely to be reached in three to seven years [21]. Godse stresses the immediate need to define AGI for governance but does not commit to a specific horizon, implying a longer-term view [1-7]. Cerniauskas points out that most definitions lack a clear timing and that the field may be over-invested, suggesting uncertainty about when AGI will materialise [12-15]. Kenny does not provide a timeline, instead discussing multi-year accuracy gains, which signals a more distant outlook [44-48]. Thus the panel is split between a near-term optimistic horizon and a more cautious, uncertain timeline.
POLICY CONTEXT (KNOWLEDGE BASE)
Recent forecasts presented at Davos 2025 suggest a five-year horizon for AGI, contrasting with more aggressive one- to two-year predictions, providing a historical reference point for timeline debates [S24].
Relative importance of compute versus other resources for AGI development
Speakers: Mr. Simonas Satunas, Simonas Cerniauskas, Mr. Vinayak Godse
Compute is only one element; energy, data, implementation, and human skills are equally critical
Massive compute investment fuels progress but may be a bubble
Why compute becomes very important; need for massive compute
Satunas argues that compute is just one link in a chain and that energy, data, implementation and especially human critical-thinking are equally vital [72-90]. Cerniauskas emphasizes the current surge in compute spending as the main driver of rapid AI advances, while also warning it could be over-estimated [70-71]. Godse repeatedly asks why compute is so central to the discussion, suggesting a view that compute is the primary bottleneck [65-68]. The speakers therefore disagree on whether compute should be treated as the dominant factor or as one of several equally important resources.
POLICY CONTEXT (KNOWLEDGE BASE)
Expert commentary underscores that while compute is pivotal, equal emphasis on energy, data, implementation, and human expertise is required for AGI, challenging compute-centric narratives [S22][S23].
What constitutes appropriate early‑stage “anchor controls” for AGI governance
Speakers: Mr. Vinayak Godse, Simonas Cerniauskas, Mr. Simonas Satunas, Ms. Alexandra Bech Gjørv, Mr. Kenny Kesar
Emphasizes need for immediate anchor controls (technical safeguards, regulatory steps) to guide AGI development
Technical controls such as labeling, regulatory measures, and European‑style oversight
Call for global regulation and industry‑academia collaboration to embed morality and avoid profit‑driven unethical behavior
Building resilience, robust rollback mechanisms, and reducing consequences of failures
Propose AI Operating Procedures (AOP) analogous to SOPs for bias, ethics, and compliance
Godse explicitly asks the panel for concrete early-stage anchor controls to steer AGI safely [7][172]. Cerniauskas suggests technical measures like model labeling and points to European regulatory habits as early safeguards [173-174]. Satunas pushes for a global regulatory framework and collaboration with industry and academia to embed ethical principles [174-180]. Gjørv recommends societal resilience and rollback mechanisms to mitigate failures [187-189]. Kenny proposes institutionalising AI Operating Procedures (AOP) as a checks-and-balances system for bias and ethics [191-199]. The disagreement lies in which mechanism (technical labeling, global law, resilience planning, or procedural governance) should be prioritized as the first line of defense.
Balancing privacy constraints with the data needs for human‑level situational awareness
Speakers: Ms. Alexandra Bech Gjørv, Mr. Simonas Satunas
Privacy limits on personal data hinder development of human‑level situational awareness
Data is extremely important; lack of investment in human element
Gjørv highlights that accessing large volumes of personal data is essential for true situational awareness, but privacy regulations impose limits that slow progress [35-37]. Satunas stresses that data is a critical pillar for AGI, listing it alongside compute, energy and implementation, without addressing privacy trade-offs [85-86]. The two positions diverge on how to reconcile privacy protection with the data requirements for advanced AI.
POLICY CONTEXT (KNOWLEDGE BASE)
The tension between privacy safeguards and the demand for timely, high-resolution data for situational awareness has been a recurring theme in policy forums addressing data flow, big-data for prevention, and AI-enabled public safety, underscoring the need for nuanced governance [S25][S26][S27][S32][S34].
Unexpected Differences
Role of compute in shaping AI progress versus the risk of a compute‑driven investment bubble
Speakers: Mr. Simonas Satunas, Simonas Cerniauskas
Compute is only one element; energy, data, implementation, and human skills are equally critical
Massive compute investment fuels progress but may be a bubble
Satunas downplays compute as the sole driver, emphasizing a balanced ecosystem of resources [72-90]. Cerniauskas, however, points to the current “super high cycle” of compute investment as the engine of rapid AI advances, while also cautioning that it may be over-invested and unsustainable [70-71]. The tension between viewing compute as a necessary but not dominant factor versus seeing it as the primary catalyst (and potential bubble) was not anticipated given the overall consensus on multi-factor development.
POLICY CONTEXT (KNOWLEDGE BASE)
Analyses of the AI market note concerns about a speculative compute-driven bubble, with some leaders arguing that current investment reflects sustained demand rather than hype, while others warn of geopolitical risk transfer to the public sector [S28][S29][S31].
Interpretation of “critical thinking” as a solution versus a symptom
Speakers: Mr. Simonas Satunas, Mr. Kenny Kesar
One of the elements that we are not investing enough is the human element (critical thinking)
Over‑reliance on AI may erode critical thinking, creating feedback loops of AI‑generated content
Satunas treats critical thinking as a resource that needs more investment to prepare society for AGI [87-90]. Kenny frames critical thinking as a capability that is being eroded by AI-generated content, suggesting that the problem is a feedback loop that must be broken [165-169]. The unexpected twist is that both see critical thinking as central, yet they locate the problem on opposite sides of the AI-human interaction spectrum.
Overall Assessment

The panel shows broad consensus that AGI will pose significant societal, security, and ethical challenges and that proactive governance is essential. However, there are clear disagreements on the expected timeline for AGI, the primacy of compute versus a multi‑resource approach, the specific form of early anchor controls, and how to balance privacy with data needs. These divergences reflect differing strategic priorities (short‑term optimism vs. cautious uncertainty) and disciplinary lenses (technical, regulatory, societal resilience).

Moderate to high. While all participants agree on the need for action, the lack of alignment on timelines, resource prioritisation, and concrete governance mechanisms could hinder coordinated policy responses and lead to fragmented national strategies.

Partial Agreements
All participants agree that proactive governance is needed to manage AGI risks, but they differ on the preferred pathway: Godse wants immediate anchor controls, Cerniauskas favours technical labeling and European regulation, Satunas pushes for global, multi‑stakeholder regulation, Gjørv stresses societal resilience and rollback, while Kenny proposes institutional AOPs. The shared goal is safe AGI development, yet the routes diverge [7][172][173-174][174-180][187-189][191-199].
Speakers: Mr. Vinayak Godse, Simonas Cerniauskas, Mr. Simonas Satunas, Ms. Alexandra Bech Gjørv, Mr. Kenny Kesar
Urgency to understand and define AGI now for societal governance
Technical controls such as labeling, regulatory measures, and European‑style oversight
Call for global regulation and industry‑academia collaboration to embed morality and avoid profit‑driven unethical behavior
Building resilience, robust rollback mechanisms, and reducing consequences of failures
Propose AI Operating Procedures (AOP) analogous to SOPs for bias, ethics, and compliance
Both agree that human critical thinking is at risk in an AI‑driven world. Satunas calls for investment in critical‑thinking education [87-90], while Kenny warns that AI‑generated content can create a feedback loop that diminishes critical thinking [165-169]. They share the concern but differ on whether the primary remedy is education investment or controlling AI‑generated content.
Speakers: Mr. Simonas Satunas, Mr. Kenny Kesar
One of the elements that we are not investing enough is the human element (critical thinking)
Over‑reliance on AI may erode critical thinking, creating feedback loops of AI‑generated content
Takeaways
Key takeaways
AGI is envisioned as AI that can perform any human task at a professional level, reason, learn, adapt, and transfer knowledge, moving beyond narrow, task‑specific systems.
Panelists estimate a possible emergence of AGI within the next 3–7 years, creating urgency for societal understanding and governance.
Compute power is a critical driver of current AI progress, but it is only one element; energy efficiency, data availability, hardware (neuromorphic/edge), and human skills are equally essential.
Security and privacy risks will intensify as AI becomes more capable, including sophisticated cyber‑attacks, impersonation of leaders, and large‑scale manipulation of information.
Risks are layered: classical (privacy, fraud), health/mental‑health, social (empathy, addiction), and macro (democracy, geopolitical manipulation); each layer requires specific mitigation strategies.
Ethical oversight, bias mitigation, and human‑in‑the‑loop controls are necessary; proposals include AI Operating Procedures (AOP) analogous to SOPs and technical safeguards such as model labeling.
Over‑reliance on AI may erode critical thinking and create feedback loops of AI‑generated content; education and critical‑thinking training are essential countermeasures.
Early “anchor controls” – technical, regulatory, and resilience measures – should be instituted now to guide AGI development and limit adverse outcomes.
Collaboration across industry, academia, and governments (both national and international) is required to embed morality, ensure fairness, and avoid profit‑driven unethical behavior.
Resolutions and action items
Develop and adopt AI Operating Procedures (AOP) for bias, ethics, and compliance within organizations.
Invest in education programs that strengthen critical‑thinking and AI literacy for the general public.
Pursue a tiered model strategy: small, efficient LLMs for routine tasks and larger models for complex problems to manage compute costs and risk.
Encourage global coordination on AI regulation and standards, with particular emphasis on privacy‑preserving data practices.
Advance research on low‑latency, energy‑efficient neuromorphic and edge hardware to support real‑time situational awareness.
Implement technical safeguards such as model labeling, provenance tracking, and robust rollback mechanisms for AI systems.
Create resilience planning analogous to electricity‑outage preparedness, including contingency and mitigation strategies for AI failures.
Unresolved issues
Exact timeline for achieving true AGI remains uncertain; estimates vary and no consensus was reached.
How to reconcile the need for massive personal data to achieve human‑level situational awareness with strict privacy regulations.
Specific mechanisms for global AI governance and how to align disparate national regulatory approaches.
Concrete methods to prevent the erosion of critical thinking while AI provides rapid information synthesis.
Details of how to balance compute investment against potential over‑investment bubbles and sustainability concerns.
Implementation pathways for the proposed AI Operating Procedures across diverse industries and jurisdictions.
Suggested compromises
Adopt a hybrid probabilistic‑deterministic approach, using deterministic models where reliability is critical while retaining probabilistic flexibility for innovation.
Employ a tiered model ecosystem to balance performance needs against compute cost, allowing smaller models to handle low‑risk tasks.
Combine technical controls (e.g., labeling, sandboxing) with regulatory oversight, leveraging both industry self‑regulation and government standards.
Encourage responsible AI investment by pairing compute expansion with efficiency improvements to mitigate the risk of a bubble.
Blend human oversight with automated checks, recognizing that neither humans nor AI alone can guarantee ethical outcomes.
Thought Provoking Comments
AGI will be something that can perform every human task at the level of accuracy and professionality of a human professional… 50% of Israelis trust Gen‑AI tools more than they trust their friends.
Provides a concrete, human‑centric definition of AGI and backs it with a striking statistic on public trust, highlighting how perception of AI is already shifting toward AGI‑like expectations.
Shifted the conversation from abstract definitions to societal perception, prompting others to discuss timelines (3‑7 years) and the gap between current AI capabilities and public trust.
Speaker: Simonas Satunas
Machines are quite good at detecting fire or doing various instinctive things as fast as we are, but the ability to interpret context, emotions, ambiguity, surroundings, body language, etc., that’s still quite far away.
Draws a clear line between narrow AI strengths (speed, pattern detection) and the missing human‑like situational awareness, emphasizing the technical and ethical challenges of moving toward AGI.
Introduced the technical‑privacy dimension, leading the panel to discuss hardware (neuromorphic, edge computing) and data privacy constraints as essential hurdles.
Speaker: Alexandra Bech Gjørv
The epitome of accuracy is five nines. So for AI to get from 90 % to 99 %, it took five to ten years. Every nine you add is another year or two, and each extra nine brings us closer to general intelligence.
Frames progress toward AGI in quantitative terms (accuracy nines) and links it to a historical analogy of disruptive technology cycles, giving a measurable perspective on how far we are.
Prompted a discussion on the pace of improvement, the role of regression models, and the need for deterministic‑probabilistic hybrids, steering the talk toward practical engineering roadmaps.
Speaker: Kenny Kesar
Compute is one element in a chain of elements… we know AGI will arrive, we must be prepared… the human element—critical thinking, education—is as important as compute.
Challenges the prevailing narrative that compute alone will deliver AGI, expanding the focus to include energy, data, language, and especially human capital.
Redirected the dialogue from a hardware‑centric view to a broader ecosystem view, leading others to mention education, regulation, and societal readiness.
Speaker: Simonas Satunas
Michael Lewis anecdote: in the NBA, video surveillance and statistics eliminated racially biased refereeing decisions. Machines can make people better.
Provides a concrete, positive case where AI corrected human bias, counterbalancing fear‑based narratives and illustrating a pathway for ethical AI deployment.
Shifted tone toward optimism, encouraging participants to consider how AI can improve governance and fairness rather than only posing risks.
Speaker: Alexandra Bech Gjørv
We distinguish between four levels of risk: classical (privacy, security), human health/mental health, social (empathy, bullying), and macro (democracy, foreign manipulation). A national and international strategy must address all levels.
Offers a structured risk taxonomy that moves the conversation from vague concerns to a layered, actionable framework.
Guided subsequent speakers to address specific domains (security, privacy, societal impact) and set the stage for discussing coordinated policy responses.
Speaker: Simonas Satunas
When AI reaches AGI, it could emulate a CEO and make decisions; the threat is real because the AI could act indistinguishably from a human.
Highlights a concrete, high‑stakes scenario of AI misuse, moving the discussion from abstract risk to a tangible governance challenge.
Prompted deeper conversation on security, the need for tiered model deployment, and the importance of robust safeguards before such capabilities emerge.
Speaker: Kenny Kesar
30 % of the content on the internet is already AI‑generated. This feedback loop risks stopping human intellectual evolution because we stop exercising our brains.
Raises a novel, systemic risk: the self‑reinforcing cycle where AI‑generated data trains future models, potentially eroding critical thinking and innovation.
Led to a reflective turn, with participants emphasizing education, awareness, and the necessity of preserving human critical thinking alongside AI adoption.
Speaker: Kenny Kesar
We are in a super‑high cycle of investment; some wonder if it’s a bubble or over‑investment. Zuckerberg even said we might have overcapacity for a couple of years.
Introduces market dynamics and the possibility of a speculative bubble, adding economic context to the technical and ethical discussion.
Tempered optimism, causing the panel to consider sustainability, cost‑effectiveness, and the need for balanced investment strategies.
Speaker: Simonas Cerniauskas
We need resilience and robust rollback mechanisms—plan for the worst‑case like living without electricity—to reduce the consequences of AI failures, not just avoid them.
Proposes a pragmatic, risk‑mitigation approach that focuses on limiting damage rather than solely preventing it, aligning with disaster‑recovery thinking.
Steered the final part of the discussion toward actionable “anchor control” ideas, influencing the concluding remarks on governance and preparedness.
Speaker: Alexandra Bech Gjørv
Overall Assessment

The discussion evolved from a broad, introductory framing of AGI to a nuanced, multi‑dimensional analysis thanks to several pivotal remarks. Definitions anchored in public trust, quantitative accuracy metrics, and a layered risk taxonomy gave the conversation concrete footing. Counterbalancing perspectives—such as the hardware‑centric view versus the human‑capital emphasis, and the optimistic bias‑reduction anecdote versus the stark security‑emulation scenario—created a dynamic tension that pushed participants to explore both opportunities and threats. Economic considerations about investment cycles added realism, while the final focus on resilience and rollback mechanisms translated the debate into actionable governance concepts. Collectively, these thought‑provoking comments shaped a rich dialogue that moved from abstract speculation to concrete policy and societal implications for the impending era of AGI.

Follow-up Questions
What actions and strategies can accelerate AI development toward AGI within the next three to seven years?
Understanding concrete steps and investments needed to influence the AI trajectory is crucial for policymakers and industry to plan resources and research priorities.
Speaker: Mr. Vinayak Godse (to Ms. Alexandra Bech Gjørv)
Why is massive compute essential for AGI, and what role does compute play in achieving attention, context, reasoning, and low latency?
Clarifying the necessity of compute helps justify infrastructure investments and informs discussions on sustainability and scalability of future AI systems.
Speaker: Mr. Vinayak Godse (to panel)
How do language models, ambient computing, and world models affect AGI development, and what are the challenges associated with them?
Exploring these technical dimensions is important to identify research gaps and guide development of more capable and context-aware AI.
Speaker: Mr. Vinayak Godse (to Ms. Alexandra Bech Gjørv)
What current security, privacy, and safety measures can be adopted now to safely scale AI models toward more powerful capabilities?
Identifying actionable safeguards is vital to mitigate emerging threats as AI systems become more advanced.
Speaker: Mr. Vinayak Godse (to Mr. Kenny Kesar)
What ‘anchor control’ mechanisms or concepts can be established now to manage future AGI risks and governance challenges?
Early establishment of control frameworks can provide a foundation for responsible AGI deployment and reduce reactive policy making.
Speaker: Mr. Vinayak Godse (to panel)
How can stakeholders get involved early in shaping AI governance and alignment before AGI becomes mainstream?
Early engagement ensures that ethical, legal, and societal considerations are embedded in AI development rather than retrofitted later.
Speaker: Mr. Vinayak Godse (to Ms. Alexandra Bech Gjørv)
What constitutes true critical thinking in the age of AI, and how can individuals maintain it without over‑relying on AI assistance?
Defining and preserving critical thinking is essential to prevent cognitive atrophy and ensure humans remain capable of independent judgment.
Speaker: Mr. Vinayak Godse (to Mr. Kenny Kesar)

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.