Artificial General Intelligence and the Future of Responsible Governance

20 Feb 2026 11:00h - 12:00h


Session at a glance

Summary

This panel discussion focused on Artificial General Intelligence (AGI) and its implications for security, privacy, and ethics, with the goal of understanding what AGI means and preparing for its potential arrival in the coming years. The panel, moderated by Mr. Vinayak Godse, brought together Simonas Cerniauskas, Mr. Simonas Satunas, Ms. Alexandra Bech Gjørv, and Mr. Kenny Kesar to explore various definitions and timelines for AGI development.


The participants generally agreed that AGI represents AI systems capable of performing any human task at professional levels, with abilities to reason, learn, adapt, and transfer knowledge across broad domains rather than narrow applications. Timeline estimates ranged from 3-7 years, though some panelists expressed skepticism about specific timeframes, noting that achievement depends heavily on investment levels and technological breakthroughs. Key technical challenges discussed included achieving human-level contextual understanding, emotional intelligence, and real-time decision-making capabilities that currently require massive computational resources.


The discussion highlighted significant concerns about AGI’s societal impact across multiple levels: individual privacy and security risks, mental health effects, social implications for human relationships and critical thinking, and macro-level threats to democracy through sophisticated manipulation and misinformation campaigns. Panelists emphasized that current AI systems already pose challenges in these areas, which will be amplified as capabilities approach AGI levels.


Regarding preparatory measures, the experts stressed the importance of investing in human education and critical thinking skills, developing robust regulatory frameworks, creating resilience and rollback mechanisms, and establishing ethical AI practices similar to current organizational standards. They advocated for early engagement with these challenges rather than reactive responses, emphasizing international collaboration and proactive governance structures to manage AGI’s transformative potential responsibly.


Keypoints

Overall Purpose/Goal

This panel discussion at a Pet Summit aimed to explore the concept of Artificial General Intelligence (AGI) and its implications for society, particularly focusing on security, privacy, and ethics. The moderator emphasized the need for early engagement with AGI concepts to avoid being unprepared, as happened with AI development, and to better understand governance challenges before AGI potentially arrives in the next 2-10 years.


Major Discussion Points

Defining AGI and Timeline Predictions: Panelists offered various definitions of AGI, from “AI that can perform every human task at professional level” to systems that can reason, learn, adapt, and transfer knowledge across broad domains. Timeline estimates ranged from 3-7 years, though there was skepticism about precise predictions and emphasis that timing depends heavily on investment levels.


Technical Challenges and Compute Requirements: The discussion explored why massive computational resources are needed for AGI development, covering the role of attention mechanisms, contextual understanding, and reasoning capabilities. Panelists debated whether the current focus on compute is overemphasized compared to other critical elements like data, energy, implementation, and human factors.


Multi-layered Risk Framework: A structured approach to AGI risks was presented, identifying four levels: classical risks (privacy, security, cyber threats), human health and mental health impacts, social-level effects (empathy, relationships, child development), and macro-level consequences (democracy, society-wide manipulation, geopolitical information warfare).


Critical Thinking and Human Dependency Concerns: Significant discussion centered on how increased AI dependence might erode human critical thinking abilities, creating a “vicious cycle” where humans lose cognitive skills needed to innovate further. The challenge of distinguishing between AI-assisted information gathering and genuine critical thinking was highlighted.


Governance and Control Mechanisms: Panelists discussed various approaches to managing AGI risks, including technical solutions (watermarking, labeling), regulatory measures, international cooperation, resilience planning, and the development of “AI Operating Procedures” similar to current organizational ethical frameworks. Emphasis was placed on early engagement and building robust rollback mechanisms.


Overall Tone

The discussion maintained a serious, analytical tone throughout, characterized by cautious optimism mixed with genuine concern. While panelists acknowledged AGI’s potential benefits, the conversation was notably more focused on risks and challenges. The tone was collaborative and educational, with experts sharing different perspectives without significant disagreement. There was an underlying urgency about the need for proactive preparation, but the discussion remained measured and professional rather than alarmist.


Speakers

Speakers from the provided list:


Mr. Vinayak Godse – Moderator/Host of the panel discussion on AGI (Artificial General Intelligence)


Simonas Cerniauskas – Panel participant discussing AGI concepts and definitions


Mr. Simonas Satunas – Panel participant from Israel, provides perspectives on AGI timeline and implementation


Ms. Alexandra Bech Gjørv – Head of the largest research institute in Norway (Sintef), expert on AI research and technology


Mr. Kenny Kesar – Panel participant who serves clients on AI implementation, discusses AI accuracy and market disruption


Additional speakers:


Hendrikus sir – Mentioned for fireside panel and photo shoot (no other details provided)


Narendra sir – Mentioned for fireside panel and photo shoot (no other details provided)


Full session report

This panel discussion at a Pet Summit brought together leading experts to explore Artificial General Intelligence (AGI) and its implications for society, focusing on security, privacy, and ethical considerations. The session addressed the urgent need to engage proactively with AGI concepts, learning from society’s lack of preparation for AI’s rapid advancement over the past three years.


Defining AGI and Timeline Predictions

The panellists offered varied perspectives on what constitutes AGI. Simonas Cerniauskas established that AGI must demonstrate the ability to reason, learn, adapt, and transfer knowledge across domains, moving beyond today’s narrow AI applications.


Mr. Simonas Satunas provided a practical definition, describing AGI as “something that can perform every human task at the level of accuracy and professionality of a human professional.” He acknowledged limitations in this definition but argued for its practical utility. His timeline prediction of 3-7 years was based on shifting public perception rather than technical benchmarks, noting his observation that 50% of Israelis now trust generative AI tools more than their friends.


Mr. Vinayak Godse emphasized that AGI requires three key capabilities: attention (the ability to give attention to all possible things), contextual understanding, and reasoning. He discussed how current AI helps with System 2 thinking (logical, deliberate processes) but has limitations with System 1 thinking (fast, intuitive responses).


Ms. Alexandra Bech Gjørv, who heads the largest research institute in Norway (Sintef), brought a more cautious perspective, emphasizing that timeline predictions depend heavily on investment levels. She highlighted critical gaps in machines’ ability to interpret context, emotions, ambiguity, and body language at the speed required for dynamic environments.


Mr. Kenny Kesar introduced the concept of accuracy progression through “five nines,” explaining that while AI evolved from 90% to 99% accuracy over several years, each additional nine of accuracy requires increasingly longer timeframes. He emphasized that true AGI must transcend current regression-based learning to achieve genuine research and invention capabilities.
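
The "nines" framing above can be made concrete: each additional nine of accuracy cuts the residual error rate by a factor of ten, so the jump from 99% to 99.999% is a hundredfold reduction in errors, not a five-point improvement. A minimal sketch of the arithmetic (the year-per-nine timelines are the panelist's rough estimates and are not modeled here):

```python
# Each "nine" of accuracy divides the residual error rate by 10:
# 1 nine = 90%, 2 nines = 99%, ..., 5 nines = 99.999%.

def error_rate(nines: int) -> float:
    """Residual error rate for a given number of nines of accuracy."""
    return 10.0 ** (-nines)

for n in range(1, 6):
    accuracy = 1.0 - error_rate(n)
    errors_per_million = error_rate(n) * 1_000_000
    print(f"{n} nine(s): accuracy {accuracy:.3%}, "
          f"~{errors_per_million:,.0f} errors per million tasks")
```

Going from two nines to five nines removes 9,990 of every 10,000 remaining errors, which is why each step is expected to take progressively longer than the last.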


Technical Challenges and Investment Patterns

The discussion revealed significant complexity around the massive computational investments driving AGI development. Simonas Cerniauskas characterized the current period as a “super high cycle” of investment, driven by the belief that achieving first-mover advantage will ensure lasting dominance.


Mr. Simonas Satunas offered a compelling metaphor, comparing the situation to a 19th-century prophet predicting travel from Delhi to Bangkok in under an hour without specifying the technology. Just as different groups might build airports, railways, or ports based on their assumptions, today’s AGI race sees massive investments in compute infrastructure without certainty about optimal approaches. He argued that compute represents just one element in a complex chain including energy, data quality, and crucially, human education and critical thinking development.


Ms. Alexandra Bech Gjørv highlighted privacy implications, noting that achieving human-level situational awareness would require studying vast amounts of data considered private and personal. She also emphasized the importance of democratic access to compute power to ensure AGI benefits aren’t concentrated among wealthy nations or corporations.


Mr. Kenny Kesar provided practical perspective from client advisory work, noting that while current AI capabilities are impressive, commercial viability remains challenging due to high token costs. He observed that 30% of content AI systems now consume is already AI-generated, creating a concerning feedback loop.
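
The feedback loop Mr. Kesar describes can be illustrated with a toy model. Assume the training pool doubles each model generation and a fixed fraction of the newly added content is AI-generated; the 30% starting share is his figure, while the doubling pool and the new-content fraction are purely illustrative assumptions:

```python
# Toy model of AI-generated content accumulating in a training pool.
# start_share (0.30) is the figure cited in the discussion; the doubling
# pool and new_ai_fraction are illustrative assumptions, not data.

def ai_share_over_generations(start_share: float,
                              new_ai_fraction: float,
                              generations: int) -> list[float]:
    """Share of AI-generated content after each generation, assuming the
    pool doubles each generation and new_ai_fraction of the newly added
    content is AI-generated."""
    shares = [start_share]
    for _ in range(generations):
        # New pool = old pool + equal-sized batch of new content.
        shares.append((shares[-1] + new_ai_fraction) / 2)
    return shares

for i, s in enumerate(ai_share_over_generations(0.30, 0.60, 5)):
    print(f"generation {i}: AI-generated share = {s:.1%}")
```

Under these assumptions the share climbs steadily toward the fraction of new content that is AI-generated; if models trained on such pools in turn raise that fraction, the loop compounds further.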


Risk Framework and Societal Implications

Mr. Simonas Satunas presented a structured framework for understanding AGI risks across four levels. The first level encompasses classical technology risks—privacy violations, security breaches, and cyber fraud—amplified by AI’s superior capabilities. The second level addresses human health and mental health impacts, where understanding remains limited but evidence of harm is emerging.


The third level examines social implications, particularly effects on human empathy, relationships, and child development. The fourth level addresses macro-societal impacts on democracy and governance. Ms. Alexandra Bech Gjørv discussed how AI systems can create separate information universes, making targeted manipulation more sophisticated. She published a paper on agent swarms and their potential for information warfare.


Critical Thinking and Human Dependency

A central concern emerged around the relationship between AI advancement and human cognitive development. The panellists used the analogy of the brain as a muscle requiring exercise to maintain function. As humans increasingly rely on AI for critical thinking tasks, there’s genuine risk of cognitive atrophy.


Mr. Vinayak Godse raised important questions about distinguishing between AI-assisted information gathering and genuine critical thinking. While AI can efficiently provide multiple perspectives on complex issues, the fundamental cognitive work of evaluation, synthesis, and judgment remains distinctly human.


The panellists emphasized that critical thinking education must evolve to address AI-mediated information environments, helping citizens identify AI-generated content and maintain independent analytical capabilities.


Governance and Practical Applications

The panellists presented diverse governance perspectives. Simonas Cerniauskas advocated for technical solutions like watermarking combined with measured regulatory interventions. Mr. Simonas Satunas suggested that smaller nations should focus on collaboration with AI developers to embed ethical considerations into system design rather than external regulation.


Ms. Alexandra Bech Gjørv emphasized resilience and robust rollback mechanisms, drawing parallels to infrastructure protection. She mentioned Europe’s practice of preparing to live without electricity as an example of resilience planning. She also shared how video surveillance eliminated referee bias in basketball by providing objective evidence of calls.


Mr. Kenny Kesar proposed AI Operating Procedures (AOP) analogous to current Standard Operating Procedures, including bias checking and ethical review processes.


Preparatory Measures and Future Outlook

The panellists strongly emphasized early engagement with AGI challenges. Investment in human capital emerged as critical, with Mr. Simonas Satunas arguing that educational investment is “not less critical than investing in computing.”


Ms. Alexandra Bech Gjørv mentioned her work on devices like a “friend” button and emphasized the need for democratic access to AGI capabilities. The discussion highlighted the importance of developing robust testing frameworks and safety protocols before large-scale deployment.


Conclusion

This panel discussion reframed AGI challenges from purely technical problems to broader questions of human-AI coevolution. The panellists’ insights suggest that AGI preparation requires holistic approaches encompassing technical infrastructure, human education, governance frameworks, and social resilience measures.


The recognition that AGI’s arrival may be determined as much by societal acceptance and trust as by technical benchmarks provides important perspective for stakeholders. The urgency conveyed reflects genuine concern about society’s preparedness, but the collaborative approach demonstrated suggests that proactive engagement with these challenges remains both possible and essential.


The session concluded with announcements about continued collaboration and the launch of an “AI Cyber Security Terminal,” indicating ongoing practical applications of the discussed concepts.


Session transcript

Mr. Vinayak Godse

Pet Summit and the basic idea and intent behind setting up this session is while all the things were happening in AI in the period of 2020, a lot of development happening and somehow all that is now leading to kind of acceleration that we are seeing in last three years of time and especially this year, since January, all the new launches that we see, we are getting the first sign of a powerful AI, right? And now because of that, there is a discussion about AGI seems to be gaining quite a significant ground, right? And although people still have a lot of doubt and skepticism about whether it is really reality or possibility in coming future or what that means, many people are still skeptical.

They are struggling to define what that means for us as an overall society. And I can tell about India: probably we didn’t pay much attention when AI was coming. If we don’t pay attention now to what is coming in the next 2, 3, 5 or 10 years of time, which is probably the timeline for AGI, then probably we will again miss out on thinking, talking, discussing and governing it better, basically. So this discussion is to help us, and the audience here, understand: what do we mean by AGI, can we really think about it right now, what are the different concerns that we need to think about, and then try to find the possible meaning for security, privacy and ethics, basically.

So I would like to start with you, Simonas: how do you see this concept of AGI, and fundamentally, how will it be different from what we see today? What is your understanding of the distinction between artificial intelligence and artificial general intelligence?

Simonas Cerniauskas

So, yeah, thank you very much for having us here. And, yeah, like you said, it’s a really nice topic to wrap up the conference. So, well, so, you know, of course, there are kind of different definitions of AGI. And on the same time, most of them agree that it’s, you know, it’s about smarter AI than we have right now. We were joking a bit that, you know, on the way, the traffic is really, you know, exceptional. And, yeah, that’s a sign that maybe we are still not here today. So, but, yeah, but basically kind of among those common agreements that, let’s say, the smarter AI should reason. It should learn. It should adapt. And also it should transfer knowledge.

And also it shouldn’t be, you know, very narrow. Like, you know, of course, right now we have great, let’s say, areas where AI is really helping a lot, like code development, customer service, et cetera, but, you know, it should be much broader. So, and, you know, I don’t think that any of us, maybe the colleagues, will be able to answer when we will have it, you know, and with what timing, but definitely, you know, that’s one of the big topics right now.

Mr. Vinayak Godse

Let me come to you. You look after the digital initiative, with artificial intelligence as one of the important research areas. We are grappling with understanding what AI is right now, but can we think about what would happen in the next three to five years of time, which seems to be the timeline for AGI?

Mr. Simonas Satunas

So I’m the one with the date; I’ll do my best. So first of all, my definition of AGI is very simplistic, and I think that we need some simple explanation in this field. My very simple explanation is: AGI will be something that can perform every human task at the level of accuracy and professionality of a human professional. Now, this is not an optimal definition, because people can ask about every task: if a baby is crying, will the AGI help him stop crying? And people can ask, what is the level of professionality? But I think that this is something that we can digest. And I think that, for me, I understood that we are getting closer there, not from a technology perspective, but from the perspective of talking with real Israelis about their problems. Five years ago, when I was telling this definition of AGI, people were like, oh, it’ll never happen, not in our lifetime. And right now, when I’m speaking with Israelis and I’m telling them this is AGI, they’re saying, oh, aren’t we there yet? Because I thought that ChatGPT can help me like a lawyer, isn’t it true? Now, I think that we are not there yet, okay? There is a very sharp line between the AI that we are experiencing today and true AGI. But the fact that the audience is already confusing them, the fact that people give trust to Gen AI tools, 50% of Israelis trust them more than they trust their friends, many trust them more than they trust human professionals, this puts us closer to AGI. So I would say that it’s a matter of 3 years to 7 years until we reach that milestone.

Mr. Vinayak Godse

So coming to you, Alexandra: how do you see this as a concept? What is leading to this AGI, and what would we do that will impact the future of AI and bring this age of AGI in three or seven years’ time?

Ms. Alexandra Bech Gjørv

Well, I’m not necessarily subscribing to the time frame. I think that depends on how much money we throw at it. And then there are other things to throw money at as well. Some of this, for example, we had a discussion with my team, you know, are machines able to make complex decisions as fast as humans? And in some areas, like, you know, many operations demand millisecond response and reflex level. You know, you can see that machines are quite good at detecting fire or doing various instinctive things as fast as we are, but the ability to interpret context, emotions, ambiguity, surroundings, body language, etc., that’s still quite far away. They take too long. And in a dynamic environment, you know, a wrong decision or a late decision is really a wrong decision.

So in order to get there, I, you know, there’s both low-latency, energy-efficient hardware, neuromorphic and edge computing, and architectures beyond autoregression. But I think, you know, the researchers in Sintef, I head up the largest research institute in Norway, they, you know, they point to promising things like hierarchical reflex reasoning systems, embodied multimodal learning, et cetera, et cetera. And there’s really no real doubt that you will get there. But in order to have situational awareness like a human, you have to study a lot of data that would be considered private, personal. So there are really limits on privacy. And then it triggers a lot of other questions that I’m sure we’ll get into.

Mr. Vinayak Godse

Yeah, we’ll come to that. So, Mr. Kenny, you must be serving many clients right now on AI, right? And every one of us is getting stunned by the progress and acceleration of capability that is happening week by week, basically, right? And that also scares us: what is coming next, right? And when it comes to that level, there are two words with which somebody defines AGI. One is consistency across domains: it will be so general that it will be consistently performing across domains. And the second part is that it will be reliable as well. Currently, sometimes it doesn’t know something and it throws an output anyway, and that’s why hallucination happens, basically. So consistency and reliability, that’s what AGI will bring to the table, basically. So it will solve a lot of problems that we see right now. We have also been getting stunned by the things that it can do, basically. So there are routes to achieve AGI, which will lead us to AGI, basically. So how do you think, from your perspective, the journey will probably take us there?

Mr. Kenny Kesar

So, you know, I agree with the panel on a couple of things we talked about in terms of where we’re getting to, models evolving. But you bring up another component: accuracy. I’ll talk about accuracy first, and then I’ll come back to the disruption which is happening in the market. Now, the epitome of accuracy is five nines. So for AI to get from 90% to 99%, it took five to ten years. Now, every nine that you add is another year or two years, to the point where you get to 99.99 and more nines. So every nine that you’re adding has a time frame to it. And with the number of nines that you add, you get closer to general intelligence, because that’s what is going to look like the human brain.

I’ll take the topic of autoregression that you talked about. Any regression: AI is right now built on regression. It’s built on learnings of the neural network, the neural network maturing on information that it sees. But the human brain is also inventing. It’s researching. So when AI really gets to the point of being able to research and bring new ideas to life like a human brain does, you’re getting closer to intelligence. Now, the disruption in the market that you’ve seen with announcements across the different players which dominate the AI market is creating a disruption in the industry, and I think it’s the right disruption. It’s the disruption that the word processor did to the typewriter, what computers did to the word processor, and what cloud did to the data center.

This is another such shift, but it’s much faster, because it’s more pervasive and it impacts everybody in life. So the fact is, people are talking about how it translates to them. When I say how it translates to me, it’s about how we structure processes. Everybody, and I agree, accuracy is a work in progress. And since accuracy is a work in progress, we have to be really mature about the use cases that we put onto it. We have to look at the human pyramid, what components of the pyramid you’re going to look at. So the way we are advising our clients, and what we’re doing ourselves, is maker jobs, which is basically repetitive jobs with little context.

AI does these very well, but you create a controller for these autonomous systems. So a combination of probabilistic and deterministic is what’s going to be the near future, getting more and more deterministic as we approach general intelligence, because from a human perspective, it’s mostly deterministic.

Mr. Vinayak Godse

Right. Yeah. And thank you all for putting some level of clarity on what this means. So at the end of the day, AGI is, as they say, attention, right? The ability to give attention to all possible things, with millions and billions of people asking questions. But as you rightly say, the context matters. So it’s not only attention; it should be contextual to your requirements and the things that you do, right? And the third important part is reasoning, and the last six months have been great months for the reasoning that it brings to the table, basically. So my question is, and any of you can answer this: for achieving all of these things, why does compute become so important? Why do you need this much compute? Why are there trillions of dollars being invested to make sure that it gives attention to each and every problem better, that it is contextual, that it is reasoning, and at the same time addresses latency, as I talked about? So, the role of compute: what is the role of compute in this? Any of you.

Simonas Cerniauskas

Yeah, so, you know, of course, if I may start, and of course, please add on. So currently we are at a super high cycle, let’s say, of those investments, and most of us are also wondering: is it a bubble, or when will it blow a bit, et cetera? Is it really, in some cases, sustainable? Every one of us most likely has our own opinion. But still, this race to be, let’s say, number one, this belief that if you are number one you will remain number one, and this momentum, I think, plus huge appetite, all this hype, definitely brings much, much more money to the table than we could ever imagine. And, you know, at the same time, it depends a lot, of course, on the algorithms, how efficient they will be. All of us most likely remember last year’s DeepSeek moment, and there are also other models which are much more efficient. So, you know, at some point we might understand that it’s overestimated, overinvested.

At the same time, I remember one of Zuckerberg’s quotes, you know, that said: okay, in the worst-case scenario, I will have overcapacity for a couple more years, and then I will use it.

Mr. Simonas Satunas

So my humble opinion is that compute is one element in a chain of elements and that sometimes we treat this element as the only one. Let’s explore a metaphor. Let’s imagine that we are in the 19th century and a prophet arrives and he tells us, okay, in five years, a new technology will emerge that will enable you to arrive from Delhi to Bangkok in less than an hour. But I don’t know what the technology is. Maybe it’s a ship, maybe it’s a car, maybe it’s a train, maybe it’s an airplane, but we must be prepared. So everyone is trying to be prepared and to build the right infrastructure. So let’s look at the structure. The problem is everyone thinks about it as something else.

So one will build an airport, and the other one will build rails, and the other one will build boats. I think that we are in this moment. We know that AGI will arrive. We know that it is soon, and we know that we must be prepared. Compute is one of the elements that is necessary, but energy is also important, and heating and cooling are also important. Data is extremely important. Implementation is important. Language is important, in India as well. I think that one of the elements that we are not investing enough in is the human element. Think about critical thinking, for example. I don’t know when AGI will arrive, but I know that already now, for us, it is very important to raise critical thinking among the public.

When you hear something in the news, when you see something, was it made by AI? What is the manipulation that is being forced upon me? So I think that investing in education is not less critical than investing in computing.

Mr. Vinayak Godse

And there is another element I want to come to you on, that you talked about. There is a very interesting discussion about System 1 and System 2 thinking: humans are more intuitive in terms of response, System 2 is more logical, and AI is probably helping with that, basically. But there is latency, and that is an important area, and that’s why they are putting a lot of effort into improving the compute, such that the latency of System 2 thinking is also less, so that your intuitive thinking can improve with that, basically. But it’s not only the compute: the perception, the ambient, the senses, the emotions, all that also matters a lot, and that’s where the limitations of language-based models are getting exposed, basically. And you did talk about that in your initial remarks. Can you just throw light on that?

On the language? On the different types of models, right? Ambient, compute for that matter, world models that people talk about, so…

Ms. Alexandra Bech Gjørv

Well, I just wanted to first agree with the… Mir, sorry. You know, if you are a government, then democratic access to compute is a big topic. I think you can really get lost in just investing in compute power, so invest in skills, in leading-edge technology understanding in your own country, and in participating in the regulatory approach. Because one of the things that I care about is that everybody says there should be human oversight, but you know that once you get into these dilemma situations, like what should happen in a car accident, humans are not very good at understanding risks, and humans are not very good at really making ethical decisions. They tend to go as far as, you know, do your best and then let moral luck decide who gets lost. But in machine-driven systems, you actually have to make decisions about those things. So I think we should be educating also our politicians to know that you have to make the hard choices, because otherwise the machines will make them for you, and they will continue our biases, and, you know, it will not end well.

But then I just wanted to share a little story that I heard. You know, Michael Lewis, the guy with Moneyball and everything, he has this anecdote that in the Basketball Association in the States, they started video surveillance, and the referees were all making racist calls and home-team calls. And by showing the videos and by showing the statistics, the next season they couldn’t find any bias at all. So I think that’s a good example of how the machines make people better, whereas we’re not able to better ourselves over time. So I just thought this was a nice anecdote for this.

Mr. Vinayak Godse

Thank you. And I’ll come to Kenny. So, as we are trying to solve problems of security and privacy with the current big capability of AI, we are struggling to understand what it means for security, what it means for privacy, and suddenly there is a significant acceleration that is happening. So what are we doing right now for security and privacy that could help us as we graduate to more and more powerful models, or anything else, basically? Can you just help us?

Mr. Kenny Kesar

Yeah. I think, on security, as we evolve, and we talked about compute: compute gets bigger, context gets bigger, we get smarter in terms of what AI can do, and definitely the same AI that can generate can pose more sophisticated attacks. And when we get to AGI, right, the biggest thing is I could be emulating a human. Let’s say in a company, I could emulate a CEO and make a decision, because I’m getting so close to being natural. The threat is real. Now, even today, let’s say without AI, you need to be just a step ahead of the bad actors or the persons who are into cybercrime. You just have to be a step ahead. And similarly, we talked about, you know, we were mentioning the human portion, right?

That the human portion needs to get more educated, where there are going to be sets of humans that are going to use the same AI to build better agents to fight them. So now it’s a question of the tooling that you have at hand. Even today, it’s the tools. It’s a human who’s building tools to fight your cyber threats. Imagine, in the next era… the only thing is, it’ll become nearly close to science fiction when agents try locking humans out. But that’s, I would say, still science fiction. But the fact is, as we evolve, we need to right-size the solution, and that’s how we will manage compute too. You don’t use an i7 computer to do a simple calculator task of adding two numbers, right?

You use a calculator. So, in that context, we’re going to have SLMs, small language models, that will do smaller things so that we can manage compute, and you have the bigger models that will solve world hunger, with different levels of machines and processing. I think there will be tiering. Right now, it’s a fight over who’s first, and with the fight to be first, it’s bigger, better, more elaborate. But as it evolves, you’ll get the right-sized fit, and only then will it be commercially viable. AI is not commercially viable today; the costs outweigh the ROI.

Mr. Vinayak Godse

Yeah, the current cost is significantly higher. You can do a POC, but once you put it into a production environment, the token cost is too high relative to the ROI. So, Nir, I want to come to you. There is an established understanding of security, privacy, safety and ethics, right? That’s the paradigm we at least try to understand right now. But would AGI be an altogether different paradigm, where the concepts of security and privacy will be foundationally very different from what we discussed right now?

Mr. Simonas Satunas

So, as I see it, when we try to deal with the risks that AI poses, we distinguish between four different levels. The first level is the classical risks, like privacy, security, cyber fraud. For every technology we have had since the 90s, we need to explain how it meets the current risks, and AI is much more powerful and poses a lot more risks; but these are the kinds of risks that, when we design products, we know how to deal with. Above it there is the level of human health and mental health, and we find that AI solutions can be quite problematic for mental health and can cause a lot of damage in some cases, and this is something that is not yet well understood and investigated. Above that there is a social level.

What does it do to the empathy between people? Normally people say: oh, I see that it’s bad for my kids, they are experiencing bullying or addiction. Usually what’s bad for your kids is also bad for you, and we understand that these are complications that we didn’t think about when we code. And the higher level is a macro level: what does it do to society? What does it do to democracy? I think that several countries are now experiencing foreign manipulation, and it is very easy to run campaigns built of fake news; we see that manipulation can become very problematic. So I think that a strategy, a national strategy and an international strategy, should address all these levels, and all these levels have mitigations, but they are costly and they need collaboration.

So we need to be in close collaboration in order to mitigate these risks.

Mr. Vinayak Godse

It’s good, the way you put that structure, right? The things it would do to us, to our brains, the things that will impact us individually. We discussed that in one of the sessions that we hosted on neuroscience and AI: what it means for the brain development process if we are using AI for every small thing that we want to do, whether the brain development process plateaus, what it will mean for society, and then what the macro kind of impact is. Do you want to add something on that?

Ms. Alexandra Bech Gjørv

Yeah, sorry, I just want to build on that. It’s not just targeted manipulation, or the things that we see in our kids, or somebody walking around with a button called Friend and that’s the only friend you need. It’s also, well structured in the geopolitical context, the ability to create completely different information universes. You don’t need to be neurologically strange; you just see a completely different view. We just published a paper in Science on these agent swarms, and I’m just reading a book about the Ukraine and Russia war going on now, and how large populations are overpowered by totally different images of the world from ours. Obviously your defense systems need to be hardened against those kinds of manipulations, but it’s also, actually, an offensive strategy to find good bots that enter those universes.

It’s an actual battleground in and of itself, and it’s very strange to think about the world in that way, but I think you’re very naive if you don’t start systematically working on how you make your conviction of what the world is like also reach the people that you need to somehow, hopefully not defeat, but relate to and convince that things can be better. So it’s not just a technological challenge. I would say it’s a huge mental leap for most of us.

Mr. Vinayak Godse

So, Simonas, the question is: the more we use AI, the more we become dependent on AI systems, right? And people’s ability to think critically will go down. The speed will increase the dependence, and AI will become more and more powerful. So with what we see in terms of misinformation, disinformation and deepfakes, there will probably be different kinds of cognitive warfare. How do you see such challenges? You talked about society and the individual; what kind of implications will this have for individuals, for society, and for the overall way the world is organized?

Simonas Cerniauskas

Yeah, absolutely. Basically, all those layers and all the dependencies, as you rightly stated. Critical thinking, of course, is one, but also awareness, education and the skills and abilities for people to understand these things. For this audience, more or less everything is self-evident, but when you start talking to people in the streets, or from different backgrounds, you realize that what is self-evident for you might be completely different for another person. Finding ways to educate people and to help them identify the threats is one of the key priorities, and also, I would say, an obligation on our side.

Mr. Vinayak Godse

One of the important challenges of critical thinking that I come across is this: critical thinking is nothing but your ability to give attention to various dimensions, nuances, perspectives and views, right? It takes a tremendous amount of effort for me to become a critical thinker, and AI solves that quite easily for me. It can bring all the attention, all the dimensions, all the nuances, all the viewpoints; I can quickly get access to them, right? So, Kenny, even for critical thinking, the question for you is: we will be depending too much on AI for that as well, right? So we need to know the distinction: what do you mean by critical thinking? Critical thinking is not just getting information or giving attention, so critical thinking is what?

That question is probably a very important one to ask.

Mr. Kenny Kesar

Critical thinking is very necessary for us to innovate further. The biggest issue that the AI world is facing is that 30% of the content AI is consuming is already AI-generated. So basically you’re feeding it back and it’s learning on the same model, when originally it was learning on artifacts that were built through different thinking processes. So I would say it’s a risk and a boon: a boon because it gets work done, but over time it’s a risk that we will stop evolving, because if we don’t exercise the brain as a muscle, if we don’t exercise it and don’t build those neurons which really influence critical thinking, it will actually be a very big loss to society.

So I would say general intelligence, everybody is asking for it. Now, how do we make sure that as AI and computers gain general intelligence, we are not losing our own intelligence, the intelligence needed to create that general intelligence again? It’s a vicious cycle. It’s a question we’re debating and trying to answer ourselves; everybody has perspectives. It’s something that I think about. Do I have an answer to it? No. But I feel that critical thinking, on both sides, is something that we really need to critically think about.

Mr. Vinayak Godse

Yeah, so everything that you think of as a solution carries this challenge of what it means in this new paradigm, which is important. Now, for the concluding part of this discussion, a question to each of you, briefly. We have been doing security, privacy and safety in a particular way, right? But as this paradigm is new, can we think about some anchor control right now that we should be mindful of, so that when it comes, we are ready? When AI was getting built, only after three years were we talking about AI governance and all these things. So is there a way for us to think about some kind of anchor control, some idea, some concept, that could help us navigate the challenges AGI could throw at us? I can start with you, briefly, and each of you can comment on this.

Simonas Cerniauskas

Yeah, well, of course there are some technical things, like watermarks, labeling and other technical features, that could help us a bit to identify at least some threats. Then we can also talk about regulatory measures, but that’s a broader topic for further discussion. Especially here in Europe, we tend to regulate and overregulate everything, but I think at least some measures can be really viable and really reasonable.

Mr. Simonas Satunas

Well, I come from a very small country. Israel is so small that it is like a pin on the map, and therefore our regulatory approach is that we are unable to determine the global regulation. And in this AI race, I think what matters more is the global regulation. So since we are a very tiny country, we must work with positive tools and say: okay, we cannot affect the regulation, but how can we work together with the AI developers to make the personality of the AI more moral, more ethical? How can we bring egalitarianism and equality into the consideration? How can we avoid bias? And I think that makes us work together with the industry and with academia in order to find out about new consequences.

I think that in many cases the giants, the big tech companies, don’t aim at unethical conclusions, but they work towards financial incentives that make AI behave in a very immoral way. If I take, for example, the conflict in Myanmar, in Burma, we saw that Meta was not actively promoting violence in Myanmar, but the algorithm of Meta was designed to attract attention in a way that made the more violent posts much more viral and made violence flourish. So if we are able to promote a dialogue, and if we are able to be together with the industry in the development of new AI, sometimes we will be able to make AI more ethical.

Mr. Vinayak Godse

So, Alexandra, your view. One part is the anchor control, the idea, the concept; but the second part is how do you get in early? How do you get in early in the game, right? When AI happened, it is now, in ’25, ’26, that we are discussing responsibility, alignment, adoption and governance, right? So in the AGI discussion, what are the anchor controls, and what are the ideas and ways for us to get into the discussion early?

Ms. Alexandra Bech Gjørv

Well, I think at least you need to work on resilience and robust rollback mechanisms. A little bit like what we’re experiencing now in Europe, where we all have to practice living without electricity. You know that it’s a realistic option that somebody sabotages your electricity, and then you look at how dependent we really are and what the alternatives are, and you plan from a point of view where you not only work to reduce risk but you really work to reduce the consequences of those risks occurring. If you work on the traditional risk matrix, it’s always about avoiding bad outcomes, but then also making the bad outcomes less bad. That’s something that, at least we think, the new realities are propelling, and I think that’s important.

Mr. Vinayak Godse

Kenny, your voice on this?

Mr. Kenny Kesar

Sure. Actually, the way we look at it, in terms of AI, from ethical AI to biases to data privacy, it’s very similar, akin to what a human would do even today. Today we have a standard operating procedure: we review for biases, we review for content. In our organizations, we have functions that manage this. And the other thing is, we train people on ethical practices, on non-bias and things like that. So ultimately AI is very similar to that, where, for the lack of a better word, I call it AOP instead of SOP, agent operating procedure or AI operating procedure, where we have to train AI not to be biased.

So I feel that there is a big industry in the offing, which is going to manage and create models, LLMs, to validate that the responses from your common models are ethically right and non-biased. Because today, as organizations, we invite experts from outside to come and review our practices, whether we are ethical, whether we are transparent, a number of those things. Very similarly, as we mature towards more general intelligence and new ways of working, I feel that these control structures will come in cyber security and in the ethical, unbiased use of AI. So ultimately it will be a checks-and-balances system, and we will see innovation in these areas.

That is how we feel it. It’s an evolving area. Let’s see how it happens.

Mr. Vinayak Godse

Thank you, all of you, for really helping us understand the meaning of this concept of AGI, how it will pan out from now, and what kind of challenges it will throw at us. There are definitely opportunities that we don’t have time to discuss, what it will bring to us. But then, what could we start doing right now? This was definitely one of the important conversations, and I hope it helps you understand what we are talking about when we talk about AGI today. Thank you. Join me in giving a big hand to my co-panelists for helping us understand. Thank you, Simonas. Thank you, Nir.

Thank you. We have a photo shoot. Alexandra, we need you to come here for the photo shoot. I also request the fireside panelists, Hendrikus sir and Narendra sir, to please join us for the photo shoot. Before we commence the fireside session, I would like to announce the launch of the AI Cyber Security Terminal, which is published today. Thank you.

Mr. Simonas Satunas

Speech speed

161 words per minute

Speech length

1149 words

Speech time

426 seconds

Simple professional‑task definition of AGI

Explanation

Satunas defines AGI as a system that can perform every human task with the same accuracy and professionalism as a human expert. He sees this as a clear, digestible way to communicate what true AGI would look like.


Evidence

“AGI will be something that can perform every human task at the level of accuracy and professionality of a human professional” [1]. “I would say that it’s a matter of 3 years to 7 years until we reach that milestone” [1].


Major discussion point

Definition & Timeline of AGI


Topics

Artificial intelligence


Compute is one element in a chain

Explanation

Satunas stresses that compute alone does not drive AI progress; energy, data, hardware, and human expertise are equally vital components of the ecosystem.


Evidence

“Compute is one of the elements that is necessary, but energy is also important and heating and cold is also important” [32]. “so my humble opinion is that compute is one element in a chain of elements and that sometimes we treat this element as the only one” [33].


Major discussion point

Technical Requirements & Compute


Topics

Artificial intelligence | Environmental impacts | The enabling environment for digital development


Multi‑level risk framework

Explanation

Satunas outlines a four‑tiered risk hierarchy for AI, starting with classic privacy and cyber threats, then mental‑health impacts, followed by social‑cohesion issues, and finally macro‑societal threats such as democratic manipulation.


Evidence

“we distinguish between four different levels the first level is the classical risks like privacy security cyber fraud … then mental health … then social level … then macro level …” [68].


Major discussion point

Security, Privacy & Safety


Topics

Human rights and the ethical dimensions of the information society | Building confidence and security in the use of ICTs


Small‑nation global coordination

Explanation

Satunas argues that tiny countries like Israel cannot dictate global AI regulation alone and must collaborate internationally to embed ethics, egalitarianism, and bias mitigation into AI development.


Evidence

“well I come from a very small country Israel … we are unable to determine the global regulation … we must work with positive tools … how can we work together with the AI developers in order to make the personality of the AI more moral, more ethic?” [74]. “I think that a strategy, a national strategy and an international strategy should address all these levels … they need collaboration” [87].


Major discussion point

Governance, Regulation & Anchor Controls


Topics

Artificial intelligence | The enabling environment for digital development


Investing in education & critical thinking

Explanation

Satunas highlights that education and public critical‑thinking skills are as essential as compute investments for preparing societies for AGI and preventing manipulation.


Evidence

“I think that investing in education is not less critical than investing in computing” [35]. “I don’t know what AGI will arrive, but I know that already now for us it is very important to raise critical thinking among the public” [24].


Major discussion point

Societal Impact & Critical Thinking


Topics

Capacity development | Social and economic development


Mr. Vinayak Godse

Speech speed

104 words per minute

Speech length

1988 words

Speech time

1138 seconds

India may miss the AGI window

Explanation

Godse warns that if India does not focus now on emerging AI capabilities, it will lose the opportunity to shape AGI governance and reap its benefits.


Evidence

“If you don’t pay attention now what is coming in next 2, 3, 5 years of time or 10 years of time that is probably the timeline for AGI, then probably we will miss on again thinking, talking, discussing, governing it better basically” [20]. “and I can tell about India so probably we didn’t pay much attention when AI was coming” [26].


Major discussion point

Definition & Timeline of AGI


Topics

Artificial intelligence | Capacity development


Compute drives attention & latency

Explanation

Godse links compute capacity to the ability of AI systems to allocate attention and meet real‑time, contextual demands, emphasizing that insufficient compute hampers latency and responsiveness.


Evidence

“Ability to give attention to all possible thing that” [11]. “Compute is one of the elements that is necessary, but energy is also important and heating and cold is also important” [32].


Major discussion point

Technical Requirements & Compute


Topics

Artificial intelligence | Building confidence and security in the use of ICTs


Simonas Cerniauskas

Speech speed

132 words per minute

Speech length

632 words

Speech time

286 seconds

Smarter, non‑narrow AI

Explanation

Cerniauskas describes AGI as an AI that must reason, learn, adapt, and transfer knowledge, moving beyond narrow, task‑specific applications.


Evidence

“the smarter AI should reason” [16]. “It should transfer knowledge” [17]. “It should adapt” [18]. “It should learn” [19].


Major discussion point

Definition & Timeline of AGI


Topics

Artificial intelligence


Investment surge may be over‑inflated; algorithmic efficiency matters

Explanation

Cerniauskas cautions that the current flood of AI investment could be excessive and that long‑term sustainability will depend on algorithmic efficiency rather than raw spending.


Evidence

“we are at super high cycle … this hype definitely brings much much more money to the table … it depends a lot of course on the algorithms how efficient they will be” [45].


Major discussion point

Technical Requirements & Compute


Topics

Artificial intelligence | Financial mechanisms


Technical safeguards & European regulation

Explanation

He advocates for technical measures such as labeling and watermarking, coupled with strong regulatory oversight, noting Europe’s tendency toward stringent AI regulation.


Evidence

“there are some technical things like … labeling and other technical features that could help us … then also we can talk about regulator measures … especially here we in Europe we tend to regulate and overregulate everything” [86].


Major discussion point

Governance, Regulation & Anchor Controls


Topics

Artificial intelligence | The enabling environment for digital development


Cognitive warfare & misinformation

Explanation

Cerniauskas warns that AI‑generated “information universes” can be weaponized for large‑scale manipulation, requiring systematic societal defenses.


Evidence

“we just published a paper in science on these agent swarms … large populations are overpowered by totally different images of the world … your defense systems need to be hardened against those kinds of manipulations” [92]. “so what we see in terms of this misinformation, disinformation and defake, so probably will be different kind of cognitive warfare” [88].


Major discussion point

Societal Impact & Critical Thinking


Topics

Building confidence and security in the use of ICTs | Artificial intelligence


Ms. Alexandra Bech Gjørv

Speech speed

148 words per minute

Speech length

942 words

Speech time

380 seconds

Resilience & rollback mechanisms

Explanation

Gjørv stresses the need for robust resilience planning and rollback capabilities to mitigate worst‑case disruptions such as power loss.


Evidence

“Well, I think at least you need to work on resilience and robust rollback mechanisms” [93]. “… planning … reducing consequences of those risks occurring” [91].


Major discussion point

Security, Privacy & Safety


Topics

Building confidence and security in the use of ICTs | Artificial intelligence


Anchor controls for early‑stage AGI

Explanation

She supports the concept of “anchor controls” – early‑stage governance tools that embed resilience and rollback into AGI development.


Evidence

“can we think about some anchor control right now that we should be mindful of … is there a way for us to think about some kind of anchor control …” [22]. “Well, I think at least you need to work on resilience and robust rollback mechanisms” [93].


Major discussion point

Governance, Regulation & Anchor Controls


Topics

Artificial intelligence | Building confidence and security in the use of ICTs


AI can expose and reduce human bias

Explanation

Gjørv cites an example where video‑surveillance in sports helped eliminate racist coaching decisions, illustrating AI’s potential to surface and curb bias.


Evidence

“You know, Michael Lewis … they started video surveillance and the coaches were all making racist decisions and home team decisions” [118].


Major discussion point

Societal Impact & Critical Thinking


Topics

Human rights and the ethical dimensions of the information society


Mr. Kenny Kesar

Speech speed

156 words per minute

Speech length

1299 words

Speech time

497 seconds

Accuracy measured in “nines”

Explanation

Kesar explains that moving AI accuracy from 90% toward 99.99% (adding “nines”) requires many years of research, and each additional nine adds significant time.


Evidence

“So for AI to get from 90% to 99%, it took five to ten years” [57]. “Now, every nine that you add is another year or two years to the point where you get to 99.99 and nines” [58]. “Now, the epitome of accuracy is five nines” [59].


Major discussion point

Technical Requirements & Compute


Topics

Artificial intelligence


Tiered model approach for commercial viability

Explanation

He proposes using small, task‑specific language models for low‑cost operations while reserving large models for high‑impact problems, keeping AI economically sustainable.


Evidence

“So in the context of the world, we’re going to have SLMs which is small language models that will do smaller things so that we can manage compute” [69]. “Then only it will be commercially viable” [70]. “You have the bigger models that will solve world hunger …” [71].


Major discussion point

Technical Requirements & Compute


Topics

Artificial intelligence | The digital economy


AGI could emulate humans for sophisticated attacks

Explanation

Kesar warns that an AGI capable of mimicking human behavior (e.g., a CEO) could launch highly sophisticated cyber‑attacks, demanding proactive security measures.


Evidence

“the same AI that can generate, can pose more sophisticated attacks and when we get to AGI right, the biggest thing is I could be emulating a human … I could emulate a CEO and make a decision because I’m getting so close to being natural” [9].


Major discussion point

Security, Privacy & Safety


Topics

Building confidence and security in the use of ICTs | Artificial intelligence


AI Operating Procedures (AOP)

Explanation

He introduces the concept of AI Operating Procedures, analogous to SOPs, to embed bias reviews and ethical audits into AI system lifecycles.


Evidence

“I call it AOP instead of SOP, agent operating procedure or AI operating procedure, where we have to train AI in terms not to be biased” [15].


Major discussion point

Governance, Regulation & Anchor Controls


Topics

Artificial intelligence


Over‑reliance erodes critical thinking

Explanation

Kesar argues that excessive dependence on AI hampers human cognitive development, creating a feedback loop where AI‑generated content dominates learning and reduces societal critical thinking.


Evidence

“But in overtime it’s a risk that we will stop evolving because if we don’t exercise the brain as a muscle, if we don’t exercise it and don’t build those neurons which really influence critical thinking, it will be actually a very big loss to society” [114].


Major discussion point

Societal Impact & Critical Thinking


Topics

Social and economic development | Capacity development


Agreements

Agreement points

AGI requires broader capabilities beyond current narrow AI applications

Speakers

– Simonas Cerniauskas
– Mr. Simonas Satunas
– Mr. Kenny Kesar

Arguments

AGI should reason, learn, adapt, transfer knowledge and be broader than current narrow AI applications


AGI is something that can perform every human task at the level of accuracy and professionality of a human professional, achievable in 3-7 years


AGI requires consistency across domains and reliability, moving from probabilistic to deterministic systems


Summary

All speakers agree that AGI represents a significant leap from current AI systems, requiring broader capabilities, consistency across domains, and human-level performance in diverse tasks


Topics

Artificial intelligence


Education and human capacity development are critical for managing AI/AGI risks

Speakers

– Simonas Cerniauskas
– Mr. Simonas Satunas
– Mr. Kenny Kesar

Arguments

Education and awareness are crucial as threats may not be obvious to general populations


Compute is just one element; energy, data, implementation, language, and human education are equally critical


AI dependency threatens critical thinking as 30% of AI training content is already AI-generated, creating a feedback loop


Summary

All speakers emphasize that human education, critical thinking, and awareness are essential components for successfully managing AI/AGI development and risks


Topics

Capacity development | Human rights and the ethical dimensions of the information society


Multi-layered approach needed for AI/AGI governance and risk management

Speakers

– Mr. Simonas Satunas
– Ms. Alexandra Bech Gjørv
– Mr. Kenny Kesar

Arguments

Four risk levels exist: classical risks, human/mental health, social empathy impacts, and macro societal/democratic effects


Information warfare and agent swarms create different information universes, requiring defensive and offensive strategies


AI Operating Procedures (AOP) similar to human Standard Operating Procedures, with external validation systems for ethical compliance


Summary

Speakers agree that managing AI/AGI requires comprehensive approaches addressing multiple risk levels from individual to societal, with structured governance frameworks


Topics

Building confidence and security in the use of ICTs | Human rights and the ethical dimensions of the information society | The enabling environment for digital development


Compute investment alone is insufficient for AGI development

Speakers

– Simonas Cerniauskas
– Mr. Simonas Satunas
– Ms. Alexandra Bech Gjørv

Arguments

Massive compute investment is driven by the race to be first, though efficiency improvements may reduce requirements


Compute is just one element; energy, data, implementation, language, and human education are equally critical


Low latency, energy efficient hardware, and neuromorphic computing are needed for human-like situational awareness


Summary

Speakers agree that while compute is important, AGI development requires a holistic approach including energy, hardware efficiency, data quality, and human factors


Topics

Artificial intelligence | Environmental impacts | Financial mechanisms


Similar viewpoints

Both speakers advocate for practical, collaborative approaches to AI governance rather than purely regulatory ones, emphasizing resilience and working with industry

Speakers

– Mr. Simonas Satunas
– Ms. Alexandra Bech Gjørv

Arguments

Small countries should focus on working with AI developers to embed moral and ethical considerations rather than attempting regulation


Need for resilience planning and robust rollback mechanisms, similar to preparing for infrastructure failures


Topics

The enabling environment for digital development | Building confidence and security in the use of ICTs


Both speakers see potential for AI systems to improve human decision-making and reduce biases through structured approaches and external validation

Speakers

– Ms. Alexandra Bech Gjørv
– Mr. Kenny Kesar

Arguments

Machines can help reduce human biases, as demonstrated in basketball referee decision-making improvements


AI Operating Procedures (AOP) similar to human Standard Operating Procedures, with external validation systems for ethical compliance


Topics

Artificial intelligence | Human rights and the ethical dimensions of the information society


Both speakers emphasize the serious security and information warfare threats posed by advanced AI systems, requiring proactive defensive measures

Speakers

– Mr. Kenny Kesar
– Ms. Alexandra Bech Gjørv

Arguments

AGI poses sophisticated attack capabilities, including the ability to emulate humans like CEOs for malicious purposes


Information warfare and agent swarms create different information universes, requiring defensive and offensive strategies


Topics

Building confidence and security in the use of ICTs | Artificial intelligence


Unexpected consensus

AI can improve human decision-making and reduce bias

Speakers

– Ms. Alexandra Bech Gjørv
– Mr. Kenny Kesar

Arguments

Machines can help reduce human biases, as demonstrated in basketball referee decision-making improvements


AI Operating Procedures (AOP) similar to human Standard Operating Procedures, with external validation systems for ethical compliance


Explanation

Despite extensive discussion of AI risks, there was unexpected consensus that AI systems can actually help humans make better, less biased decisions – a positive counterpoint to the dominant risk narrative


Topics

Artificial intelligence | Human rights and the ethical dimensions of the information society


Timeline uncertainty and investment dependency

Speakers

– Simonas Cerniauskas
– Ms. Alexandra Bech Gjørv
– Mr. Simonas Satunas

Arguments

Massive compute investment is driven by the race to be first, though efficiency improvements may reduce requirements


Timeline depends on investment levels, and machines still struggle with context interpretation, emotions, and dynamic decision-making


AGI is something that can perform every human task at the level of accuracy and professionalism of a human professional, achievable in 3-7 years


Explanation

Despite different perspectives on AGI timelines, speakers unexpectedly agreed that the timeline is heavily dependent on investment levels and resource allocation rather than being technologically predetermined


Topics

Artificial intelligence | Financial mechanisms


Overall assessment

Summary

The speakers demonstrated strong consensus on the need for holistic approaches to AGI development, emphasizing education, multi-layered governance, and the insufficiency of compute-only solutions. They agreed on the transformative potential of AGI while acknowledging significant technical and societal challenges.


Consensus level

High level of consensus on fundamental principles and approaches, with some variation in timelines and specific implementation strategies. This consensus suggests a mature understanding of AGI challenges and the need for comprehensive, collaborative solutions across technical, social, and governance dimensions.


Differences

Different viewpoints

Timeline for achieving AGI

Speakers

– Mr. Simonas Satunas
– Ms. Alexandra Bech Gjørv

Arguments

AGI is something that can perform every human task at the level of accuracy and professionalism of a human professional, achievable in 3-7 years


Timeline depends on investment levels, and machines still struggle with context interpretation, emotions, and dynamic decision-making


Summary

Satunas provides a specific 3-7 year timeline based on changing public trust, while Gjørv argues the timeline is primarily dependent on investment levels and emphasizes current technical limitations in contextual understanding


Topics

Artificial intelligence


Approach to AI governance for smaller nations

Speakers

– Mr. Simonas Satunas
– Simonas Cerniauskas

Arguments

Small countries should focus on working with AI developers to embed moral and ethical considerations rather than attempting regulation


Technical solutions like watermarks and regulatory measures are needed, though over-regulation is a concern


Summary

Satunas advocates for collaborative approaches with industry rather than regulation for small countries, while Cerniauskas supports both technical solutions and regulatory measures, though with caution about over-regulation


Topics

Artificial intelligence | The enabling environment for digital development | Human rights and the ethical dimensions of the information society


Primary focus for AGI preparedness

Speakers

– Mr. Simonas Satunas
– Simonas Cerniauskas

Arguments

Compute is just one element; energy, data, implementation, language, and human education are equally critical


Massive compute investment is driven by the race to be first, though efficiency improvements may reduce requirements


Summary

Satunas emphasizes a holistic approach including human education and multiple infrastructure elements, while Cerniauskas focuses more on the competitive dynamics driving compute investment and potential efficiency gains


Topics

Artificial intelligence | Capacity development | Environmental impacts | Financial mechanisms


Unexpected differences

Role of AI in reducing vs. perpetuating bias

Speakers

– Ms. Alexandra Bech Gjørv
– Mr. Simonas Satunas

Arguments

Machines can help reduce human biases, as demonstrated in basketball referee decision-making improvements


Small countries should focus on working with AI developers to embed moral and ethical considerations rather than attempting regulation


Explanation

Unexpectedly, Gjørv presents AI as a solution to human bias while Satunas warns about AI perpetuating bias through financial incentives, creating a fundamental disagreement about AI’s inherent tendency toward bias


Topics

Artificial intelligence | Human rights and the ethical dimensions of the information society


Sustainability of current compute investment levels

Speakers

– Simonas Cerniauskas
– Mr. Simonas Satunas

Arguments

Massive compute investment is driven by the race to be first, though efficiency improvements may reduce requirements


Compute is just one element; energy, data, implementation, language, and human education are equally critical


Explanation

Unexpectedly, there’s disagreement on whether current compute investment levels are appropriate – Cerniauskas suggests they may be excessive and unsustainable, while Satunas argues compute is being overemphasized relative to other equally important elements


Topics

Artificial intelligence | Financial mechanisms | Environmental impacts


Overall assessment

Summary

The main areas of disagreement center on AGI timeline predictions, governance approaches for different sized nations, investment priorities, and the role of AI in bias reduction. Speakers generally agreed on the importance of education and security concerns but differed significantly on implementation strategies.


Disagreement level

Moderate level of disagreement with significant implications for AGI development strategy. The disagreements suggest different philosophical approaches to AI development – some favoring collaborative industry engagement, others preferring regulatory frameworks, and still others emphasizing technical resilience measures. These differences could lead to fragmented approaches to AGI governance and preparedness.


Partial agreements

Partial agreements

Both agree on significant security threats from advanced AI but disagree on approach – Kesar focuses on staying ahead of bad actors through better tools, while Gjørv emphasizes building resilience and rollback mechanisms

Speakers

– Mr. Kenny Kesar
– Ms. Alexandra Bech Gjørv

Arguments

AGI enables sophisticated attacks, including the ability to emulate humans such as CEOs for malicious purposes


Information warfare and agent swarms create different information universes, requiring defensive and offensive strategies


Topics

Building confidence and security in the use of ICTs | Artificial intelligence


Both agree on the importance of education and awareness about AI risks, but disagree on implementation – Satunas focuses on critical thinking and collaboration with industry, while Cerniauskas emphasizes technical solutions and regulatory measures

Speakers

– Mr. Simonas Satunas
– Simonas Cerniauskas

Arguments

Education and awareness are crucial as threats may not be obvious to general populations


Four risk levels exist: classical risks, human/mental health, social empathy impacts, and macro societal/democratic effects


Topics

Capacity development | Human rights and the ethical dimensions of the information society


Both agree that human education and critical thinking are essential, but disagree on priority – Kesar focuses on the risk of cognitive atrophy from AI dependency, while Satunas emphasizes education as one of several equally important elements for AGI preparedness

Speakers

– Mr. Kenny Kesar
– Mr. Simonas Satunas

Arguments

AI dependency threatens critical thinking as 30% of AI training content is already AI-generated, creating a feedback loop


Compute is just one element; energy, data, implementation, language, and human education are equally critical


Topics

Artificial intelligence | Capacity development | Human rights and the ethical dimensions of the information society


Takeaways

Key takeaways

AGI is defined as AI that can perform every human task at professional levels, with consistency and reliability across domains, potentially achievable within 3-7 years


AGI development requires massive compute investment, but also needs advances in energy efficiency, data quality, implementation strategies, and critically – human education and critical thinking skills


AGI poses escalating security risks including sophisticated attacks, human impersonation capabilities, and the ability to create entirely different information universes for manipulation


Four levels of AI risks exist: classical cybersecurity risks, human/mental health impacts, social empathy degradation, and macro-level threats to democracy and society


Current AI dependency is already creating a dangerous feedback loop where 30% of AI training content is AI-generated, potentially limiting future innovation and human critical thinking development


Small countries should focus on collaborating with AI developers to embed ethical considerations rather than attempting independent regulation, while larger entities should work on global regulatory frameworks


Resilience planning and robust rollback mechanisms are essential, similar to preparing for infrastructure failures, along with AI Operating Procedures (AOP) that mirror human ethical oversight systems


Resolutions and action items

Develop AI Operating Procedures (AOP) similar to Standard Operating Procedures for humans, with external validation systems for ethical compliance


Invest in education and critical thinking skills development alongside compute infrastructure


Create technical solutions like watermarks and labeling systems to identify AI-generated content


Establish collaboration frameworks between industry, academia, and government for ethical AI development


Implement resilience planning and rollback mechanisms for AI systems


Work on global regulatory coordination rather than fragmented local approaches
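The AI Operating Procedures (AOP) item above — external validation of AI outputs against ethical rules, by analogy with organizational auditing — can be illustrated with a minimal sketch. The rule names, checks, and label format below are hypothetical placeholders invented for this example, not a real standard or API:

```python
# Illustrative sketch of an external validation gate in the spirit of the
# AI Operating Procedures (AOP) idea discussed in the session: an output
# must pass an ethics/compliance checklist before release. The rule names
# and checks below are hypothetical placeholders, not a real standard.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Rule:
    name: str
    check: Callable[[str], bool]  # True means the output complies

RULES: List[Rule] = [
    # Block outputs that impersonate a named human authority.
    Rule("no_impersonation", lambda text: "speaking as your ceo" not in text.lower()),
    # Require an explicit AI-origin label (a simple stand-in for watermarking).
    Rule("labels_ai_origin", lambda text: "[ai-generated]" in text.lower()),
]

def validate(output: str) -> List[str]:
    """Return names of violated rules; an empty list means release is allowed."""
    return [r.name for r in RULES if not r.check(output)]

print(validate("[AI-generated] Quarterly summary: revenue grew 4%."))  # → []
```

A production system would replace these string checks with classifiers and audit logs; the point is only that the gate sits outside the model, as the panel's auditing analogy suggests.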


Unresolved issues

No definitive timeline consensus for AGI achievement – estimates range from 3-7 years with significant uncertainty


Unclear how to balance AI dependency benefits with the risk of diminishing human critical thinking capabilities


Unresolved question of whether current massive compute investments represent a sustainable approach or an unsustainable bubble


No clear solution for preventing the AI training feedback loop where AI learns from AI-generated content


Uncertainty about how to maintain human agency and decision-making capability as AGI approaches human-level performance


Unaddressed question of how to ensure democratic access to AGI capabilities across different countries and populations


No consensus on optimal regulatory approach – balancing innovation with safety and ethical considerations


Suggested compromises

Use tiered AI systems with Small Language Models (SLMs) for simple tasks and larger models for complex problems to manage compute costs and efficiency


Implement combined probabilistic and deterministic AI systems as a bridge toward fully deterministic AGI


Focus on positive collaboration tools with AI developers to embed ethics rather than restrictive regulation


Balance technical solutions (watermarks, labeling) with regulatory measures while avoiding over-regulation


Develop both defensive and offensive strategies for information warfare while maintaining ethical boundaries


Create external validation systems for AI ethics similar to current organizational auditing practices


Thought provoking comments

50% of Israelis trust [Gen AI tools] more than they trust their friends, many trust them more than they trust human professionals… this puts us closer to AGI so I would say that it’s a matter of 3 years to 7 years until we reach that milestone

Speaker

Mr. Simonas Satunas


Reason

This comment is deeply insightful because it reframes the AGI timeline discussion from purely technical capabilities to societal acceptance and trust. It suggests that AGI arrival may be determined more by human perception and adoption than by achieving perfect technical benchmarks.


Impact

This shifted the conversation from abstract technical definitions to concrete social indicators, leading other panelists to discuss practical implications like security threats and the need for education. It grounded the AGI discussion in current human behavior rather than future technical possibilities.


I think that one of the elements that we are not investing enough is the human element. Think about critical thinking, for example… investing in education is not less critical than investing in computing.

Speaker

Mr. Simonas Satunas


Reason

This comment challenges the dominant narrative that AGI development is primarily about compute power and technical infrastructure. It introduces a crucial counterpoint that human cognitive skills and education are equally important investments for preparing for AGI.


Impact

This comment fundamentally redirected the discussion from technical infrastructure to human preparedness. It sparked a sustained conversation about critical thinking, education, and the risk of cognitive dependency on AI that continued throughout the remainder of the panel.


30% of the content [AI] is consuming is AI generated already. So basically you’re feeding back and it’s learning on the same model… it’s a risk that we will stop evolving because if we don’t exercise the brain as a muscle… it will be actually a very big loss to society.

Speaker

Mr. Kenny Kesar


Reason

This observation reveals a profound recursive problem in AI development – the system is increasingly training on its own outputs, potentially leading to intellectual stagnation. It also draws a compelling parallel between cognitive atrophy and physical muscle atrophy.


Impact

This comment introduced a new dimension of risk that hadn’t been discussed – the feedback loop problem and human cognitive decline. It deepened the conversation about long-term societal implications and reinforced the earlier point about the importance of maintaining human critical thinking capabilities.


We distinguish between four different levels [of AI risks]: classical risks like privacy security… human health and mental health… social level… and macro level – what does it do to democracy?

Speaker

Mr. Simonas Satunas


Reason

This structured framework for understanding AI risks is exceptionally valuable because it moves beyond technical concerns to encompass psychological, social, and democratic implications. It provides a comprehensive taxonomy for thinking about AGI’s broader impacts.


Impact

This framework organized the subsequent discussion around concrete risk categories, allowing panelists to address specific levels of concern. It elevated the conversation from general fears about AGI to structured analysis of different types of societal impact.


The ability to create completely different information universes… it’s an actual battleground in and of itself, and it’s very strange to think about the world in that way, but I think you’re very naive if you don’t start systematically working on how you make your conviction of what the world is like also part of the people that you need to… relate to and convince.

Speaker

Ms. Alexandra Bech Gjørv


Reason

This comment reveals the epistemological crisis that AGI could create – not just misinformation, but entirely separate realities. It suggests that the challenge isn’t just technical but involves fundamental questions about shared truth and democratic discourse.


Impact

This observation added a geopolitical and philosophical dimension to the discussion, connecting AGI development to current conflicts and information warfare. It broadened the scope from individual and societal impacts to international relations and the nature of reality itself.


Overall assessment

These key comments transformed what could have been a purely technical discussion about AGI capabilities into a nuanced exploration of human-AI coevolution. The most impactful insights shifted focus from ‘when will we achieve AGI?’ to ‘how is AGI already changing us?’ The discussion evolved from infrastructure and compute power to human psychology, social structures, and democratic institutions. The panelists’ most thought-provoking contributions revealed that AGI’s arrival may be less about reaching technical benchmarks and more about the recursive relationship between advancing AI capabilities and declining human cognitive independence. This created a sobering narrative about the need for proactive human development alongside technological advancement.


Follow-up questions

When exactly will AGI be achieved and what are the precise technical milestones?

Speaker

Simonas Cerniauskas


Explanation

Cerniauskas explicitly stated that none of the panelists would be able to answer when AGI will arrive, indicating this as a key unanswered question that requires further research and discussion.


How can we develop low-latency, energy-efficient hardware and neuromorphic computing architectures beyond autoregression?

Speaker

Ms. Alexandra Bech Gjørv


Explanation

She identified these as technical requirements needed to achieve human-level situational awareness and decision-making speed, suggesting these are areas requiring significant research and development.


How do we balance the need for studying private, personal data to achieve human-level situational awareness while maintaining privacy limits?

Speaker

Ms. Alexandra Bech Gjørv


Explanation

She highlighted this as a fundamental tension that needs resolution – AGI requires extensive data including private information, but this conflicts with privacy requirements.


How do we develop AI systems that can research and invent new ideas like the human brain, beyond just regression-based learning?

Speaker

Mr. Kenny Kesar


Explanation

He identified that current AI is built on regression and learning from existing data, but true intelligence requires the ability to research and create new ideas, which is still an unsolved challenge.


How do we strengthen critical thinking among the public so that people can identify AI-generated content and manipulation?

Speaker

Mr. Simonas Satunas


Explanation

He emphasized that investing in education and critical thinking is as important as investing in computing power, but the specific methods for achieving this remain unclear.


How do we ensure humans don’t lose their intelligence and critical thinking abilities as AI becomes more capable?

Speaker

Mr. Kenny Kesar


Explanation

He raised concerns about a vicious cycle where humans become dependent on AI for thinking, potentially losing their own cognitive abilities, but admitted he doesn’t have an answer to this challenge.


How do we address the problem of AI consuming increasingly AI-generated content, potentially limiting its learning and evolution?

Speaker

Mr. Kenny Kesar


Explanation

He noted that 30% of content AI consumes is already AI-generated, creating a feedback loop that could limit AI’s ability to learn from diverse human thinking processes.


How can we work with AI developers to make AI personality more moral and ethical, especially regarding financial incentives that drive immoral behavior?

Speaker

Mr. Simonas Satunas


Explanation

He suggested that collaboration with industry is needed to address cases where AI behaves immorally not by design but due to financial incentives, using the Myanmar violence example.


How do we develop robust rollback mechanisms and resilience strategies for when AI systems fail or are compromised?

Speaker

Ms. Alexandra Bech Gjørv


Explanation

She emphasized the need to not just reduce risks but also reduce the consequences when bad outcomes occur, suggesting this requires further research and planning.


What specific technical measures like watermarks and labeling can effectively help identify AI-generated threats?

Speaker

Simonas Cerniauskas


Explanation

He mentioned these as potential technical solutions but didn’t elaborate on their effectiveness or implementation, suggesting this needs further investigation.


How do we distinguish between information gathering and true critical thinking in an AI-assisted world?

Speaker

Mr. Vinayak Godse


Explanation

He raised the concern that if AI can provide all perspectives and information easily, we need to understand what constitutes genuine critical thinking beyond just accessing information.


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.