Advancing Scientific AI with Safety Ethics and Responsibility

20 Feb 2026 11:00h - 12:00h

Advancing Scientific AI with Safety Ethics and Responsibility

Session at a glanceSummary, keypoints, and speakers overview

Summary

The panel explored how the rapid emergence of AI-enabled biodesign tools is shifting biosecurity risk from traditional laboratory containment to the upstream design phase, creating a new governance challenge that demands attention to data governance, model evaluation and red-team activities [6-13]. Participants argued that India’s heterogeneous scientific ecosystem cannot rely on a single central authority; instead, oversight must be decentralized to empower biosafety officers, information-security units and other institutional actors, establishing multiple, coordinated checks and balances [24-27][28-31].


To reconcile open-science benefits with high-risk capabilities, a tiered-access model combined with contextual norms and pre-deployment assessments using structured rubrics was recommended, drawing on RAND Europe’s risk index and a “know-your-customer” style credentialing [41-49][50-57]. The speakers emphasized that assessment results should be shared through a credentialed network with tiered confidentiality rather than kept proprietary, and that a six-monthly independent monitoring ritual-potentially housed in an AI-safety institute linked to governments and international bodies-would provide continuous risk oversight [92-99][100-119]. Recognizing limited AI readiness in many Global South institutions, they called for socio-cultural evaluation, deployment of small-model solutions, self-regulation commitments and capacity-building programmes to make safeguards proportionate and functional [62-71][75-79]. A unified yet adaptable framework was proposed, integrating participatory stakeholder involvement, accountability mechanisms for developers to document testing, and self-regulation endorsements [72-77][78-80].


Cross-border challenges were highlighted, with fragmented data standards and divergent legal regimes hampering biosurveillance; the panel urged harmonised federated standards (e.g., HL7-FHIR-style), pre-negotiated legal safe-harbours, and shared evaluation criteria embedded in national systems [226-233][234-241]. To close incident-reporting gaps, a new AI incident taxonomy covering physical, psychological, cyber, algorithmic, socio-economic and environmental harms was described, alongside toolkits for assessing user perceptions and building AI literacy in healthcare settings [270-276]. Emerging powers such as India are creating sandboxes, a Global-South trustworthy-AI network and an AI-safety commons to enable low-resource countries to adopt tailored governance while learning from each other’s experiences [161-169][170-180].


The discussion concluded that effective governance must move beyond model-centric audits to systemic, socio-technical assessments that consider capability uplift, incentive structures and the cross-border diffusion of risk, integrate AI evaluation into grant reviews and biosafety panels, and incorporate tech-sovereignty measures for AI security [189-197][198-202][147-149][155-156]. Overall, a decentralized, collaborative, and context-aware architecture-supported by regular independent evaluation, capacity building and interoperable standards-is essential to safely harness AI-driven scientific innovation [24-27][41-57][122-130][226-233][189-202].


Keypoints

Major discussion points


AI is moving bio-risk upstream from physical labs to the design stage, creating a new governance challenge.


The panel highlighted that traditional bio-security relied on “physical infrastructure and lab facilities” [7-8] but AI-driven biodesign tools now let researchers “engineer proteins, optimise DNA sequences…” without those constraints [10-12]. This shift means risk must be managed earlier in the design pipeline [12-13] and calls for “more adaptive oversight mechanisms” [23-24].


Oversight must be decentralized, capacity-building focused, and tailored to heterogeneous ecosystems (especially in India and the Global South).


Speakers argued that a single authority “in Delhi…won’t work” [25-27] and advocated for empowering “information security or biosecurity offices” [28-31] and creating “cross-trained AI biosafety review panels” [147-149]. They also stressed the wide variation in “governance capacity, compliance culture and technical expertise” across institutions [126-129] and the need for “proportionate, capability-aware safeguards” [138-144].


Open-science benefits must be preserved through tiered, contextual access and pre-deployment assessments rather than blanket restrictions.


The discussion proposed “tiered access and contextual norms” [41-42] and praised the RAND Europe “pre-deployment assessment with structured rubrics” [44-48]. It was emphasized that “differentiated governance at capability level is always better than blanket restriction at access level” [57-58] and that open-source tools remain essential, especially for low-resource settings [53-56].


Institutionalising independent evaluation (red-teamings) and continuous monitoring is essential, but requires new structures and investment.


A six-monthly “monitoring and assessment of risk” ritual was recommended [105-106] and the creation of an “AI safety or security institute” with formal government links was suggested [113-118]. The need for “non-interactive methodology” and broader integration into institutions was also noted [107-110].


Cross-border data-standard harmonisation and legal safe-harbors are critical for AI-enabled biosurveillance and pandemic preparedness.


Participants pointed out fragmented standards across Southeast Asia [212-216] and advocated for federated frameworks like HL7-FHIR adapted for public-health [227-230]. They called for pre-negotiated “legal safe harbors” for data sharing [230-234] and shared evaluation criteria embedded in national systems [235-241].


Overall purpose / goal of the discussion


The panel convened to explore how emerging AI-driven biodesign and biosurveillance tools reshape bio-security risk, and to identify governance, policy, and capacity-building measures-especially for the Global South-that can ensure safety while retaining the scientific and societal benefits of open AI research.


Tone of the discussion


Opening (0:00-5:00): Cautious and exploratory; speakers acknowledge uncertainty (“not an AI safety expert…take it with a pinch of salt” [3-4]) and the novelty of the risk landscape.


Middle (5:00-22:00): Constructive and solution-oriented; ideas about decentralized oversight, tiered access, and institutional mechanisms are presented with optimism.


Later (22:00-38:00): Collaborative and forward-looking; emphasis on building networks, commons, and cross-border standards, with a tone of partnership and urgency.


Closing (38:00-end): Summative and hopeful; participants reiterate key actions, express confidence in emerging frameworks, and thank each other, ending on a cooperative note.


Overall, the conversation moves from identifying a novel problem to proposing concrete, multi-level solutions, maintaining a collegial and proactive tone throughout.


Speakers

Speaker 1


Area of expertise: Biosecurity, AI-enabled biodesign, AI safety in life-sciences.


Role / Title: (not specified in the transcript) – presented as a biosecurity expert discussing institutional readiness and safety measures.


Citation: [S13]


Speaker 2


Area of expertise: AI governance, open-science policy, risk assessment for AI-enabled biological tools.


Role / Title: (not specified in the transcript) – referenced as a contributor to RAND Europe studies and a proponent of pre-deployment assessments.


Citation: [S10][S12]


Speaker 3


Area of expertise: AI policy, socio-technical assessment, AI readiness for emerging economies, governance frameworks.


Role / Title: (not specified in the transcript) – identified as “Geetha” who works on institutional gaps and AI-trustworthiness initiatives.


Citation: [S1][S2]


Moderator


Name: Shyam


Area of expertise: Session facilitation / AI impact discussions.


Role / Title: Moderator of the panel.


Citation: [S16]


Audience Member 1


Area of expertise: Psychological harms of AI, AI safety research.


Role / Title: Researcher in AI safety at the University of York.


Audience Member 2


Area of expertise: Model monitoring, data-drift and temporal robustness.


Role / Title: Audience participant (no further affiliation provided).


Audience Member 3


Area of expertise: Biosecurity incident response, cross-border prevention frameworks.


Role / Title: Audience participant (no further affiliation provided).


Additional speakers:


– None beyond those listed above.


Full session reportComprehensive analysis and detailed insights

The moderator opened the session by asking whether the emerging challenges should be framed as a data-governance issue, a model-design problem, or a compliance-verification matter [1].


Bio-security perspective (Speaker 1).


Speaker 1, whose expertise lies in bio-security rather than AI safety, framed his remarks in terms of life-science risk governance [2-4]. He noted that traditional bio-security has relied on physical infrastructure, inspections and material-transfer controls [7-8], but the rapid proliferation of AI-enabled biodesign tools-over 1 500 according to a RAND study-has begun to decouple risk from those physical safeguards [9-10][9-13]. These AI-driven capabilities now allow researchers to engineer proteins, optimise DNA sequences and model pathogen-host interactions without laboratory containment [10-12]. Consequently, the risk landscape is shifting upstream to the design phase of biological work [12-13], demanding new, more adaptive oversight mechanisms [23-24]. While data governance, model evaluation and red-team activities remain essential [13-15], the panel argued they must be re-oriented to address this upstream threat.


Open-science discussion (Speaker 2).


Speaker 2 advocated a tiered-access and contextual-norms approach [41-42], supported by pre-deployment assessments using structured rubrics such as RAND Europe’s risk index [44-48]. He emphasized that open-source tools are crucial for low-resource settings and should not be conflated with danger [53-56]; instead, differentiated governance at the capability level should replace blanket restrictions [57-58]. Building on this, he proposed a systematic pre-deployment assessment regime akin to a “know-your-customer” (KYC) approach, where developers of high-risk biodesign tools undergo credentialed scrutiny before release [49-52]. The results of these assessments would be shared across a credentialed network with tiered confidentiality [115-119], helping to prevent the “danger…once released” from spreading unchecked [45-48].


Institutional-gap analysis (Speaker 3).


Speaker 3 highlighted that India’s high global ranking masks substantial intra-regional disparities, with countries such as Indonesia lagging far behind [63-64]. He pointed out that large language models trained predominantly on Western data fail 20-30 % of biological-safety benchmarks relevant to Southeast Asia [66-67][68-70], underscoring the need for socio-cultural evaluations and participatory approaches that involve end-users from the outset [71-73]. He also called for the development of small, edge-deployed language models for low-resource settings [71-73] and stressed the importance of building AI literacy and ensuring privacy protections for marginalized communities [270-274]. Finally, he reiterated India’s self-regulation commitments and argued that a unified yet adaptable framework can be tailored to diverse deployment settings [78-80].


Independent-evaluation / red-team proposal (Speaker 2).


Speaker 2 recommended institutionalising a six-monthly “monitoring and assessment of risk” ritual carried out by an AI-safety institute that is technically credentialed, independent, and formally linked to governments [105-108][111-118]. He cited the recent SECURE-Bio study in which a frontier language model outperformed expert virologists on wet-lab protocol troubleshooting [101-104], underscoring the urgency of continuous, non-interactive risk monitoring [107-110].


Ecosystem-specific safety measures (Speaker 1).


Speaker 1 suggested embedding AI evaluation modules into grant-review procedures and establishing cross-trained AI biosafety review panels at the institutional level [147-149]. He called for investment in domestic evaluation capacity, such as the AI safety institute at IIT Madras [148-149], and for leveraging tech-sovereignty measures to control data flows [155-156].


Emerging Global-South powers (Speaker 3).


Speaker 3 described India’s creation of sandboxes for health-care and ideological AI systems [162-163] and announced the launch of a Global-South network for trustworthy AI together with an AI-safety commons that will provide shared evaluation resources within the next one to two years [164-166]. He also noted the development of an incident-reporting framework customised for Indian contexts, capturing harms across physical, psychological, cyber-incident, algorithmic, socio-economic and environmental dimensions [270-274].


Model-vs-socio-technical focus (Speaker 1).


Speaker 1 warned that even with perfect digital safeguards, a physical infrastructure is still required to synthesise or modify viruses, highlighting the “digital-to-physical barrier” that limits immediate creation of dangerous pathogens [246-251]. He argued that AI can also aid safety, for example by using agentic AI to detect jailbreak attempts in vaccine-development platforms, but that governance must balance model-centric controls with broader socio-technical considerations.


Biosurveillance integration (Speaker 2).


Speaker 2 observed that fragmented data standards and divergent legal regimes in Southeast Asia have led to data hoarding that cost lives during COVID-19 [212-219]. He proposed adopting a federated, HL7-FHIR-style interoperability framework for public-health surveillance [227-230], establishing pre-negotiated legal safe-harbours for emergency data sharing [231-234], and embedding shared evaluation criteria within national surveillance systems [235-241]. He warned that the AI-governance community often treats biosurveillance as a niche, while biosecurity experts see AI merely as a tool, creating a dangerous communication gap [237-240].


Audience Q&A.


An audience member from the University of York raised the issue of psychological impacts, prompting Speaker 3 to present a taxonomy of harms-including physical, psychological, cyber-incident, algorithmic, socio-economic and environmental dimensions-and to share a toolkit for assessing healthcare workers’ perceptions of AI tools [265-276]. The discussion also covered temporal data drift, with Speaker 3 explaining that model-monitoring pipelines must detect distributional shifts over time-a key safety criterion [286-288].


Coordinated incident-response framework.


Speaker 1 advocated empowering biosafety officers at the lab level and providing them with clear reporting channels to central leadership, creating a “decentralised but integrated” system [295-299]. Speaker 2 illustrated Singapore’s multi-agency model (NEA, MOH, Communicable Disease Agency, etc.) as an exemplar of clear role allocation during crises [300-309]. Both agreed that prevention and preparedness, underpinned by robust governance, are essential.


Closing remarks.


The moderator summarised the key points: the upstream shift of bio-risk, the necessity of decentralised yet coordinated oversight, the preservation of open-science through tiered access, the importance of capacity-building in the Global South, the need for harmonised data standards and legal safe-harbours, and the value of a systematic, socio-technical approach to AI safety [255-263]. Speaker 1 added that AI can aid safety-e.g., agentic AI detecting jailbreak attempts-while reiterating the digital-to-physical barrier [246-251]. The panel concluded on a hopeful note, emphasizing collaborative networks, shared safety commons, and adaptive governance as the path forward [252-254].


Session transcriptComplete transcript of the session
Moderator

Key area should we think about it as a governance data governance problem, problem in model design or should it be more on a verification or compliance angle.

Speaker 1

Thanks thank you very much Shyam for having me and good morning to everyone and welcome to this session. So I think okay let me maybe just start with saying that I’m not an AI or AI safety expert so whatever I say take it with a pinch of salt. My work is in biosecurity and that’s the angle I’ll come from. I think all of those things whether it’s a model evaluation and other things those are there and those are very very important factors and that those are the things that we need to keep in mind. But on top of that there is also a very important deep structural change that is happening. For example in the field of life sciences historically whatever risk and risk governance things that we had were very much linked to the physical infrastructure and lab facilities and facility inspection and material transfer control and things like that.

But that seems to have changed and seems to be changing very rapidly now with the kind of AI biodesign tools as well as LLMs that are emerging. So I think Rand also did a study on this, but there are more than probably 1 ,500 biodesign tools that are out there, and those are totally transforming how life sciences, but in general, science is done. Now, what kind of change that we are seeing is with these capabilities, now it’s much easier to engineer proteins, optimize DNA sequences to do things that we want, have better pathogen host modeling, interaction modeling, and things like that. Now, these capabilities are… because of AI becoming partly decoupled from the physical containment measures which were usually used in the life sciences.

So we have a lot of this risk landscape shifting a little bit more upstream to the design side when it comes to at least biological side of things. So yes, data governance, things matter. Model evaluation and red teamings are essential and we should be doing that. But also it is very important that especially for a country like India where we have a very vibrant scientific ecosystem but that is also very uneven. How we can use this AI -enabled science which is rapidly evolving into the existing mechanisms to some extent but also at the same time develop those capabilities, have more people with the core capabilities and more people with the core capabilities and more people with the core capabilities chemical security, AI nuclear security, and things like that.

So we need to train more people on those things. So integrating, again, going back to the life sciences, so integrating AI evaluation into biosafety system, strengthening the institutional readiness. Some places there are information, some labs and some institutions have information security labs or information security offices. How we can get them better prepared for these new emerging risks that are coming due to AI. Some places they have biosafety officers or biosecurity officers. How we can enable them better to address the AI risk is what the direction that we need to move towards. And have more adaptive oversight mechanism that is not only based on the, limited to this once in a while inspection that happens, but that goes more with the rapidly evolving things that we are seeing with the AI models coming up.

And I think, I think, So, just in terms of paradigm change that we are seeing and that you mentioned, is that there need to be more decentralized checks and balances and oversight mechanisms. If there is one authority sitting somewhere in Delhi and trying to do everything, that’s not going to work. So that is one of the things that we have to collectively think about. How do we decentralize these kind of oversight systems to some extent? For example, as I was saying, how we can empower the information security or biosecurity offices and create what in the field of disarmament where I have worked on called way of prevention. One measure is not enough. It’s not sufficient.

You need to have a number of measures in place which collectively can help prevent something bad from happening. Thank you.

Moderator

Thank you. That’s very insightful. And I think we’ve already touched on some areas that, you know, that would be follow -up questions. P .T., focusing a bit more on… open science where high risk domains, especially in biological data and AI capabilities, as Surya was mentioning. How do we preserve the benefits of open science while preventing the destabilizing diffusion of capabilities that we were just discussing about?

Speaker 2

Thank you. Thank you for having me today. So I guess like I would love to be able to give like a binary yes or no answer. Right. I think we all want to have that. But unfortunately, that’s not quite the case. So we need to find a way to balance the openness and also the restrictions as well. So I guess my answer here would be sort of like a tiered access and contextual norms. I think those are really important. And I think RAN Europe has done a really great job at establishing the global risk index on AI enabled biological tools. And also just generally looking into AI safety in general, where they do this thing where they call the pre -deployment assessment.

with structured rubrics. And I’m a huge fan of that because I think that when you release very frontier models and frontier tools, the danger is already out there once released. It’s really hard to withdraw the danger. But however, prevention, right? There’s this window before you release where you can do a pre -deployment assessment. So I think I’m a really huge fan of that and also the same way that I’m a big fan of KYC, know your customers. And I guess this principle also pretty much applies whereas in the case of biosecurity, where we differentially allow the development of medical countermeasures and also the defensive measures that is necessary for the research, but also don’t limit the researchers from actually innovating either.

And I guess my point here is that we’re not going to be able to do that. The non -safeguarded access, like private access to credential researchers where necessary for like defensive research is absolutely necessary. And then, you know, like open source tools, it’s necessary. Like we can’t turn away from being open source. Like any governance structure that conflates open source with danger makes a huge mistake because that also is a very critical development point, especially for lower resource settings. So we cannot afford to conflate that altogether. So I guess a very long way to answer this and then to summarize my answer is that differentiated governance at capability level is always better than blanket restriction at access level.

Yeah.

Moderator

I think that’s a very structured answer and I think, you know, there’s a start of a very valid framework level conversation that’s already happening there. Geetha, turning to you, thinking more about institutional gaps in enabling some of the solutions that we are discussing, potential solutions, what are the most immediate gaps that you see in evaluating systems, technical capability, regulatory and coordination, largely from the policy angle that you work in?

Speaker 3

Thank you, Shyam. Good morning, everyone. So on the technical capabilities, right, the most fundamental thing I see is the AI readiness aspect of deployment. So in general, when we see India stands or ranks third globally, and when we see the Southeast Asian countries, I think Indonesia is around 49, and so there we see the gap, right? So whatever we do from the Western context or in the Indian context can never be catered to the AI readiness aspect of deployment. So I think it’s important to the needs, the unique needs of the Southeast. Asian countries and moreover what there is the end user perception where we see that we have to build lot of capacity for creating awareness among the end users who are actually going to use the products and from the policy perspective I would like to give you certain aspects where we think about the socio -cultural aspects that is relevant to the deployment environments.

So in general the large language models are usually trained on the western data and the very recent research work maybe I will cover a bit of both tech and policy here. So there is a Southeast Asia related benchmark, safety benchmark which says that all these leading large language models have failed when they evaluated for more than 20 to 30 percent of the risk. So in the biological settings so which means that we did not have enough safeguards which will protect people from encountering all these risks. And moreover, so this lets us know that we have to build in more sociocultural evaluations and assessments which will cater to the harms that is more particular to that particular deployment environment rather than just having a high level evaluation strategies.

And this cannot come just from the policy side, right? So we need to bring in all the participatory approach which will bring in the end users, the different stakeholders involved in using all these AI systems, be it model, right from the requirements definition, right? So when we assess whether we need an AI system or not, generally now there is a perception saying that for whatever we are going to build or the problem that we are going to solve, by default we assume. We assume that we need a large language model which will not care. which is not even possible to have it deployed in a low resource setting, right? So we need to think about small language models which will enable edge deployments at the low resource settings and also consider all the multicultural and socio -economic diversity that exist in these regions so that your model doesn’t hallucinate, is still fair and also establish some governance and accountability frameworks which will make the developers more accountable and also because having the developers more accountable will enable them considering more safeguards, right?

And also create more awareness about the main fundamental thing is that they will be expected to document whatever testing that has been gone through. And on the policy side, there is one more aspect which is the Indian government also endorses, right? The self -regulation. voluntary commitments on managing and mitigating risk that comes out of all these AI models. So I think we have to have a unified framework which can still be adaptable to different deployment settings.

Moderator

I think we are already getting a diversity of perspectives here and it is very useful to hear. Moving ahead and thinking about institutionalizing these kind of capabilities in scientific AI context, PT turning to you. Should independent evaluation and red teaming of AI systems from a technical kind of solution perspective for this problem that generate biological outputs, especially thinking biosecurity, given your perspective on this, should it become a norm and part of the global scientific specialist infrastructure? And if so, how would we go about that?

Speaker 2

I think we have to have a clear understanding of the role of the AI system and how it can And I think that is a key point. And I think that is a key point. And I think that is a key point. And I think that is a key point. And I think that is a key point. And I think that is a key point. And I think that is a key point. And I think that is a key point. So I guess a good example to use here is probably we’re thinking of nuclear weapons, right? Which falls under this organization called the International Atomic Energy Agency, the IAEA. Now, from my perspective, I think fissile materials, correct me if I’m wrong, they’re very scarce.

And they are, to a certain degree, technically trackable. And they are also, more than anything else, highly regulated. Whereas biology, on the other hand, is everything but that. It’s diffused, it’s dual -use by nature, and it’s also nearly impossible to trace. And also, most importantly, commercially available, right? And so in the recent study, actually, this was done by this organization called SECURE. Bio, where they actually tested frontier large language models against expert virologists. And it turns out that ChachiPT -03 actually outperformed expert virologists by 94 % at troubleshooting wet lab protocols. So that’s a very shocking number, right? And then, I mean, obviously you mentioned earlier that there’s a very concentrated effort that is happening between the US, UK, and China, like the global superpowers, basically.

And I guess there’s, we, in the recommendation from the RAND Europe that I was, you know, helping out with is that we recommended that governments and also independent researchers do this six -monthly ritual of monitoring and also assessment of risk on a continuous basis. And we also suggested, obviously, like using AI as an automation tool to increase the efficiency of this risk monitoring system. But I think, to your point, I think stuff like this, stuff like that is non -interactive methodology that doesn’t require, you know, researchers to actually query directly with the danger systems is actually already in and of itself a very meaningful, you know, safeguard. But that is not enough. You know, we need something that is much larger than that.

That is the integration into, like, you know, institutionalizing it. And I would argue that, like, a six -monthly, you know, ritual, that refresh cadence, for it to be delivered, it’s going to require a very significant investment from the government at multilateral level, right? And so we can’t go without any investment at all. So my suggestion would be to actually implement this AI safety or security institute model that we’ve been applying where largely… It is technically credentialed. It’s independent, but also has a very… formal relationship with the government. And something that I would caveat from the bio side is that for the institution to have some kind of anchoring around biological weapons convention or the WHO.

Because right now that relationship is not quite there yet. And I think, you know, back to my point of like pre -deployment assessment, I think that is definitely needed and then the result has to be shared then across the credential network with tiered confidentiality that rather than being kept, you know, as a proprietary to the different state. I think it’s kind of a

Moderator

That’s an interesting position, PT. Suryesh, thinking more about safety measures at large, how can we make sure that they remain rigorous and feasible within research ecosystems that you’re quite familiar with, you know, from a biosecurity angle, if you will, but largely also in the larger scientific ecosystem.

Speaker 1

Thanks Shyam. I think first, yeah first thing that we need to understand is how that ecosystem is and then see if certain measures will work there or not, right. One of the hallmarks of let’s say Indian scientific ecosystem is there is a lot of heterogeneity. There are some places which are really extremely well performing and there are other places who are not well resourced or have other all kind of challenges. So, understanding how the ecosystem is, what kind of regulation within the institutes that are there, what kind of administrative measures that are there, what kind of safety teams these kind of institutes might have, all of those things are extremely important, right. The governance capacity, compliance culture and technical expertise varies widely in Indian institutions.

And I believe this is true for many other countries in the global south as well. So it’s not something very unique. Particularly to India, we have challenges related to different kind of resources. And even when the resources are there, sometimes it’s also problematic to use them efficiently enough. Now, given that context, if we just import safety frameworks that are developed in a well -resourced place in a Western country or any developed countries, I don’t know if those would be a very good fit for the kind of system that we have here. So those might become more performative than functional to some extent. Another challenge that also P .T. mentioned to some extent is that the speed and scale of AI is huge, right?

And we need these traditional review mechanisms that institutes have for safety audits and all of those things are not going to work. We need something which is far more adaptive and quick. And also what we had traditionally is this periodic paper -based facility -centric kind of measures. And those are very much outdated in the era of AI that we live in. Now, so what… Now the question becomes, how do we design proportionate capability -aware safeguards that would be better matched for the challenges that we have? One of the major challenges, as I think a lot of us realize, is that there is limited awareness about AI safety when it comes to scientific issues, even among the scientists.

So a huge number, a large majority of scientists just don’t know what they are putting, let’s say in chat GPT might be harmful or what they are getting out of biodesign tools could be harmful to some extent. So there is some understanding about the privacy -related issues, but safety and security is still a big gap in understanding of even the scientific experts that are there. Now also regarding AI, I think there needs to be a tiered risk classification. So not everything is highly risky. There are certain biodesign tools, for example, that are trained in… in virus data. Those we’ll put in a higher risk category compared to something which is just working, let’s say, on certain animals which are not dangerous.

Now, also the safety measures, as I was mentioning earlier, as the risk has moved a bit upstream, it has come more on the design side, we should also have more safety measures moving upstream. And as Piti was mentioning that, you know, certain kind of evaluation that are before launching AI tools are necessary, but also integrating AI evaluation modules into grant review processes, creating cross -trained AI biosafety review panels, so panels specifically for AI biosafety at, from the bottom -up side, instead of having them from the top -down approach. Investing more in domestic evaluation capacity, having more AI safety institutes like Geeta’s home institute at IIT Madras. So we need a lot more of that. And lastly, I think what we have in the US and UK are these, a lot of AI safety work is being done there, right?

And as I was mentioning, importing that directly might not work. And we in the global south are largely the users and importer of this technology. So we have to see from the bottom up side, where do we put those safety measures? Do we, like when it comes to import, what kind of, when the data is being transferred, is there certain places where we can put those kind of safeguards? Also, how we can use some tech sovereignty measures in this context, right? That tech sovereignty measures are used for a number of things, but AI security is something, AI safety and security is something where those could also be used to some extent. So, yeah, I would stop here and then we can discuss.

Thank you.

Moderator

Thank you. And I think a lot of useful thoughts here for us to explore a bit more. I think we’ve… just crossed the mid mark and I’m going to use Geeta to kind of like bridge between the next two topics by combining two of your questions sorry for that so just as Surya just mentioned will the emerging scientific powers you know global south middle powers would they be able to shape governance in this context especially you know enable science or will they continue to inherit the frameworks and if they were to show leadership what would that look like in scientific AI and research ecosystems and you know you’ve already been working on some of this so I’m looking forward to kind of hearing concrete measures that you know are happening

Speaker 3

Sure. So in general what I think is definitely the emerging powers right they are putting on all efforts to bring in all the tools and frameworks that are required for governing these AI systems and for example, so India’s strategy towards all these emerging techs is that they are trying to create sandboxes which are highly essential for deploying or evaluating safety aspects for the models, right? So they do it for healthcare systems, they do it for ideology systems and whatever, right? So these type of tools and frameworks come from Indian settings will actually help the other underdeveloped countries to learn from the strategies that we use and then build something of their own or something which cannot go cross border can still happen through learning and collaboration, right?

So for example, we are going to launch a global south network for trustworthy AI and we are going to launch a global south network for trustworthy AI and we are going to launch a global south network for trustworthy AI which will enable all these mechanisms to happen, enable people to… develop and deploy AI systems which will be deployed in the low resource settings. And the other initiative which is going to give a very big leap in evaluating AI safety is coming up with an AI safety commons for the global south. That is part of the safe and trusted AI pillar that is one of the pillars in this impact summit and I think in another one or two years we will have safety commons which will help us evaluate and assess how these AI data models and systems work for different deployment settings.

Another important thing is that as Suresh mentioned about the audit frameworks. So when we come with, when we focus on the kind of risk and audit mechanisms that we have here, we still have it from an organization perspective and not from the end user perspective. So at CRI, we have come up with an incident reporting mechanism and a framework that caters to the Indian settings. So it tells you how to operationalize incident, AI incident reporting in the Indian settings, which is completely different from the Western settings. And here we have to get the harms that the people experience in the marginalized communities, which will never be recorded everywhere, right? So how do we enable all these things?

So since it is all about all these CERN -based systems, right, even those things will have certain impacts to the marginalized communities, which may be an indirect impact. But how do they are knowing about such things are happening to them, right? So those kinds of gaps we should mitigate by building more awareness, creating more AI literacy. And we should also be able to provide more privacy to all these people. The final thoughts about combining all these things is that we have to bring in some kind of collaborative work between the different stakeholders who are involved in developing and deploying these systems. And the governments have already given certain prompt knowledge about how to enable all these things through the techno -legal framework and guidelines that was recently published and the AI governance guidelines.

Which was recently published by METI. So the Southeast Asian countries can learn from the developing countries like India and then have curated a more tailored approach towards their unique needs. So that is what I think. So whoever has an opportunity or a willingness to have more things that will actually help them use or leverage these technologies can learn from whatever. Learn from. the mistakes as well as the experience that the other countries have, which is now openly available through all these summits.

Moderator

That’s very useful and I’m looking forward to following up on IIT Madras’s work in this front as well. Going to Suresh for kind of the last question in this series really, should, you know, safety measures, evaluations, primarily focus, where should the focus be at the model level? And you talked about upstream quite a bit. Should there be more broader socio -technical readiness measures, misuse considerations? Where do you think it should be?

Speaker 1

And also, very importantly, how we have to also see it from the context of, you know, people doing their own thing, DIY kind of science that happens. And also, small -scale commercial activities which are not fully under the oversight mechanism of the government, right? So, considering all of these points, right, the policy evaluation must expand from model -centric assessment to socio -technical assessment. And this would include, you know, evaluating things like how much capability uplift relative to the government capacity that is there. So, government has certain capacity to manage or do oversight, but these AI tools, how are they changing that? Incentive structures, very, very important, that shape the model deployment. Also, the diffusion of risk across borders.

All of these things don’t respect national borders, right? So, how it’s going to spread. If people using VPN or other things, a number of other things that are there. So integration, lastly, the integration with existing biosafety and resource security systems as I had already mentioned. So briefly, like performance evaluation is necessary, but governance -relevant evaluation must be systemic. And otherwise, we risk auditing algorithms while ignoring the institutions that operationalize them. And that is very, very important, how we focus on that institutional level mechanisms. Thank you. Thank you.

Moderator

Piti, kind of the last structured question before we move into a bit more of an open conversation. AI becomes embedded not just in new capacities, but also existing programs like biosurveillance, public health systems. And so there’s a mix between emerging kind of scientific knowledge with more legacy, let’s call engineering knowledge as well. So. So how do we make sure that safety, evaluation, interoperability, all of that exists in this divide without fragmentation happening across the ecosystem? Because, you know, you can easily imagine everyone’s doing their own AI, you know, safety evaluation and not necessarily talking to each other.

Speaker 2

Thank you, Shyam. I think this is a very important question. And it’s also a topic that I’m really passionate about as well, which is biosurveillance. To your point, I think, you know, countries are already deploying AI -enabled biosurveillance systems that are, you know, either syndromic surveillance or it could be, you know, genomic sequencing pipelines or outbreak modeling. The countries are already doing that, but they are not building on… the unified data standards. So they’re basically building on very incompatible data standards with very different legal regimes across the borders. We’ve seen that in Southeast Asia. We’ve seen that even countries like, for example, Singapore to Malaysia, you see different legal regiments on how they monitor the data and also the biosurveillance.

And so the fragmentation risk is actually not a technical risk, I would argue, because it’s not just a technical risk, because we’ve seen COVID. I feel like if anyone is anybody saying, I think we all were a little bit traumatized by COVID. We’ve seen how data hoarding and incompatible reporting actually cost lives. And I saw that especially happening across the region in the lower resource settings. Like countries like Cambodia, for example. AI systems that are trained on non -representative data obviously are going to perform much worse. And guess what happens? When they perform worse, the region that is most affected is the region that needs the help the most. And because of that, and also that region is also the same region with the least data infrastructure.

And so I guess to sort of like answer your question and what I think we need to do, I think there are three things to be addressed here. The first one is obviously the data standards harmonization. Currently, we don’t have that. I think we would need not like a global overhead standard that enforces on every country, but more of a federated interpretability that applies frameworks that applies to different countries. So I can think of like HL7FHIR, which is the federated… healthcare interpretability resources that are attempting to address these very specific issues on clinical data, but this one would be adapted for public health surveillance. And the second point is the legal safe harbors for basically just kind of cross -border sharing of data for public health emergencies that are negotiated beforehand because, and this is important, beforehand, because if you negotiate during an outbreak, people are going to be freaking out.

People are going to be like, I’m not going to share my data to you. What are you going to do with that data? So this needs to be done beforehand. And the last point, and also the most politically challenging point, is actually to have some kind of shared evaluation criteria across the board between different countries that are embedded into the national surveillance systems. And, for example, like Singapore data infrastructure environment might not apply to countries with like different climate data or like different demographic data. So this needs to be applied into, you know, the national surveillance systems. And what I noticed, I guess like the last message is that what I noticed the AI governance framework often thinks of biosurveillance as like an edge, like a niche edge case.

And then people in biosecurity frameworks, like doing biosecurity frameworks, thinks that AI governance is like a tool. And these people don’t talk to each other. And that gap, that gap right there is where the risk happens. So, yeah, we just need to talk to each other more. That’s easier said. Yeah.

Moderator

So I think I’m just about to close with maybe five minutes or just under that for audience questions. Thank you, Justin. 10 second final thoughts on each of you from the panel. Suresh.

Speaker 1

Just wanted to very quickly, we need to also keep in mind that how AI could help solve some of these AI safety challenges. How agentic AI could be used, let’s say, when people are trying to develop vaccines. CEPI has developed this platform where agentic AI is being used to check if there is someone who is trying to jailbreak or someone who is trying to misuse the tool that is there. Second very quick point, also, with all that what I said, there is still a gap to transfer things from digital to physical, what is called digital to physical barrier. So, even if you have everything, you still can’t just develop, modify viruses without having a proper physical infrastructure and there are still some ways to control that.

Thank you.

Speaker 3

I think we should move on transforming from issues to intelligence like learning from the risk that happens and feeding it back to the model training and other assessment activities to mitigate the risk in real time so that is where we need to move towards bringing in more people into evaluations and then making it safer for people to use

Speaker 2

I guess I’ll make it quick the point that I want to make here is that Shurya should echo his point I think you’re right that we should not shoot ourselves in the foot especially for developing countries, I think it’s really important and so my message for the last message here is just kind of like while we are forging ahead in innovation and while we are innovating ahead in whatever domains of scientific domains that we’re doing we need to be conscious of the impact that we have and I think in the AI Impact Summit is one of the really good places to jumpstart that kind of conversations and break the silos. Thank you.

Moderator

Thank you everyone. I’m just going to take probably one minute to kind of summarize key points Evaluation, I see largely a systemic question, safety measure systemic question. I especially like the point on incident response not being already there. And a couple of points on the cross -border solutions and problems, we already have that. Discussion on open signs, we talked about how managed access, safeguards, and comparing government capacity to manage that versus letting it out for more DIY -oriented signs, which is a good term, I really like that. That’s a key area. And for emerging scientific powers, of course, collaboration is key. Tailored approach, that’s something that I’m again waiting to see from IT Madras as well, their contribution on this.

And some cross -border work on legal safe harbors, data standard harmonization, PT that you mentioned, really land well from this panel. I’m going to… I’m going to stop my… summary right now and you know more of this would be kind of put together in a blog at some point in the uh nearby future uh perhaps uh we can go for questions uh first uh yes please i think i can give you mine

Audience Member 1

Thank you so much for your wonderful insights i really enjoyed this session as a researcher in safety of ai at the university of york so i focus on psychological harms of ai and so what i want to ask particularly gita is um when it comes to the definition of harms and traditional safety engineering they’re catering more to physical harms and now we see the whole spectrum of harms expanding beyond that so i would love to know the work being done by karai and you in this area and and in fact enrich my research with it

Speaker 3

Yeah, sure uh so when we actually assess harms and impacts right we do we have to do it from the different two different perspectives one is on the functional side where we assess all these algorithmic risk and other stuff. From the human centric perspective, like you said, we can keep doing everything from the psychology perspective and other ethics and other stuff. So, here at CIRI, we do work on assessing bias, determining whether the model is stereotypical or not and how do we generate explanations for the high level scientific models and all. So, from the perspective of the psychological things, there is this cognitive science or cognitive capabilities of AI models which will actually enhance or degrade the capabilities of humans.

So, those things are we are trying to do some assessments from the incident perspective. So, if you go to read the incident reporting framework that we have, we have a taxonomy of risk and harms and also the impacts. So, from the kinds of harms that we have defined, we have categorized it as physical, psychological, cyber incident based harm set. And moreover, we have all the generic kinds of harms like algorithmic harms, socio -economic harms, the environmental harms and all. So, we are trying to come up with a taxonomy that will cater to the different hierarchies that will be applied to these kind of harms and impacts which will again be model specific, use case specific and the domain specific.

So, that is where we are trying to work on. And we also have a healthcare based tool, a toolkit which will enable people to actually assess the perceptions of how they treat these models, how they see whether these AI applications are helpful for them or not and then come up with some capacity building programs for different roles in which they are working on. And this has been done with CMC Wellure Hospital and we have been assessing the perceptions of healthcare workers. and then come up with a training module which will enable them to use AI models or tools more confidently rather than, say, being resistant or not relying on them for so much.

Moderator

Last, probably last quick question. Maybe keep it short on the responses as well, please. Sorry.

Audience Member 2

Hi. So my question is about, like, we are discussing all the geographical barriers, right? The modality is geography. When we change the geography, the models tend to perform poorly. Are we concerned about the temporal modality as well? When we go forward in time, the data is going to change eventually, and that is going to affect modeling. And how do we plan on, like, you know, mitigating such a problem if it arises?

Speaker 3

Yeah. So this comes under the model monitoring, the system monitoring approach, where we consider the data drifts out of distribution. So we consider the distribution aspects of the data and models. So definitely this is one of the criteria where you assess safety and evaluate the impacts of it.

Moderator

Yes, I think last question

Audience Member 3

Thank you so much for the insightful discussion, really appreciated the expertise that you’re bringing to the topic and thanks PT for bringing up COVID because my question is about that. As we learn from COVID biosecurity risk can quickly become a cross border existential threat. So what would a successful web of prevention and incident response framework look like and who are you looking up to in this space? Like who’s doing it well in this space?

Speaker 1

I can start maybe PT can add. So I think as I was mentioning, it will have to be more decentralized but at the same time integrated to the leadership. So I think there needs to be more empowerment of people who are like biosafety officers in the lab or who are institutional biosafety committee members, who are people who are working on the ethics and research security side at the institutes. So those are the people who need to be empowered. So there needs to be more capacity building of those people and at the same time there needs to be a mechanism established so they can report those incidents to the very top and there is top leadership sitting in the capitals.

They can in some way get an overview or monitor the situation as it is going on at different institutes level.

Speaker 2

Thanks. I can add a little bit to that. So in Singapore we actually have different agencies responsible for this. So we have the National Environmental Agency and then we have the MOH, obviously the Ministry of Health and then we also have different smaller agencies like Communicable Disease Agency and also like Prepare Agency where they are responsible for different tasks. But I want you to envision this as almost like the way that Singapore is trying to establish itself. I think it’s trying to establish itself almost as a firefighter. So when there’s an incident where there’s a crisis, who is actually doing what is very clear but it’s always not always clear across like different countries. For example, in Laos, Vietnam, might be looking very different, but I think having a very coordinated response across the different agencies on who is doing what.

Like, for example, National Environmental Agency is responsible for wastewater surveillance. So monitoring how the sickness is increasing or spiking or not, those are the people, yeah, that you would look up to. And I think that’s the last word, right? It all comes down to prevention and preparedness, even in this much like anything else with biocontext.

Moderator

Thank you, everyone, for the question, and thank you to my brilliant panelists, Suryesh, Geeta, and P .T. This was a very insightful discussion. On the screen is the work from RAND Europe with CLTR, some of what was referred to by P .T. and other panelists as well, some aspects of what we were discussing about risk typification. You’ll probably get some ideas there as well. And with that, I close. I’m surprised. I’m supposed to hand over these mementos to apparently including me, so let us do that now. Thank you.

Related ResourcesKnowledge base sources related to the discussion topics (33)
Factual NotesClaims verified against the Diplo knowledge base (5)
Confirmedhigh

“Speaker 1 is a bio‑security expert with experience in disarmament.”

The knowledge base lists Speaker 1 (Suryesh) as a bio-security expert who works in the field of biosecurity and disarmament, confirming the report’s description [S3].

Additional Contextmedium

“Open‑source tools are crucial for low‑resource settings and help democratise expertise.”

A source notes that AI democratises expertise previously limited by resources, giving people in underserved areas access to sophisticated diagnostics, which adds nuance to the claim about the importance of open-source tools for low-resource contexts [S105].

Additional Contextmedium

“Balancing security concerns with open‑source approaches requires case‑by‑case solutions.”

The knowledge base highlights a discussion on the tension between national security and open-source approaches, emphasizing the need for ongoing dialogue and tailored solutions, providing additional context to the report’s tiered-access/open-science discussion [S110].

Additional Contextmedium

“Data governance, model evaluation, and red‑team activities remain essential for responsible AI deployment.”

A source describes the practice of publishing model cards, evaluation benchmarks, and data to make model behavior transparent and to flag risks, which supports and expands on the claim about the continued importance of data governance and model evaluation [S108].

Additional Contextlow

“The panel discussion examined biosecurity challenges within the Biological Weapons Convention (BWC) framework.”

Another source discusses the focus on biosecurity within the BWC, emphasizing non-proliferation of dual-use research, which adds background to the bio-security perspective presented by Speaker 1 [S101].

External Sources (111)
S1
Building the Workforce_ AI for Viksit Bharat 2047 — -Speaker 1- Role/Title: Not specified, Area of expertise: Not specified -Speaker 3- Role/Title: Not specified, Area of …
S2
S3
Advancing Scientific AI with Safety Ethics and Responsibility — – Speaker 1- Speaker 2- Speaker 3 – Speaker 1- Speaker 3- Moderator
S4
Global Perspectives on Openness and Trust in AI — -Audience member 2- Part of a group from Germany
S5
Day 0 Event #82 Inclusive multistakeholderism: tackling Internet shutdowns — – Nikki Muscati: Audience member who asked questions (role/affiliation not specified)
S6
The Arc of Progress in the 21st Century / DAVOS 2025 — – Paula Escobar Chavez: Audience member asking a question (specific role/title not mentioned)
S7
AI Transformation in Practice_ Insights from India’s Consulting Leaders — -Audience member 1- Founder of Corral Inc -Audience member 6- Role/title not mentioned
S8
Day 0 Event #82 Inclusive multistakeholderism: tackling Internet shutdowns — – Nikki Muscati: Audience member who asked questions (role/affiliation not specified)
S9
Building the Workforce_ AI for Viksit Bharat 2047 — -Audience- Role/Title: Professor Charu from Indian Institute of Public Administration (one identified audience member), …
S10
AI Impact Summit 2026: Global Ministerial Discussions on Inclusive AI Development — -Speaker 1- Role/title not specified (appears to be a moderator/participant) -Speaker 2- Role/title not specified (appe…
S11
Policy Network on Artificial Intelligence | IGF 2023 — Moderator 2, Affiliation 2 Speaker 1, Affiliation 1 Speaker 2, Affiliation 2
S12
S13
Keynote-Martin Schroeter — -Speaker 1: Role/Title: Not specified, Area of expertise: Not specified (appears to be an event moderator or host introd…
S14
Responsible AI for Children Safe Playful and Empowering Learning — -Speaker 1: Role/title not specified – appears to be a student or child participant in educational videos/demonstrations…
S15
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Vijay Shekar Sharma Paytm — -Speaker 1: Role/Title: Not mentioned, Area of expertise: Not mentioned (appears to be an event host or moderator introd…
S16
Keynote-Olivier Blum — -Moderator: Role/Title: Conference Moderator; Area of Expertise: Not mentioned -Mr. Schneider: Role/Title: Not mentione…
S17
Keynote-Vinod Khosla — -Moderator: Role/Title: Moderator of the event; Area of Expertise: Not mentioned -Mr. Jeet Adani: Role/Title: Not menti…
S18
Day 0 Event #250 Building Trust and Combatting Fraud in the Internet Ecosystem — – **Frode Sørensen** – Role/Title: Online moderator, colleague of Johannes Vallesverd, Area of Expertise: Online session…
S19
Global Perspectives on Openness and Trust in AI — – Alondra Nelson- Audience member 3
S20
AI Transformation in Practice_ Insights from India’s Consulting Leaders — -Audience member 3- Student -Audience member 6- Role/title not mentioned
S21
Building the Workforce_ AI for Viksit Bharat 2047 — -Audience- Role/Title: Professor Charu from Indian Institute of Public Administration (one identified audience member), …
S22
WS #123 Responsible AI in Security Governance Risks and Innovation — Addressing global capacity disparities, Karimian noted the importance of proactive collaboration to reduce inequalities …
S23
Opening plenary session and adoption of the agenda — Equally important is the call for investments in capacity building, particularly in developing countries, in order to en…
S24
AI for Safer Workplaces & Smarter Industries Transforming Risk into Real-Time Intelligence — Speaker 1, introduced by Naveen as his colleague, conducted a detailed demonstration of their AI safety platform. The de…
S25
Artificial intelligence (AI) – UN Security Council — Furthermore, the discussions underscored the necessity forregulatory mechanisms that are both flexible and adaptive. As …
S26
Open Forum #17 AI Regulation Insights From Parliaments — Sarah Lister: Thank you very much. And as we conclude this open forum on AI regulation, I’d like to start by thanking, f…
S27
From principles to practice: Governing advanced AI in action — A critical challenge Tse identified is the timeline mismatch between AI development and standards creation. Current form…
S28
Networking Session #60 Risk & impact assessment of AI on human rights & democracy — Matt O’Shaughnessy: Thank you so much, David. And it’s great to be here, even just virtually. So, you asked about the…
S29
WS #98 Towards a global, risk-adaptive AI governance framework — Audience: My name is Amal Ahmed. I’m currently working in DGA. I’m not asking a question. I’m just having an emphasi…
S30
What is it about AI that we need to regulate? — What is it about AI that we need to regulate?The discussions across the Internet Governance Forum 2025 sessions revealed…
S31
How to make AI governance fit for purpose? — Anne Bouverot: Thank you so much, Gabriela. Thank you for this. I’m lucky to go first because by the time everyone has s…
S32
Can we test for trust? The verification challenge in AI — Anja Kaspersen discussed the role of technical professional organizations like IEEE in AI governance conversations. She …
S33
Challenging the status quo of AI security — – **Security vulnerabilities**: Current systems showing susceptibility to prompt injection and manipulation attacks – A…
S34
Protecting Democracy against Bots and Plots — Artificial Intelligence can deliver various results that need to be regulated to prevent misuse.
S35
Laying the foundations for AI governance — Dawn Song: Thank you very much. Okay, I think we’ll turn now on this question of obstacles to Professor Dawn Song. Okay,…
S36
AI Infrastructure and Future Development: A Panel Discussion — Physical infrastructure constraints create bottlenecks – need for skilled trades workers, power, concrete, copper in mas…
S37
Policymaker’s Guide to International AI Safety Coordination — This comment crystallizes the fundamental tension at the heart of AI governance – the misalignment between market incent…
S38
Accelerating Structural Transformation and Industrialization in Developing Countries: Navigating the Future with Advanced ICTs and Industry 4.0 — **Comprehensive Ecosystem Development**: The need for systematic approaches covering education, finance, regulation, and…
S39
WS #103 Aligning strategies, protecting critical infrastructure — Need for capacity building, especially in the Global South
S40
Free Science at Risk? / Davos 2025 — There’s a need to balance open science with security concerns, but overly restrictive policies can hinder innovation
S41
Driving Social Good with AI_ Evaluation and Open Source at Scale — However, audience questions revealed tension between this contextual approach and institutional needs for standardizatio…
S42
WS #193 Cybersecurity Odyssey Securing Digital Sovereignty Trust — Adisa argues that policies should require AI threat modeling and red teaming as regulatory requirements for AI systems, …
S43
How can sandboxes spur responsible data-sharing across borders? (Datasphere Initiative) — Harmonizing cross-border regulations and practices within the African continent presents challenges due to differing reg…
S44
Artificial Intelligence & Emerging Tech — Jörn Erbguth:Thank you very much. So I’m EuroDIG subject matter expert for human rights and privacy and also affiliated …
S45
Rule of Law for Data Governance | IGF 2023 Open Forum #50 — Additionally, the analysis underscores the importance of harmonizing and aligning laws to facilitate cross-border data f…
S46
From principles to practice: Governing advanced AI in action — Both speakers advocate for embedding safety and responsibility considerations from the initial design phase rather than …
S47
WS #123 Responsible AI in Security Governance Risks and Innovation — This fundamentally challenges the conventional approach to AI governance by arguing against treating it as a compliance …
S48
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — Virginia Dignam: Thank you very much, Isadora. No pressure, I see. You want me to say all kinds of things. I hope that i…
S49
Advancing Scientific AI with Safety Ethics and Responsibility — Need for decentralized oversight mechanisms with empowered local biosafety officers and institutional review panels Saf…
S50
Main Session | Policy Network on Internet Fragmentation — Multi-stakeholder collaboration is crucial for addressing fragmentation risks
S52
Artificial Intelligence & Emerging Tech — Jörn Erbguth:Thank you very much. So I’m EuroDIG subject matter expert for human rights and privacy and also affiliated …
S53
Opportunities of Cross-Border Data Flow-DFFT for Development | IGF 2023 WS #224 — It is clear from the audience’s questions that there is a concern about balancing the need for data localisation with th…
S54
Secure Finance Risk-Based AI Policy for the Banking Sector — India’s regulatory thinking reflects this balance, encouraging experimentation while reinforcing institutional responsib…
S55
UN SECRETARY-GENERAL’S STRATEGY ON NEW TECHNOLOGIES — In this context, difficult policy dilemmas and questions relating to the source, nature and scope of regulatory and…
S56
I NTRODUCTION — – Establishing a reference framework to guide government entities in adopting a best-in-class architecture for digital s…
S57
Meeting REPORT — The structured yet responsive policy implementation strategy—including provisions for regular review—reflects an adaptiv…
S58
Are AI safety institutes shaping the future of trustworthy AI? — As AI advances at an extraordinary pace, governments worldwide are implementing measures to manage associated opportunit…
S59
360° on AI Regulations — The advancements and widespread use of AI technology have raised concerns about its potential misuse. The dual-use natur…
S60
Digital policy in 2019: A mid-year review — Technological innovation is creating new possibilities. Artificial intelligence developments are moving at a fast pace, …
S61
AI for Humanity: AI based on Human Rights (WorldBank) — Stating that technology developments occur at a rapid pace implies a need for due diligence and risk assessment to keep …
S62
Panel Discussion: Europe’s AI Governance Strategy in the Face of Global Competition — While disagreeing that governance is dead, Curioni acknowledges that governance and regulation must evolve significantly…
S63
GOVERNING AI FOR HUMANITY — As far as ‘safety’ is contextual, involving various stakeholders and cultures in creating such standards enhances their …
S64
Comprehensive Report: European Approaches to AI Regulation and Governance — A particularly concerning dimension emerged around mental health impacts of AI use. An audience member reported people b…
S65
To share or not to share: the dilemma of open source vs. proprietary Large Language Models — In sum, this analysis illustrates that open source software serves not merely as a technical tool but as a catalyst for …
S66
Advancing Scientific AI with Safety Ethics and Responsibility — -Shifting Risk Landscape in Life Sciences: The discussion highlighted how AI biodesign tools and LLMs are fundamentally …
S67
Policymaker’s Guide to International AI Safety Coordination — This comment introduced a fundamentally different perspective on AI risk, shifting focus from deployment and governance …
S68
From principles to practice: Governing advanced AI in action — – Udbhav Tiwari- Brian Tse Chris argues that some AI risks require entirely new risk management approaches because they…
S69
WSIS Action Line C5: Building Trust in Cyberspace — Capacity building must be tailored to different national development levels and maturity
S70
WS #103 Aligning strategies, protecting critical infrastructure — Capacity building essential, especially for Global South
S71
Free Science at Risk? / Davos 2025 — There’s a need to balance open science with security concerns, but overly restrictive policies can hinder innovation
S72
WSIS Action Line C7 E-science: Assessment of progress made over the last 20 years — Open science platforms are highlighted as crucial, but they must be widely accessible to ensure equitable benefits from …
S73
test marko — concluded that while Geneva faces challenges, it retains significant advantages as a center for digital governance. Howe…
S74
https://dig.watch/event/india-ai-impact-summit-2026/advancing-scientific-ai-with-safety-ethics-and-responsibility — So we have a lot of this risk landscape shifting a little bit more upstream to the design side when it comes to at least…
S75
WS #193 Cybersecurity Odyssey Securing Digital Sovereignty Trust — Adisa argues that policies should require AI threat modeling and red teaming as regulatory requirements for AI systems, …
S76
Strategy — The document in its current form, serves as a high-level overview of Egypt’s National AI Strategy. In it is not mean…
S77
How can sandboxes spur responsible data-sharing across borders? (Datasphere Initiative) — Harmonizing cross-border regulations and practices within the African continent presents challenges due to differing reg…
S78
Artificial Intelligence & Emerging Tech — Jörn Erbguth:Thank you very much. So I’m EuroDIG subject matter expert for human rights and privacy and also affiliated …
S79
Day 0 Event #171 Legalization of data governance — Cross-border data flows require balancing security and utilization
S80
WS #31 Cybersecurity in AI: balancing innovation and risks — AUDIENCE: Hi, I’m Odas. I’m from… Digital Uganda. We’re based in Kigali, Rwanda. And I want to ask Yulia regarding w…
S81
Workshop 8: How AI impacts society and security: opportunities and vulnerabilities — Piotr Słowiński: Okay, great. And I think that you can see my screen, at least you should by now. So, yeah, welcome. I…
S82
OPENING STATEMENTS FROM STAKEHOLDERS — Discussions on artificial intelligence show that technological development is not without risk. Discussions on emerging…
S83
Ensuring Safe AI_ Monitoring Agents to Bridge the Global Assurance Gap — The tone was collaborative and solution-oriented throughout, with participants acknowledging both the urgency and comple…
S84
HETEROGENEOUS COMPUTE FOR DEMOCRATIZING ACCESS TO AI — The discussion maintained a professional, collaborative, and optimistic tone throughout. Panelists demonstrated mutual r…
S85
WS #214 Youth-Led Digital Futures: Integrating Perspectives and Governance — The tone of the discussion was largely constructive and solution-oriented, with speakers offering insights from differen…
S86
WS #266 Empowering Civil Society: Bridging Gaps in Policy Influence — The tone was largely constructive and solution-oriented. Speakers acknowledged significant challenges but focused on ide…
S87
Open Forum #15 Digital cooperation: the road ahead — The tone was generally constructive and solution-oriented. Participants shared examples of successful partnerships and i…
S88
Panel 2 – Anticipating and Mitigating Risks Along the Global Subsea Network  — The discussion maintained a professional, collaborative tone throughout, with participants demonstrating technical exper…
S89
The Power of the Commons: Digital Public Goods for a More Secure, Inclusive and Resilient World — The overall tone was optimistic and forward-looking, with speakers expressing enthusiasm about the potential of DPGs whi…
S90
Launch / Award Event #57 Governing Identity Online Nations and Technologists — The discussion maintains an academic and informative tone throughout, characterized by scholarly presentation of researc…
S91
Safe Smart Cities and Climate Frustration — The discussion maintained a collaborative and solution-oriented tone throughout. Speakers were optimistic about the pote…
S92
Leaders TalkX: Partnership pivot: rethinking cooperation in the digital era — The discussion maintained a professional, collaborative, and forward-looking tone throughout. Despite the moderator’s ac…
S93
Closing remarks – Charting the path forward — The tone throughout was consistently formal, diplomatic, and optimistic. It maintained a collaborative and forward-looki…
S94
High-Level Track Facilitators Summary and Certificates — The discussion maintained a consistently positive and celebratory tone throughout, characterized by gratitude, accomplis…
S95
Opening and introduction — The AU’s commitment to working with Member States in adopting the meeting’s recommendations was reaffirmed, alongside th…
S96
Closing Session  — The tone throughout the discussion was consistently formal, collaborative, and optimistic. It maintained a celebratory y…
S97
Opening Ceremony — Kurtis Lindqvist: Your Excellencies, distinguished guests, ladies and gentlemen. First of all, I’d like to thank Ministe…
S98
Leaders TalkX: ICT Applications Unlocking the Full Potential of Digital – Part I — In summary, the speaker outlines Iraq’s progressive plans for development in information technology and digital skills e…
S99
Day 0 Event #257 Enhancing Data Governance in the Public Sector — – **Judith Hellerstein** – Moderator of the session on “Enhancing Data Governance in the Public Sector” Guy Berger rais…
S100
The Challenges of Data Governance in a Multilateral World — An advocate in the discussion strongly supports data governance models that prioritize cooperation, privacy, and the com…
S101
morning session — The argument calls for a clearer focus on the specific aspects of biosecurity within the BWC framework. An alternative v…
S102
Thinking through Augmentation — While Ucuzoglu is optimistic about the long-term impact of transformative technology, he acknowledges that it is not an …
S103
Breakthroughs in human-centric bioscience with AI — A consortium led by Integra Therapeutics, Pompeu Fabra University, and the Centre for Genomic Regulation in Barcelona, S…
S104
AI Governance Dialogue: Steering the future of AI — Development | Sociocultural Last year, the Nobel Prize for Chemistry was awarded to the developers of AlphaFold, an AI …
S105
Enhancing rather than replacing humanity with AI — AI democratises expertise that was previously limited by resources. People in underserved areas have access to sophistic…
S106
Flexibility 2.0 / Davos 2025 — There is a moderate to high level of consensus among the speakers on key issues. This consensus suggests a growing recog…
S107
NRIs MAIN SESSION: DATA GOVERNANCE — Furthermore, it is noted that support for data systems should not be limited to the private sector. The analysis suggest…
S108
Keynote-Alexandr Wang — “We publish model cards and evaluation benchmarks and data so you can see how they work, their intended use, and how we …
S109
WSIS Action Line C7:E-Science: Open Science, Data, Science cooperation, IYQ, International Decade of Science for Sustainable Development — Strong consensus emerged around human-centered technology development, the need for equitable access to scientific resou…
S110
High Level Session 2: Digital Public Goods and Global Digital Cooperation — – **Balancing Security and Openness**: The tension between national security concerns and open-source approaches require…
S111
Digital Public Goods and the Challenges with Discoverability | IGF 2023 — Interestingly, it’s apparent that technical capacity does not represent the only challenge when it comes to integrating …
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
S
Speaker 1
9 arguments159 words per minute1969 words742 seconds
Argument 1
*Structural risk shift* – Speaker 1: AI tools decouple design from physical containment, moving bio‑risk upstream to the design stage and demanding new oversight mechanisms.
EXPLANATION
AI‑enabled biodesign tools allow creation of biological agents without the need for traditional lab containment, shifting the primary risk from downstream physical safeguards to the upstream design phase. Governance therefore must focus on monitoring and controlling design activities rather than only facility inspections.
EVIDENCE
Speaker 1 explains that historically risk governance in life sciences was tied to physical infrastructure such as lab facilities and material-transfer controls ([7]), but AI biodesign tools have altered this paradigm ([8]). He cites RAND’s identification of more than 1,500 biodesign tools that are transforming scientific practice ([9]) and notes that AI now makes it easier to engineer proteins, optimise DNA sequences, and model pathogen interactions, effectively decoupling these activities from physical containment measures ([10-12]).
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The shift of biosecurity risk upstream due to AI biodesign tools and the existence of over 1,500 such tools is documented in the discussion of the changing risk landscape in life sciences [S3].
MAJOR DISCUSSION POINT
Structural risk shift
Argument 2
*Decentralized oversight* – Speaker 1: Centralised authority in Delhi is insufficient; oversight must be distributed to institutional biosafety and information‑security offices.
EXPLANATION
A single national authority cannot keep pace with the rapidly evolving AI‑driven bio‑risk landscape; oversight should be spread across labs, biosafety officers, and information‑security units to create a network of checks and balances. This decentralised model aims to provide more adaptive and timely supervision.
EVIDENCE
He argues that a lone authority in Delhi cannot manage the required oversight, calling for more decentralised checks and balances ([24-26]). He proposes empowering information-security and biosafety offices and establishing a “way of prevention” that combines multiple measures rather than relying on a single one ([27-31]).
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Decentralised oversight mechanisms with empowered local biosafety officers and institutional review panels are advocated as necessary alternatives to top-down approaches [S3].
MAJOR DISCUSSION POINT
Decentralized oversight
Argument 3
*Invest in capacity building for AI‑enabled biosafety* – Training more scientists and security professionals in AI‑driven bio‑security, chemical security and nuclear security is essential for a resilient ecosystem.
EXPLANATION
A skilled workforce can recognise emerging AI‑generated threats and apply appropriate safeguards, reducing reliance on ad‑hoc measures.
EVIDENCE
Speaker 1 notes the need to train more people on AI-enabled science, chemical security, AI nuclear security and related fields, emphasizing capacity building for the Indian ecosystem and similar contexts [16-18].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Capacity-building programmes for biosafety officers, ethics and research-security personnel are highlighted, and global capacity disparities are noted as a priority for investment in developing regions [S3], [S22], [S23].
MAJOR DISCUSSION POINT
Capacity building
Argument 4
*Integrate AI evaluation into existing biosafety systems* – Embedding AI risk assessments within current biosafety and bio‑security offices strengthens institutional readiness for new AI‑driven threats.
EXPLANATION
By aligning AI evaluation with established information‑security and biosafety structures, institutions can respond more quickly to novel risks.
EVIDENCE
Speaker 1 calls for integrating AI evaluation into biosafety systems and strengthening institutional readiness, asking how information-security and biosafety offices can be better prepared for AI risks [18-20].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Integration of AI safety assessment with existing biosafety and resource-security systems is recommended to avoid fragmentation [S3].
MAJOR DISCUSSION POINT
AI‑biosafety integration
Argument 5
*Adopt adaptive, continuous oversight mechanisms* – Traditional periodic, paper‑based inspections are insufficient; oversight must evolve in real time with rapid AI advances.
EXPLANATION
Continuous monitoring and adaptive checks enable regulators to keep pace with fast‑moving AI capabilities that can outstrip static review processes.
EVIDENCE
Speaker 1 argues for more adaptive oversight that goes beyond occasional inspections, matching the speed and scale of AI developments [23-24].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need for flexible, adaptive regulatory mechanisms that keep pace with fast-moving AI developments is emphasized in UN Security Council discussions and in analyses of the timeline mismatch between AI progress and standards creation [S25], [S27].
MAJOR DISCUSSION POINT
Adaptive oversight
Argument 6
*Implement tiered risk classification for AI biodesign tools* – Not all AI‑generated biological tools pose the same danger; a graduated risk framework can focus scrutiny where it matters most.
EXPLANATION
Higher‑risk tools (e.g., those trained on virus data) receive stricter controls, while lower‑risk applications enjoy lighter oversight, optimising resource allocation.
EVIDENCE
Speaker 1 proposes a tiered risk classification, distinguishing high-risk biodesign tools from lower-risk ones such as those dealing with harmless animal data [142-146].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Risk-adaptive AI governance frameworks that propose tiered or graduated oversight for high-risk versus low-risk tools are discussed in the context of accelerating standards and risk-adaptive approaches [S27].
MAJOR DISCUSSION POINT
Risk tiering
Argument 7
*Embed AI safety assessment into grant review and create cross‑trained review panels* – Funding decisions should require AI safety checks, and dedicated panels with both AI and biosafety expertise can evaluate proposals holistically.
EXPLANATION
Linking safety evaluation to funding incentives ensures that developers consider risk mitigation early, while cross‑trained panels bring the necessary interdisciplinary perspective.
EVIDENCE
Speaker 1 mentions integrating AI evaluation modules into grant review processes and establishing cross-trained AI biosafety review panels from the bottom-up [147-148].
MAJOR DISCUSSION POINT
Safety‑by‑design in funding
Argument 8
*Leverage agentic AI to monitor misuse in vaccine development* – Advanced AI agents can automatically detect attempts to jailbreak or misuse biodesign platforms, providing a proactive safety layer.
EXPLANATION
By embedding monitoring AI within vaccine‑development pipelines, suspicious behaviour can be flagged before harmful outputs are generated.
EVIDENCE
Speaker 1 cites CEPI’s platform that uses agentic AI to check for jailbreak attempts during vaccine development activities [246-249].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
A demonstration of an AI agent (Jenny AI) that automatically processes safety hazards and detects misuse illustrates the feasibility of proactive, agent-driven monitoring [S24].
MAJOR DISCUSSION POINT
Proactive AI‑based monitoring
Argument 9
*Maintain a digital‑to‑physical barrier* – Even with powerful AI tools, physical infrastructure constraints (labs, containment) remain a critical control point that should not be overlooked.
EXPLANATION
Ensuring that digital designs cannot be easily translated into physical pathogens without proper containment adds an extra layer of security.
EVIDENCE
Speaker 1 highlights the persistent gap between digital design and physical synthesis, noting that without appropriate physical infrastructure the risk of creating dangerous viruses is limited [250-251].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Physical infrastructure constraints are identified as a bottleneck and a vital control point for biosecurity, underscoring the importance of a digital-to-physical barrier [S36].
MAJOR DISCUSSION POINT
Digital‑to‑physical security
S
Speaker 2
10 arguments152 words per minute1873 words737 seconds
Argument 1
*Tiered access & contextual norms* – Speaker 2: Adopt differentiated, capability‑level governance (e.g., pre‑deployment assessments, KYC‑style credentialing) rather than blanket restrictions.
EXPLANATION
A nuanced, tiered‑access framework that applies contextual norms can balance openness with safety. Pre‑deployment assessments using structured rubrics and credential‑based access (similar to KYC) allow high‑risk tools to be controlled without stifling innovation.
EVIDENCE
Speaker 2 proposes a tiered-access model with contextual norms, referencing RAND Europe’s global risk index and its pre-deployment assessment rubrics ([43-45]). He stresses that once frontier models are released the danger cannot be withdrawn, making pre-deployment checks essential ([46-48]). He likens the approach to KYC, suggesting credentialed researchers for defensive work while keeping open-source tools available ([49-51]).
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
A tiered-access model with pre-deployment assessments and credential-based controls is recommended in a global risk-adaptive AI governance framework [S29].
MAJOR DISCUSSION POINT
Tiered access & contextual norms
Argument 2
*Preserving open‑source benefits* – Speaker 2: Open‑source tools are essential for low‑resource settings; governance should not conflate openness with danger.
EXPLANATION
Open‑source biodesign tools are critical for innovation in low‑resource environments, and any governance model that treats open‑source as inherently risky would hinder progress. Policies should differentiate between the tool’s capabilities and its misuse potential.
EVIDENCE
He emphasizes that open-source tools are necessary for low-resource settings and warns against equating openness with danger, stating that open-source development is a vital innovation point ([53-56]).
MAJOR DISCUSSION POINT
Preserving open‑source benefits
Argument 3
*Periodic global monitoring* – Speaker 2: Propose a six‑monthly, government‑backed AI safety institute that conducts independent assessments and shares results through a credentialed network.
EXPLANATION
Regular, semi‑annual independent evaluations of AI systems, supported by governments and coordinated through a credentialed network, can keep risk assessments up‑to‑date. Automation with AI can increase the efficiency of this monitoring.
EVIDENCE
He cites RAND Europe’s recommendation for a six-monthly ritual of monitoring and risk assessment, involving governments and independent researchers, and suggests using AI to automate the process ([105-108]).
MAJOR DISCUSSION POINT
Periodic global monitoring
Argument 4
*Pre‑deployment assessment* – Speaker 2: Structured rubrics before release are a critical safeguard, especially for frontier models that can outperform expert virologists.
EXPLANATION
Assessing AI systems against structured criteria before deployment can prevent dangerous capabilities from being released unchecked. Sharing the assessment outcomes with a credentialed community ensures broader awareness while protecting sensitive information.
EVIDENCE
He highlights the importance of pre-deployment assessments with structured rubrics prior to releasing frontier models, noting that once released the danger cannot be withdrawn ([44-48]). He also mentions that assessment results should be shared across a credentialed network with tiered confidentiality rather than kept proprietary ([118-119]).
MAJOR DISCUSSION POINT
Pre‑deployment assessment
Argument 5
*Data‑standard harmonisation* – Speaker 2: Advocate for federated standards (e.g., HL7‑FHIR‑style) to enable interoperable biosurveillance across countries.
EXPLANATION
To avoid fragmentation, biosurveillance data should follow harmonised, federated standards that allow different jurisdictions to exchange information securely. An HL7‑FHIR‑like framework adapted for public‑health surveillance can provide the needed interoperability.
EVIDENCE
He points out the current lack of unified data standards for biosurveillance and proposes a federated interpretability framework similar to HL7-FHIR, adapted for public-health data ([226-230]).
MAJOR DISCUSSION POINT
Data‑standard harmonisation
Argument 6
*Pre‑negotiated safe‑harbor agreements* – Speaker 2: Legal frameworks must be established in advance to allow rapid cross‑border data sharing during public‑health emergencies.
EXPLANATION
Legal safe‑harbor provisions should be negotiated before crises occur so that data can be shared swiftly without legal hesitation. This pre‑emptive approach enables coordinated responses during emergencies.
EVIDENCE
He argues that safe-harbor agreements for cross-border data sharing need to be negotiated beforehand, otherwise countries may refuse data exchange during an outbreak ([230-234]).
MAJOR DISCUSSION POINT
Pre‑negotiated safe‑harbor agreements
Argument 7
*Clarify the intended role of an AI system before applying governance* – Understanding what a system is meant to do is a prerequisite for choosing appropriate oversight mechanisms.
EXPLANATION
A clear role definition helps differentiate between benign, assistive, or potentially dangerous applications, guiding the selection of safeguards.
EVIDENCE
Speaker 2 repeatedly stresses the need for a clear understanding of the AI system’s role as a key point before any governance discussion [84-91].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Fit-for-purpose AI governance literature stresses that defining a system’s intended role is a prerequisite for selecting suitable governance levers [S31].
MAJOR DISCUSSION POINT
Role‑based governance
Argument 8
*Employ non‑interactive, automated risk‑monitoring methods* – Automated assessments that do not require researchers to directly query dangerous models can provide meaningful safeguards without exposing users to risk.
EXPLANATION
Such non‑interactive methodologies reduce the chance of accidental misuse while still delivering valuable risk insights.
EVIDENCE
Speaker 2 describes a non-interactive methodology that avoids direct researcher interaction with dangerous systems, presenting it as an already meaningful safeguard [107-108].
MAJOR DISCUSSION POINT
Non‑interactive monitoring
Argument 9
*Anchor AI‑safety institutes within existing international frameworks* – Linking new AI safety bodies to the Biological Weapons Convention (BWC) or the World Health Organization (WHO) provides legitimacy and facilitates coordination.
EXPLANATION
Embedding AI safety institutions within established treaties ensures they operate under recognized legal mandates and benefit from existing verification mechanisms.
EVIDENCE
Speaker 2 notes that an AI safety institute should have anchoring around the Biological Weapons Convention or the WHO to strengthen its authority [116-118].
MAJOR DISCUSSION POINT
International anchoring
Argument 10
*Require substantial multilateral government investment for semi‑annual monitoring* – A six‑monthly risk‑assessment ritual cannot be sustained without dedicated funding from governments at the multilateral level.
EXPLANATION
Consistent financial support ensures the continuity, depth and credibility of periodic global monitoring activities.
EVIDENCE
Speaker 2 points out that the proposed six-monthly monitoring cadence would need a very significant investment from governments, emphasizing that it cannot proceed without such funding [111-112].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Calls for adequately funded, flexible regulatory mechanisms to support continuous monitoring of AI systems are made in discussions about adaptive AI regulation [S25].
MAJOR DISCUSSION POINT
Funding for periodic monitoring
S
Speaker 3
8 arguments147 words per minute1665 words675 seconds
Argument 1
*Capacity gaps & AI readiness* – Speaker 3: Indian and Southeast Asian institutions vary widely in resources; AI readiness must be tailored to local contexts.
EXPLANATION
AI readiness differs dramatically across the Global South, with India ranking high globally but many Southeast Asian nations lagging. Governance and capacity‑building measures must reflect these heterogeneous resource levels and local needs.
EVIDENCE
Speaker 3 notes India’s strong AI ranking (third globally) contrasted with Indonesia’s lower rank (~49), highlighting the gap in AI readiness across the region ([62-66]). He stresses that solutions designed for Western contexts cannot be directly applied to the varied capacities of South-East Asian institutions ([64-66]).
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Analyses of global capacity disparities highlight the need for targeted capacity-building in developing regions, noting India’s high AI ranking versus lower-ranked Southeast Asian nations [S22], [S23].
MAJOR DISCUSSION POINT
Capacity gaps & AI readiness
Argument 2
*Socio‑cultural benchmarks* – Speaker 3: Existing safety benchmarks fail 20‑30 % of biological risk tests; assessments must incorporate regional socio‑cultural factors and participatory stakeholder input.
EXPLANATION
Current safety benchmarks for large language models perform poorly in biological risk scenarios, especially in the Global South. Incorporating socio‑cultural evaluations and stakeholder participation can produce more relevant safeguards.
EVIDENCE
He references a Southeast-Asia safety benchmark showing that leading LLMs fail 20-30 % of biological risk evaluations ([68-70]) and argues for additional sociocultural assessments that consider regional harms and involve end-users and stakeholders throughout the development lifecycle ([71-75]).
MAJOR DISCUSSION POINT
Socio‑cultural benchmarks
Argument 3
*Establish a Global South network for trustworthy AI* – A dedicated network can coordinate capacity‑building, standards‑setting and shared learning among low‑resource countries.
EXPLANATION
By pooling expertise and resources, the Global South can develop context‑appropriate governance models and avoid reliance on external solutions.
EVIDENCE
Speaker 3 announces the launch of a global-south network for trustworthy AI that will enable collaborative development and deployment in low-resource settings [164-165].
MAJOR DISCUSSION POINT
Regional collaboration
Argument 4
*Create an AI safety commons for the Global South* – A shared repository of safety tools, benchmarks and best‑practice guidelines will accelerate responsible AI deployment across diverse contexts.
EXPLANATION
The commons provides open access to evaluation resources, fostering transparency and collective improvement of safety standards.
EVIDENCE
Speaker 3 describes an upcoming AI safety commons for the Global South as part of the safe and trusted AI pillar, expected to be operational within one to two years [165-166].
MAJOR DISCUSSION POINT
Safety commons
Argument 5
*Develop an incident‑reporting framework tailored to Indian settings* – A context‑specific mechanism captures AI‑related incidents that might be missed by Western‑centric reporting systems.
EXPLANATION
Tailoring the taxonomy and reporting process to local realities improves data quality and enables timely response to emerging threats.
EVIDENCE
Speaker 3 mentions that CRI has created an incident-reporting mechanism and framework specifically designed for Indian contexts, differing from Western models [169-170].
MAJOR DISCUSSION POINT
Localized incident reporting
Argument 6
*Prioritise privacy protections for marginalized communities* – AI deployments must safeguard the data and identities of vulnerable groups to prevent disproportionate harms.
EXPLANATION
Embedding privacy safeguards ensures that AI‑driven surveillance or health tools do not exacerbate existing inequities.
EVIDENCE
Speaker 3 stresses the need to provide more privacy to people, especially those from marginalized communities, as part of responsible AI deployment [176-177].
MAJOR DISCUSSION POINT
Privacy for vulnerable groups
Argument 7
*Foster collaborative multi‑stakeholder governance* – Effective AI safety requires coordinated action among academia, industry, government and civil society.
EXPLANATION
Joint efforts break silos, align incentives and ensure that diverse perspectives shape policy and technical standards.
EVIDENCE
Speaker 3 calls for collaborative work between different stakeholders and notes that governments have already provided prompt knowledge through techno-legal frameworks and guidelines [177-179].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Proactive collaboration between industry, academia and governments is identified as essential to reduce capacity gaps and build resilient AI governance ecosystems [S22].
MAJOR DISCUSSION POINT
Multi‑stakeholder collaboration
Argument 8
*Align AI governance with emerging national guidelines such as METI’s* – Nationally published AI governance guidelines can serve as a template for regional adaptation and harmonisation.
EXPLANATION
Referencing METI’s recently released AI governance guidelines helps ensure consistency while allowing local tailoring.
EVIDENCE
Speaker 3 notes that governments have issued AI governance guidelines, specifically mentioning a recent METI publication that can inform other Southeast Asian countries [178-180].
MAJOR DISCUSSION POINT
National guideline alignment
A
Audience Member 1
1 argument167 words per minute100 words35 seconds
Argument 1
*Comprehensive harms taxonomy* – Audience Member 1: Calls for inclusion of psychological, cyber‑incident, socio‑economic, and environmental harms alongside physical risks in AI safety assessments.
EXPLANATION
A broader taxonomy that captures non‑physical harms—such as psychological, cyber‑incident, socio‑economic, environmental, and algorithmic impacts—provides a more complete picture of AI risks. This enables targeted mitigation strategies across diverse domains.
EVIDENCE
The participant describes CIRI’s work on a taxonomy that categorises harms into physical, psychological, cyber-incident, socio-economic, environmental, and algorithmic categories, and mentions a toolkit used with a hospital to assess healthcare workers’ perceptions of AI tools ([265-274]).
MAJOR DISCUSSION POINT
Comprehensive harms taxonomy
A
Audience Member 2
1 argument170 words per minute75 words26 seconds
Argument 1
*Model‑drift mitigation* – Audience Member 2: Highlights the need for continuous monitoring of distributional shifts over time to maintain model safety and performance.
EXPLANATION
AI models can degrade as data distributions change over time, so ongoing monitoring for temporal drift is essential to ensure continued safety and reliability. Detecting and addressing drift should be part of systematic model evaluation.
EVIDENCE
The audience member points out that model-drift monitoring should consider data moving out of distribution over time, describing this as part of a system-monitoring approach to safety ([286-288]).
MAJOR DISCUSSION POINT
Model‑drift mitigation
A
Audience Member 3
2 arguments190 words per minute78 words24 seconds
Argument 1
*Empowered biosafety officers* – Audience Member 3: Suggests a layered response where institutional biosafety officers report upward to central leadership for a holistic view.
EXPLANATION
Decentralising incident response by empowering biosafety officers at labs and institutions, while establishing clear channels for reporting to national leadership, creates a coordinated yet flexible oversight system.
EVIDENCE
He proposes empowering biosafety officers and institutional biosafety committees, building capacity for them, and creating mechanisms for incident reporting up to top leadership for an overview of the situation across institutes ([295-299]).
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Decentralised oversight that empowers local biosafety officers and creates reporting channels to central leadership is advocated as a key element of effective biosecurity governance [S3].
MAJOR DISCUSSION POINT
Empowered biosafety officers
Argument 2
*Clear agency roles* – Audience Member 3: Uses Singapore’s multi‑agency model (NEA, MOH, Communicable Disease Agency, Prepare Agency) as an example of coordinated response that other nations could emulate.
EXPLANATION
A clear delineation of responsibilities among agencies—such as Singapore’s National Environmental Agency, Ministry of Health, Communicable Disease Agency, and Prepare Agency—ensures swift and organized action during health crises. Replicating such role clarity can improve cross‑border coordination.
EVIDENCE
He describes Singapore’s structure where distinct agencies handle specific tasks (e.g., NEA for wastewater surveillance) and notes that this clear allocation of duties serves as a model for coordinated incident response ([301-309]).
MAJOR DISCUSSION POINT
Clear agency roles
M
Moderator
7 arguments125 words per minute969 words462 seconds
Argument 1
*Define the appropriate governance lens* – The discussion should first decide whether AI‑biosecurity issues are best addressed through data‑governance, model‑design controls, or verification/compliance mechanisms.
EXPLANATION
Choosing the right angle determines which policies, standards and oversight tools will be most effective for managing emerging risks.
EVIDENCE
The moderator opens the session by asking whether the problem should be framed as a data-governance issue, a model-design problem, or a verification/compliance challenge [1].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need to frame AI-biosecurity governance as data governance, model design, or compliance is reflected in fit-for-purpose governance analyses that explore appropriate lenses for AI risk management [S30], [S31].
MAJOR DISCUSSION POINT
Governance framing
Argument 2
*Balance open‑science benefits with safeguards* – Open scientific collaboration must be preserved while preventing the destabilising diffusion of high‑risk AI capabilities.
EXPLANATION
Open science accelerates innovation and capacity building, especially in low‑resource settings, but unrestricted release of powerful tools can create security threats.
EVIDENCE
The moderator asks how to keep the advantages of open science while avoiding the spread of dangerous capabilities [34-35].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Discussions on AI regulation emphasize balancing the benefits of open science with the need to mitigate the diffusion of dangerous capabilities [S30].
MAJOR DISCUSSION POINT
Open‑science vs. risk mitigation
Argument 3
*Make independent evaluation and red‑team­ing a norm* – Systematic, independent technical assessments should become a permanent part of the global scientific infrastructure for AI systems that generate biological outputs.
EXPLANATION
Regular red‑team exercises and independent audits can surface hidden vulnerabilities before they are exploited, ensuring a baseline of safety worldwide.
EVIDENCE
The moderator explicitly asks whether independent evaluation and red-team­ing should become a norm for bio-security-relevant AI systems [82-83].
MAJOR DISCUSSION POINT
Institutionalising independent evaluation
Argument 4
*Ensure safety measures are rigorous yet feasible* – Governance frameworks must strike a balance between scientific rigor and the practical constraints of diverse research ecosystems.
EXPLANATION
Overly burdensome requirements could hinder research, while lax standards leave gaps; policies need to be adaptable to varying institutional capacities.
EVIDENCE
The moderator asks Suryesh how to keep safety measures rigorous but feasible within the research ecosystems he knows well [120-121].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Adaptive regulatory approaches that consider practical constraints of varied research ecosystems are highlighted as necessary for feasible yet rigorous AI safety measures [S25].
MAJOR DISCUSSION POINT
Feasibility of safety regimes
Argument 5
*Empower emerging scientific powers to shape governance* – Countries of the Global South should lead the design of AI governance rather than merely importing Western frameworks.
EXPLANATION
Local contexts, resource constraints and unique innovation pathways require home‑grown policies that can be shared with other emerging economies.
EVIDENCE
The moderator prompts Geetha to discuss whether emerging scientific powers can shape governance and what leadership would look like [159-160].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The importance of emerging scientific powers taking a leading role in AI governance and capacity-building is underscored in discussions of global capacity disparities [S22].
MAJOR DISCUSSION POINT
Leadership of emerging powers
Argument 6
*Integrate AI safety into existing programs without fragmentation* – AI must be embedded in legacy biosurveillance and public‑health systems in a coordinated way to avoid siloed evaluations.
EXPLANATION
Co‑designing safety, interoperability and evaluation standards across new AI‑enabled tools and established infrastructures prevents gaps and duplication.
EVIDENCE
The moderator asks how to ensure safety, evaluation and interoperability across emerging and legacy systems without fragmentation [204-208].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Integration of AI safety with existing biosafety and resource-security programs is recommended to prevent fragmentation and ensure coordinated oversight [S3].
MAJOR DISCUSSION POINT
Integration with legacy systems
Argument 7
*Adopt a systemic, institution‑level approach to safety evaluation* – Auditing algorithms alone is insufficient; the surrounding institutions and operational practices must also be assessed.
EXPLANATION
A holistic view that includes institutional policies, capacity and incentive structures yields more reliable risk mitigation than isolated model checks.
EVIDENCE
In the closing summary the moderator stresses that safety evaluation must be systemic and institution-focused, warning against auditing algorithms while ignoring the institutions that operationalise them [255-259].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
A systemic, institution-focused safety evaluation that includes verification and institutional practices is advocated by technical standards bodies [S32].
MAJOR DISCUSSION POINT
Systemic safety evaluation
Agreements
Agreement Points
The rapid emergence of AI‑enabled biodesign tools shifts bio‑risk upstream from physical containment to the design phase, requiring new oversight mechanisms.
Speakers: Speaker 1, Speaker 2
*Structural risk shift* – Speaker 1: AI tools decouple design from physical containment, moving bio‑risk upstream to the design side and demanding new oversight mechanisms. *Pre‑deployment assessment* – Speaker 2: Structured rubrics before release are a critical safeguard, especially for frontier models that can outperform experts.
Both speakers agree that AI-driven biodesign changes the risk landscape by moving the critical control point to the design stage and that pre-deployment safety checks are essential to manage this shift [10-12][44-48].
POLICY CONTEXT (KNOWLEDGE BASE)
This aligns with emerging governance principles that call for embedding safety and responsibility from the design stage of advanced AI systems [S46][S47] and reflects concerns about dual-use biotechnologies that require upstream risk assessment [S59].
Oversight should be decentralised and empower local biosafety, biosecurity and information‑security offices rather than rely on a single central authority.
Speakers: Speaker 1, Audience Member 3, Moderator
*Decentralized oversight* – Speaker 1: A single authority in Delhi cannot manage everything; checks and balances must be spread to labs and offices. *Empowered biosafety officers* – Audience Member 3: Institutional biosafety officers should be empowered and have clear reporting channels to central leadership. *Governance framing* – Moderator: The discussion must decide the appropriate governance lens (data, model design, verification) which implies choosing the right institutional architecture.
All three stress that a distributed network of empowered local units is needed for timely, adaptive governance of AI-bio risks, with mechanisms to aggregate information centrally [24-27][295-299][1].
POLICY CONTEXT (KNOWLEDGE BASE)
Calls for decentralized oversight with empowered local biosafety officers have been articulated in recent workshops on AI and biosafety [S49] and are echoed in discussions about strengthening national-level capacities.
Governance should be tiered or differentiated according to the capability and risk level of AI tools, avoiding blanket restrictions.
Speakers: Speaker 1, Speaker 2
*Tiered risk classification* – Speaker 1: Not everything is highly risky; high‑risk biodesign tools should be treated differently from low‑risk ones. *Tiered access & contextual norms* – Speaker 2: Adopt differentiated, capability‑level governance (pre‑deployment assessments, KYC‑style credentialing) rather than blanket bans.
Both speakers advocate a graduated approach that matches oversight intensity to the specific danger posed by a tool, emphasizing flexibility over one-size-fits-all bans [142-146][41-45].
POLICY CONTEXT (KNOWLEDGE BASE)
Risk-based, tiered regulatory approaches are highlighted in India’s AI policy framework, which balances experimentation with systemic risk mitigation [S54], and are consistent with broader recommendations to match oversight intensity to AI capability [S55].
Continuous, adaptive monitoring (e.g., semi‑annual reviews) is needed to keep pace with fast‑moving AI capabilities.
Speakers: Speaker 1, Speaker 2
*Adaptive, continuous oversight* – Speaker 1: Traditional periodic inspections are insufficient; oversight must evolve in real time. *Periodic global monitoring* – Speaker 2: Proposes a six‑monthly, government‑backed AI safety institute to conduct independent assessments.
Both agree that static, infrequent reviews cannot match AI’s speed; a regular, possibly semi-annual, monitoring cadence is required, supported by adequate funding [23-24][105-108].
POLICY CONTEXT (KNOWLEDGE BASE)
Adaptive monitoring and regular policy reviews are recommended to cope with the rapid pace of AI innovation, as noted in UN Secretary-General strategy discussions and adaptive leadership reports [S55][S57][S62].
AI safety evaluation should be embedded within existing biosafety, grant‑review and incident‑reporting processes rather than treated as a separate activity.
Speakers: Speaker 1, Speaker 2, Speaker 3
*Integrate AI evaluation into biosafety systems* – Speaker 1: Align AI risk assessment with current biosafety offices. *Pre‑deployment assessment & credential network* – Speaker 2: Results of assessments should be shared across a credentialed network and integrated into grant decisions. *Incident‑reporting framework tailored to Indian settings* – Speaker 3: Developed a context‑specific incident‑reporting mechanism.
All three stress that AI safety checks must be woven into existing institutional workflows-biosafety offices, funding reviews, and incident-reporting pipelines-to ensure coherence and effectiveness [18-20][118-119][169-170].
POLICY CONTEXT (KNOWLEDGE BASE)
Embedding AI safety into existing review and reporting mechanisms is advocated in recent AI governance workshops that stress integration rather than retrofitting safety checklists [S46][S47].
Capacity gaps in the Global South require tailored AI readiness programmes, training, and collaborative networks.
Speakers: Speaker 1, Speaker 3, Moderator
*Invest in capacity building* – Speaker 1: Train more people in AI‑enabled biosafety, chemical security, etc. *Capacity gaps & AI readiness* – Speaker 3: Highlight heterogeneity of resources across South‑East Asian institutions and the need for locally‑relevant solutions. *Empower emerging scientific powers* – Moderator: Emerging powers should shape governance rather than merely import Western frameworks.
There is consensus that building technical and policy capacity in low-resource settings, and creating regional networks, is essential for effective AI-biosecurity governance [15-18][62-66][159-160].
POLICY CONTEXT (KNOWLEDGE BASE)
Addressing capacity gaps through tailored programmes and collaborative networks has been highlighted in multi-stakeholder development forums and in discussions on open-source tools for low-resource settings [S51][S65].
Multi‑stakeholder collaboration and shared standards (including data‑standard harmonisation) are crucial to avoid fragmentation across borders.
Speakers: Speaker 2, Speaker 3, Moderator
*Data‑standard harmonisation* – Speaker 2: Proposes federated standards (HL7‑FHIR‑style) for interoperable biosurveillance. *Collaborative multi‑stakeholder governance* – Speaker 3: Calls for joint work among academia, industry, government, civil society. *Integrate AI safety with legacy systems without fragmentation* – Moderator: Emphasises need for coordinated safety across new AI tools and existing programmes.
All three underline that common technical standards and collaborative governance structures are needed to prevent siloed, fragmented responses to AI-driven bio-risks [226-230][177-179][204-208].
POLICY CONTEXT (KNOWLEDGE BASE)
The importance of multi-stakeholder collaboration and harmonised standards to prevent fragmentation is a recurring theme in IGF and policy network sessions on internet fragmentation and AI governance [S50][S51][S53][S58].
Similar Viewpoints
Both see the need for formal, pre‑release safety checks that are tied to funding and research workflows, ensuring that risky capabilities are vetted before they reach the lab or market [44-48][147-148].
Speakers: Speaker 1, Speaker 2
*Pre‑deployment assessment* – Speaker 2 (structured rubrics before release). *Integrate AI evaluation into grant/research processes* – Speaker 1 (AI modules in grant review, cross‑trained panels).
Both propose institutional mechanisms at the regional or global level that regularly assess AI safety and share findings across a trusted community [164-165][105-108].
Speakers: Speaker 2, Speaker 3
*Global‑South network for trustworthy AI* – Speaker 3 (launching a network). *Periodic global monitoring* – Speaker 2 (six‑monthly institute).
Consensus that biosafety officers should be given authority, training, and clear reporting channels to central leadership to create an effective, layered response system [24-27][295-299].
Speakers: Speaker 1, Audience Member 3
*Empower biosafety officers* – Speaker 1 (decentralised checks, empowerment). *Empowered biosafety officers* – Audience Member 3 (layered response, reporting upward).
Unexpected Consensus
Inclusion of psychological and broader non‑physical harms in AI safety taxonomies.
Speakers: Audience Member 1, Speaker 3
*Comprehensive harms taxonomy* – Audience Member 1 (physical, psychological, cyber‑incident, socio‑economic, environmental). *Socio‑cultural benchmarks* – Speaker 3 (need for assessments beyond technical performance, including human‑centric impacts).
While the panel largely focused on bio-security and technical risk, both the audience member and Speaker 3 highlighted the importance of psychological and socio-cultural harms, extending the safety conversation beyond the expected biological scope [265-274][68-70].
POLICY CONTEXT (KNOWLEDGE BASE)
Recent AI safety discussions have expanded taxonomies to cover mental health and other non-physical harms, reflecting findings on psychological impacts of AI use [S64] and calls for contextual safety definitions involving diverse stakeholders [S63].
Recognition that open‑source tools are essential for low‑resource settings and should not be automatically restricted.
Speakers: Speaker 2, Speaker 1
*Preserving open‑source benefits* – Speaker 2 (open‑source critical for low‑resource innovation). *Digital‑to‑physical barrier* – Speaker 1 (even with open tools, physical infrastructure limits risk).
Speaker 1 did not explicitly champion open-source, yet his point about the digital-to-physical barrier implicitly supports the idea that open tools alone do not create immediate danger, aligning with Speaker 2’s stance that open-source should be preserved for developing contexts [250-251][53-56].
POLICY CONTEXT (KNOWLEDGE BASE)
Open-source AI is identified as a catalyst for innovation and global partnership, especially for low-resource environments, arguing against blanket restrictions [S65].
Overall Assessment

The panel shows strong convergence on several core themes: the upstream shift of bio‑risk due to AI, the need for decentralised and capacity‑building‑focused oversight, tiered and adaptive governance, and the integration of AI safety into existing institutional processes. Participants from different backgrounds (bio‑security, AI policy, regional capacity building) largely reinforce each other’s proposals rather than contradict them.

High consensus – most speakers align on the structural nature of the problem and on concrete policy levers (decentralised checks, tiered risk regimes, continuous monitoring, capacity building, and collaborative standards). This broad agreement suggests that future work can move quickly toward implementing multi‑layered, region‑specific governance frameworks without needing to resolve major conceptual disputes.

Differences
Different Viewpoints
Centralisation vs decentralisation of oversight mechanisms
Speakers: Speaker 1, Speaker 2
*Decentralized oversight* – Speaker 1: “If there is one authority sitting somewhere in Delhi and trying to do everything, that’s not going to work… How do we decentralize these kind of oversight systems to some extent?” [24-26] *Anchor AI-safety institutes within existing international frameworks* – Speaker 2: “…implement this AI safety or security institute model… It is technically credentialed. It’s independent, but also has a very… formal relationship with the government… the institution to have some kind of anchoring around biological weapons convention or the WHO…” [113-118][116-118]
Speaker 1 argues that a single national authority cannot keep pace with AI-driven bio-risk and calls for a network of empowered institutional biosafety and information-security offices (decentralised checks and balances) [24-26][27-31]. Speaker 2 proposes creating a dedicated AI safety institute that is formally linked to governments and anchored to international treaties such as the BWC or WHO, implying a more centralised, globally coordinated body [113-118][116-118].
POLICY CONTEXT (KNOWLEDGE BASE)
The debate mirrors recent calls for decentralized oversight with empowered local offices versus centralized international bodies, as discussed in biosafety workshops and AI safety institute forums [S49][S58].
Frequency and nature of oversight (continuous adaptive vs semi‑annual ritual)
Speakers: Speaker 1, Speaker 2
*Adopt adaptive, continuous oversight mechanisms* – Speaker 1: “We need something which is far more adaptive and quick… Traditional periodic paper-based inspections are insufficient…” [23-24][135-138] *Periodic global monitoring* – Speaker 2: “We recommended that governments and also independent researchers do this six-monthly ritual of monitoring and also assessment of risk on a continuous basis…” [105-108][111-112]
Speaker 1 stresses that oversight must evolve in real time with rapid AI advances, moving beyond occasional paper-based inspections to adaptive, continuous checks [23-24][135-138]. Speaker 2 suggests a concrete six-monthly monitoring cadence, supported by multilateral funding and automation, as the primary mechanism for ongoing risk assessment [105-108][111-112].
POLICY CONTEXT (KNOWLEDGE BASE)
Policy circles are divided on oversight cadence, with some advocating continuous adaptive monitoring and others favoring periodic semi-annual reviews, reflecting differing interpretations of adaptive governance recommendations [S55][S57][S62].
Unexpected Differences
International anchoring of AI‑safety bodies vs national‑level decentralised approach
Speakers: Speaker 1, Speaker 2
*Decentralized oversight* – Speaker 1: emphasises local empowerment without reference to international treaties [24-26][27-31] *Anchor AI-safety institutes within existing international frameworks* – Speaker 2: explicitly calls for linking the institute to the Biological Weapons Convention or WHO [116-118]
Speaker 1’s vision stays within national institutional reforms, whereas Speaker 2 introduces an unexpected layer of international treaty-based anchoring, revealing a divergence in the perceived locus of legitimacy and authority for AI-biosecurity governance. This contrast was not anticipated given the predominantly national focus of the earlier discussion. [24-26][116-118]
POLICY CONTEXT (KNOWLEDGE BASE)
Tensions between establishing international AI-safety institutions and maintaining national-level decentralized oversight have been highlighted in discussions on global AI safety institutes and the need for coordinated yet locally responsive frameworks [S58][S49].
Overall Assessment

The panel shows strong consensus on the need for enhanced AI‑biosecurity governance, capacity building, and multi‑stakeholder collaboration. The principal disagreements centre on the architecture of oversight (decentralised national networks vs a centralised international institute) and on the cadence of monitoring (continuous adaptive mechanisms vs a fixed six‑monthly ritual). These divergences reflect differing assumptions about feasibility, resource allocation, and legitimacy, but they do not undermine the shared recognition of risk.

Moderate – while all participants agree on the problem and the overarching goal of safer AI‑enabled biology, they diverge on structural and procedural solutions. The implications are that any policy outcome will need to reconcile decentralised national capacities with some form of coordinated, possibly internationally‑anchored, monitoring framework, and must balance the desire for real‑time adaptability with the practicality of periodic reviews.

Partial Agreements
All three speakers agree that robust safety evaluation is essential, but they differ on the implementation route: Speaker 1 favours embedding AI risk checks within existing institutional biosafety and grant‑review processes; Speaker 2 favours a semi‑annual, credentialed, globally coordinated monitoring institute; Speaker 3 pushes for a regional South‑South network and a shared safety commons to provide capacity‑building and evaluation tools. [18-20][105-108][164-166]
Speakers: Speaker 1, Speaker 2, Speaker 3
*Integrate AI evaluation into existing biosafety systems* – Speaker 1: “Integrating AI evaluation into biosafety system, strengthening the institutional readiness…” [18-20] *Periodic global monitoring* – Speaker 2: “…six-monthly ritual of monitoring and also assessment of risk…” [105-108] *Establish a Global South network for trustworthy AI* – Speaker 3: “We are going to launch a global south network for trustworthy AI…” [164-166]
Takeaways
Key takeaways
AI‑enabled biodesign tools shift bio‑risk upstream from physical labs to the design phase, requiring new governance structures. Centralised oversight (e.g., a single authority in Delhi) is insufficient; oversight must be decentralized to institutional biosafety, information‑security offices, and regional bodies. Open science can be preserved by using tiered, capability‑level access controls and contextual norms rather than blanket bans, while retaining the benefits of open‑source tools for low‑resource settings. There are significant AI readiness and capacity gaps in India and other Global South countries; governance must be tailored to local socio‑cultural contexts and resource levels. Existing safety benchmarks often fail for biological applications; participatory, region‑specific assessments are needed. Independent, periodic (e.g., six‑monthly) evaluation and red‑team exercises, supported by a dedicated AI safety institute, are essential for continuous risk monitoring. Pre‑deployment assessments using structured rubrics are a critical safeguard before releasing frontier models. Cross‑border biosurveillance suffers from fragmented data standards and legal regimes; harmonised federated standards and pre‑negotiated safe‑harbor agreements are required. A comprehensive taxonomy of harms should include physical, psychological, cyber‑incident, socio‑economic, and environmental impacts. Model performance can degrade over time due to data drift; continuous monitoring and adaptation are necessary. Effective incident‑response frameworks need empowered biosafety officers at the institutional level reporting to coordinated central leadership, with clear agency roles as exemplified by Singapore.
Resolutions and action items
Develop and deploy tiered access mechanisms and contextual norms for high‑risk AI biodesign tools (pre‑deployment assessment, KYC‑style credentialing). Create a six‑monthly, government‑backed AI safety institute to conduct independent evaluations and share findings through a credentialed network. Establish AI safety commons for the Global South to provide shared evaluation resources and benchmarks. Launch a Global South network for trustworthy AI and an incident‑reporting framework tailored to Indian and regional contexts. Integrate AI evaluation modules into grant‑review processes and form cross‑trained AI biosafety review panels at institutions. Promote capacity‑building programmes for biosafety officers, information‑security staff, and other stakeholders in the Global South. Adopt federated data‑standard frameworks (e.g., HL7‑FHIR‑style) for biosurveillance interoperability across countries. Negotiate pre‑emptive legal safe‑harbor agreements to enable rapid cross‑border data sharing during public‑health emergencies. Implement continuous model‑monitoring pipelines to detect and mitigate temporal data drift. Encourage decentralized yet coordinated incident‑response structures, drawing on multi‑agency models such as Singapore’s.
Unresolved issues
Specific mechanisms and governance models for decentralising oversight while maintaining effective central coordination remain undefined. Funding models and international collaboration structures for the proposed AI safety institute and Global South safety commons are not settled. How to enforce tiered access and credentialing without stifling legitimate research, especially in low‑resource environments, needs further clarification. The process for creating and maintaining region‑specific socio‑cultural safety benchmarks and participatory assessment frameworks is still open. Legal pathways to establish pre‑negotiated safe‑harbor agreements across diverse jurisdictions have not been detailed. Strategies for monitoring and regulating DIY or small‑scale commercial biodesign activities outside formal oversight structures are not resolved. Methods to ensure consistent incident reporting and data sharing between institutions and central authorities are still under discussion.
Suggested compromises
Adopt differentiated, capability‑level governance (tiered access, contextual norms) instead of blanket restrictions on AI tools. Combine decentralized institutional oversight with a coordinated central leadership layer for incident aggregation and response. Allow open‑source development while applying pre‑deployment assessments and credentialed access for high‑risk capabilities. Balance rapid, adaptive safety measures with existing periodic review processes by introducing faster, AI‑assisted monitoring cycles. Integrate both technical (model‑level) and socio‑technical (institutional, cultural) assessments to capture the full risk spectrum.
Thought Provoking Comments
AI biodesign tools are decoupling risk from physical lab containment and moving the risk upstream to the design phase, fundamentally changing the biosafety landscape.
Highlights a structural shift where AI enables biological design without traditional physical safeguards, creating new upstream vulnerabilities that existing governance models may not address.
Set the stage for the discussion on the need for new oversight mechanisms, prompting later speakers to propose decentralized checks, pre‑deployment assessments, and capability‑aware safeguards.
Speaker: Speaker 1
We should adopt a tiered access and contextual norms approach—using pre‑deployment assessments and KYC‑style credentialing—to differentiate between defensive research and unrestricted open‑source tools.
Introduces a concrete, nuanced governance framework that balances openness with security, moving beyond binary yes/no answers.
Shifted the conversation from abstract risk to actionable policy ideas, leading the moderator to ask about institutional gaps and influencing later suggestions about differentiated capability‑level governance.
Speaker: Speaker 2
AI readiness varies dramatically across regions; Southeast Asian countries need sociocultural benchmarks and small‑language‑model solutions tailored to low‑resource settings rather than importing Western‑centric frameworks.
Points out the mismatch between global AI safety standards and local capacities, emphasizing the importance of culturally aware evaluation and participatory design.
Redirected the dialogue toward equity and capacity‑building, prompting discussions on localized incident reporting, AI safety commons for the Global South, and the need for adaptable frameworks.
Speaker: Speaker 3
A six‑monthly, independent, credentialed institute—modeled after the IAEA—should conduct continuous risk monitoring and share assessment results through a tiered‑confidentiality network.
Proposes an institutional model that institutionalizes red‑teamings and continuous oversight, linking technical evaluation with multilateral governance structures.
Introduced the idea of a formal, recurring global safety ritual, influencing later remarks about building AI safety institutes in India and the need for sustained governmental investment.
Speaker: Speaker 2
Safety measures must move upstream, include tiered risk classification for biodesign tools, and integrate AI evaluation into grant reviews and domestic evaluation capacity, while also leveraging tech‑sovereignty to control data flows.
Combines practical steps (grant‑review integration, cross‑trained panels) with strategic concepts (tech sovereignty), bridging policy and technical domains.
Deepened the conversation about implementation, leading to concrete suggestions such as AI safety institutes, incident‑reporting frameworks, and the need for proportionate, capability‑aware safeguards.
Speaker: Speaker 1
Fragmentation in biosurveillance arises from incompatible data standards and lack of legal safe‑harbors; we need federated standards (e.g., HL7‑FHIR‑like), pre‑negotiated cross‑border data‑sharing agreements, and shared evaluation criteria.
Identifies a concrete technical‑legal bottleneck that hampers coordinated pandemic response and links it to AI safety, offering a clear roadmap for harmonization.
Steered the discussion toward interoperability challenges, prompting audience questions about temporal data drift and reinforcing the theme of cross‑border collaboration.
Speaker: Speaker 2
Incident response must be decentralized yet integrated: empower biosafety officers at the lab level, provide clear reporting channels to central leadership, and ensure top‑down visibility of grassroots incidents.
Synthesizes earlier points about decentralization with a practical governance chain, addressing both prevention and rapid response.
Served as a concluding turning point, aligning the panel around a shared vision of layered oversight and influencing the final audience question about a “web of prevention and incident response framework.”
Speaker: Speaker 1
Overall Assessment

The discussion evolved from recognizing a fundamental shift in biosafety risk—AI moving threat creation upstream—to debating concrete governance mechanisms that balance openness with security. Early insights about upstream risk and tiered access reframed the conversation, prompting participants to surface regional capacity gaps, propose institutionalized monitoring bodies, and stress the need for interoperable data standards. These pivotal comments redirected the dialogue from abstract concerns to actionable, context‑sensitive solutions, ultimately converging on a shared vision of decentralized yet coordinated oversight that can be adapted by emerging scientific powers in the Global South.

Follow-up Questions
How do we preserve the benefits of open science while preventing the destabilizing diffusion of high‑risk AI capabilities?
Balancing openness with security is crucial to retain scientific collaboration without enabling misuse of powerful biodesign tools.
Speaker: Moderator (directed to Speaker 2)
What are the most immediate gaps in evaluating systems, technical capability, regulatory and coordination from a policy perspective?
Identifying priority policy gaps helps focus resources on the most pressing weaknesses in AI‑biosecurity governance.
Speaker: Moderator (directed to Speaker 3)
Should independent evaluation and red‑team­ing of AI systems that generate biological outputs become a norm and part of the global scientific specialist infrastructure? If so, how would we implement it?
Establishing systematic, independent oversight could provide continuous risk monitoring and build trust across nations.
Speaker: Moderator (directed to Speaker 2)
How can we ensure safety measures remain rigorous and feasible within heterogeneous research ecosystems, especially in low‑resource settings?
Practical, adaptable safety frameworks are needed to work across institutions with varying resources and expertise.
Speaker: Moderator (directed to Speaker 1)
Can emerging scientific powers in the Global South shape AI governance, and what would leadership look like in scientific AI ecosystems?
Understanding the role of middle‑income countries can inform inclusive, context‑aware governance models.
Speaker: Moderator (directed to Speaker 3)
Should safety focus be primarily at the model level, or should broader socio‑technical readiness and misuse considerations be emphasized?
Determining the appropriate scope of safety assessment influences how risks are identified and mitigated.
Speaker: Moderator (directed to Speaker 1)
How do we ensure safety, evaluation, and interoperability across legacy and emerging AI systems without fragmentation?
Coordinated standards prevent siloed efforts and enable seamless integration of new AI tools with existing public‑health infrastructure.
Speaker: Moderator (directed to Speaker 2)
What work is being done on defining and categorising non‑physical harms (psychological, socio‑economic, etc.) in AI safety?
Expanding harm taxonomies beyond physical risks is essential for comprehensive AI safety assessments.
Speaker: Audience Member 1 (directed to Speaker 3)
How will temporal data drift affect model performance, and how can we mitigate it?
Models may degrade over time; systematic monitoring and adaptation are needed to maintain safety and reliability.
Speaker: Audience Member 2 (directed to Speaker 3)
What would a successful web of prevention and incident‑response framework look like, and who are exemplars in this space?
A clear, coordinated response architecture is vital for rapid containment of biosecurity incidents across borders.
Speaker: Audience Member 3 (directed to Speakers 1 and 2)
Research needed on decentralized checks and balances / oversight mechanisms for AI bio‑risk.
Centralized authority may be ineffective; exploring decentralized models could improve responsiveness and coverage.
Speaker: Speaker 1
Research needed on tiered access and contextual norms for AI biodesign tools.
Differentiated governance can allow legitimate research while restricting malicious use.
Speaker: Speaker 2
Research needed on AI readiness benchmarks and sociocultural safety evaluations for Southeast Asia.
Current models trained on Western data underperform in regional contexts; tailored benchmarks are required.
Speaker: Speaker 3
Research needed on institutionalizing six‑monthly independent monitoring via an AI safety institute linked to international bodies.
Regular, credentialed assessments could provide continuous oversight but require multilateral investment and governance structures.
Speaker: Speaker 2
Research needed on designing proportionate, capability‑aware safeguards that are adaptive and quick for low‑resource labs.
Traditional periodic, paper‑based audits are too slow for fast‑moving AI developments.
Speaker: Speaker 1
Research needed on building incident‑reporting frameworks and taxonomies tailored to Indian and Global‑South contexts.
Context‑specific reporting captures diverse harms and improves response in varied regulatory environments.
Speaker: Speaker 3
Research needed on creating a Global South network for trustworthy AI and an AI safety commons.
Shared resources and standards can accelerate capacity building across developing nations.
Speaker: Speaker 3
Research needed on harmonising data standards and establishing legal safe harbours for cross‑border biosurveillance data sharing.
Standardised, legally protected data exchange is critical for effective regional outbreak detection and response.
Speaker: Speaker 2
Research needed on enhancing AI literacy and capacity‑building in marginalized communities.
Equitable understanding of AI risks ensures that vulnerable groups are not disproportionately affected.
Speaker: Speaker 3
Research needed on integrating AI evaluation modules into grant review processes and establishing cross‑trained biosafety review panels.
Embedding safety checks early in funding decisions can pre‑empt risky deployments.
Speaker: Speaker 1
Research needed on applying tech‑sovereignty measures to AI safety and security.
Domestic control over AI tools may reduce reliance on external platforms and improve national security.
Speaker: Speaker 1
Research needed on developing a comprehensive taxonomy for psychological and other non‑physical harms, and tools to assess perceptions among healthcare workers.
Understanding user perception and psychological impact informs targeted training and risk mitigation.
Speaker: Speaker 3
Research needed on systematic monitoring of model drift and distribution shift as part of safety monitoring.
Continuous detection of data drift ensures models remain accurate and safe over time.
Speaker: Speaker 3

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.