Free Science at Risk? / Davos 2025

24 Jan 2025 08:00h - 08:45h

Session at a Glance

Summary

This panel discussion focused on the complex issue of research security and international collaboration in science. The participants, representing various academic and research institutions, explored the benefits and risks of global scientific cooperation in an era of increasing technological advancement and national security concerns.


The discussion highlighted the importance of international collaboration for scientific progress and innovation, with participants noting that such cooperation leads to more impactful research and cross-cultural understanding. However, they also acknowledged the growing concerns from governments about potential risks to national security and economic competitiveness.


A key point of debate was the balance between open science and security considerations. Participants discussed the challenges of regulating emerging technologies like AI, gene editing, and robotics, which have dual-use potential. They emphasized the need for self-regulation within the scientific community and the importance of involving scientists in decision-making processes about research security.


The conversation also touched on the changing landscape of research, with high-risk work now occurring in both government labs and private sector companies. This shift has blurred traditional boundaries and complicated the regulation of international collaborations.


Participants debated the effectiveness of country-specific restrictions versus technology-based limitations on collaboration. They also discussed the role of intellectual property protection in fostering innovation while safeguarding national interests.


The panel concluded by emphasizing the need for a balanced approach that incorporates scientific expertise, intelligence insights, and democratic oversight in making decisions about research security and international collaboration. They stressed the importance of educating young scientists about these complex issues to prepare them for future leadership roles in the scientific community.


Keypoints

Major discussion points:


– The benefits and risks of international scientific collaboration


– The changing landscape of high-risk research between government, academia, and private industry


– The challenges of regulating emerging technologies and dual-use research


– The role of scientists in self-regulation vs. government oversight


– Balancing open science and innovation with national security concerns


Overall purpose:


The goal of this discussion was to explore the complex issues surrounding research security and international scientific collaboration in an era of rapid technological advancement and geopolitical tensions. The panel aimed to examine different perspectives on how to balance the benefits of open science with potential national security and economic risks.


Tone:


The overall tone was thoughtful and nuanced, with panelists acknowledging the complexity of the issues. There was a mix of optimism about the benefits of collaboration and innovation, along with caution about potential risks. The tone became slightly more pointed when discussing specific policies or approaches, but remained largely collegial. Towards the end, there was a sense of urgency in finding practical ways to address these challenges.


Speakers

– Michael Spence: President and Provost of University College London


– Kimberly Budil: Laboratory Director of the Lawrence Livermore National Laboratory


– Jonathan Brennan-Badal: Chief Executive of Opentrons Labworks


– Maria Leptin: President of the European Research Council


– Teruo Fujii: President of the University of Tokyo


– Audience: Unnamed audience members asking questions


Additional speakers:


– Dana: AI strategy at Doctolib, a European healthcare platform (audience member who asked a question)


Full session report

International Scientific Collaboration: Benefits and Challenges


This panel discussion brought together leaders from academia, research institutions, and industry to explore the complex landscape of research security and international scientific collaboration. The conversation highlighted both the immense benefits and potential risks associated with global scientific cooperation in an era of rapid technological advancement and heightened national security concerns.


Benefits of International Collaboration


Michael Spence emphasised that international cooperation advances science and increases research impact. Jonathan Brennan-Badal highlighted how open-source collaboration drives innovation in business, while Kimberly Budil noted that an open science ecosystem is important even for national security work. Maria Leptin discussed the European Research Council’s approach to funding excellence-driven research, which has led to significant IP generation and scientific advancements.


Changing Landscape of Research and Conflict


Kimberly Budil highlighted the shifting landscape of high-risk research, noting that many areas of science historically dominated by government research labs have now moved into the private sector, citing the commercialisation of space and the development of AI as examples. This shift has complicated the regulation and oversight of such research. Budil also emphasised the changing nature of conflict, particularly in the cyber realm, which presents new challenges for research security.


Balancing Openness and Security


The discussion frequently returned to the challenge of balancing open science with security concerns. Kimberly Budil argued that while the speed of innovation is a key advantage, some restrictions on collaboration remain valuable. In contrast, Jonathan Brennan-Badal advocated for a more open approach, suggesting that businesses should focus on innovation rather than IP protection as a strategy.


Maria Leptin cautioned against country-specific restrictions on collaboration, arguing that they can be counterproductive. She specifically mentioned the China Initiative as an example of a problematic approach to research security that had negative impacts on the scientific community. Instead, she advocated for constant reassessment of policies to address evolving challenges. Teruo Fujii stressed the need to manage multiple interests in scientific collaboration and mentioned the G7 special interest group on research integrity and security as a forum for sharing best practices among partner countries.


Regulating Emerging Technologies


The panel grappled with the challenges of regulating emerging technologies, particularly those with dual-use potential. Jonathan Brennan-Badal provided a thought-provoking example of an AI model developed at Carnegie Mellon University that could potentially be used to synthesise dangerous chemical compounds, highlighting the risks associated with advanced technologies. This led to a broader discussion about the difficulty of regulating AI due to its long development history and potential benefits.


Kimberly Budil emphasised the need for thoughtful discussion on managing emerging technologies, suggesting that self-regulation alone may not be sufficient. She also stressed the importance of public sector engagement with the private sector on these issues. Jonathan Brennan-Badal agreed, noting that consumer tech companies have a responsibility to ensure appropriate use of their technologies.


Role of Scientists in Decision-Making and Education


Michael Spence argued that scientific expertise should have more weight in decision-making processes related to research security. This view was echoed by other panellists who stressed the importance of involving scientists in policy discussions about international collaboration and security concerns.


The panel agreed on the importance of educating young scientists about research ethics and security. Teruo Fujii emphasised the need to teach responsibility and ethics to young scientists, while Maria Leptin suggested that practical experience helps develop judgment on sharing research. Jonathan Brennan-Badal added that discussing potential misuse with other practitioners is crucial.


Unresolved Issues and Future Directions


The discussion highlighted several unresolved issues, including how to effectively regulate emerging technologies like AI without stifling innovation, where to draw boundaries between open and restricted research, and how to manage dual-use technologies. The panellists suggested some potential compromises, such as focusing collaboration primarily on allies and partners that share similar values, and holding thoughtful discussions about potential risks rather than imposing outright bans on research.


Audience questions further explored topics such as educating young scientists and collaborating on AI research projects, with panellists emphasising the importance of ongoing dialogue and education in the scientific community.


Conclusion


The panel concluded by emphasising the need for a balanced approach that incorporates scientific expertise, intelligence insights, and democratic oversight in making decisions about research security and international collaboration. They stressed the importance of ongoing dialogue and flexible policies that can adapt to the rapidly changing landscape of global scientific research.


Session Transcript

Michael Spence: Hi, I’m Michael Spence. I’m president and provost of University College London, and I’m joined today by Kimberly Budil, who is the Laboratory Director of the Lawrence Livermore National Laboratory in the U.S., by Jonathan Brennan-Badal, the Chief Executive of Opentrons Labworks in the USA, by Maria Leptin, the President of the European Research Council in Brussels, and by Teruo Fujii, the President of the University of Tokyo in Japan. This is a talk about research security, an issue that’s becoming increasingly important, not only to researchers in universities and research labs but also, as Jonathan will tell us, to people in business as well. So I suppose as we start the session, we might take it as axiomatic that international collaboration in science is a good thing, and I suppose it’s a good thing because we assume that human talent is reasonably evenly distributed throughout the human population, and that therefore the capacity to find the best people working on the most interesting problems, and to work with them, is really important for the advancement of science. But we also know it’s not only important for the advancement of science and the capacity to make sure that we have teams that are really able; it’s also important for impact. We know that scientific work that involves international collaborations is much more likely to be cited, six or seven times more likely, and therefore to have more impact in the advancement of knowledge. We know too that it’s important in creating cross-cultural understanding. Someone was talking in a session earlier this week about the way in which CERN helped to bring together the French and German research communities after the Second World War, and of course there’s some science, astrophysics for example, that you just can’t do without large international teams of one sort or another. Increasingly, governments are suspicious of international collaboration in research and are attempting to limit it in ways that have not been the case before. Before we explore both why that might be the case and when it’s helpful and when it’s not, John, I wonder if you could comment on the benefits of international collaboration from a business point of view. Is it just in research laboratories and universities that we’re so obsessed with international collaboration, or is it important for businesses as well?


Jonathan Brennan-Badal: It’s absolutely essential, and to your point that there’s excellent talent worldwide, in companies based in various countries around the world, I think Opentrons is a fantastic example of that. As an open-source robotics platform, we’re used in over 60 countries around the world and as a result have scientific collaborators that build on top of our platform in those 60-plus countries. If it weren’t for those types of collaborations, we would not be anywhere near as robust a platform. I think when people try to limit collaboration, or take a myopic approach that one can do everything oneself, you really constrain the possibilities of your business or scientific research.


Michael Spence: Businesses are increasingly doing a lot of high-risk research. We’ve been talking much this week, for example, about AI and the potential for good but also the potential harms. High-risk research, Kimberly, has traditionally been done in places like yours. Does it change the landscape that businesses are doing that kind of work?


Kimberly Budil: It does. It’s been a really interesting transition. Many areas of science that were historically the purview of government research labs have now moved smartly into the private sector. I would say the commercialization of space is one example, an area where governments typically dominated; that’s changed the landscape there very dramatically in terms of what’s possible. And similarly AI, where you have a very powerful technology that’s being developed in the private sector. And our goal is really to build new kinds of public-private partnerships, so you always have someone with that public-facing ethos participating in the research ecosystem. But it also means that the boundaries are no longer as clear. I run a multi-programmatic national security laboratory, where we do both highly classified research and very open science. So I have teams of people who’ve helped put five elements on the periodic table, as an example, and who also do national security work for our national defense. Participating in that open science community is important. We use it as a way to bring people into our laboratory, to test our skills, to advance our capabilities, to ensure we’re working with the best people in the world, and that we have access to the best ideas and the best technologies. Then we bring those to bear in the national security space. It’s hard to imagine how we could do good work in that environment without access to that open ecosystem. So now there’s a third pillar: private sector engagement on those research areas.


Michael Spence: So hang on, we’ve moved from a world where high-risk national security work happens behind closed doors in government laboratories to a world where high-risk work is done in all sorts of places. But you say there’s also a kind of symbiosis between open science and national security work, demonstrated by a place like yours.


Kimberly Budil: Yes, and I think, for me, what’s really important is to have people who understand both the good potential and the harmful potential of technologies engaging in that ecosystem. It’s very hard to know at the beginning of a technology pathway exactly where it will go over time. So making judgments up front to really rein in R&D or to restrict research in very stringent ways is difficult; you may miss many of the potential benefits. And I think the conversation around AI is a great example of this. How do you regulate a technology like that? We know that there are potential applications that could be very dangerous, but the potential benefits are extraordinary. So they need to coexist. And you want people involved in that ecosystem who are always keeping an eye on the public good and who understand the threat space deeply, so that they can raise flags when issues or concerns begin arising in the research ecosystem.


Michael Spence: But it’s tricky, isn’t it? Because dangerous and national security are not necessarily the same thing. Research on the nuclear arsenal, that’s one thing. But research on all sorts of things is potentially dangerous.


Kimberly Budil: Well, I guess I would use as an example what the research community did in the wake of the invention of gene editing tools like CRISPR, where the community realized the power of those tools. And they came together to have a discussion around the ethics of how they should be applied in research laboratories. So that was a great conversation, where the community realized that as a body, they had an obligation to think about the implications of their technology and to sort of self-manage that environment. It doesn’t stop there being people in the world who might use that for bad intent. But it means the broad swath of the research community is acting in a responsible manner. And I think the research community has an obligation to always have that forefront as they investigate these very powerful technologies.


Michael Spence: And you could argue that for some of those kinds of technologies, it’s really important that the conversation is precisely global, because we need to have global understandings of what sort of work is appropriate and what sort of work is not. But we’ve jumped pretty quickly to high-risk technologies and national security. There’s a suspicion, John, that some governments at the moment are trying to limit international research collaboration for other reasons, in particular to ensure their own economic competitiveness. Now, whether or not that’s a moral thing to do, is it a wise thing to do as public policy, from the point of view of a business?


Jonathan Brennan-Badal: From a business perspective, there’s a fundamental kind of self-policing when collaborating with another business or partner. You always have the assumption that the other partner could work in a way that’s to your disadvantage, because at the end of the day that partnership might not last forever and they might cut you out of the ecosystem. And so, as a result, companies are very thoughtful and circumspect, such that they’re entering into collaborations because they believe that they will create value and be able to capture some of that value. So, fundamentally, if there’s less collaboration occurring, particularly between businesses, that means both parties are worse off. And if you are taking the view that there should be less collaboration, what you really are saying is that everybody should be poorer, that there should be less scientific discovery. And that might be a trade-off that certain individuals or governments think is appropriate, but we should be really frank about that being the outcome.


Michael Spence: And how important is it to you as a business, in thinking about collaborating with people in a different country, how strong or weak the intellectual property system of that foreign country is?


Jonathan Brennan-Badal: Absolutely. We think of that as just a risk factor that ultimately needs to be mitigated. Most pieces of IP can be engineered around, and all IP really does is maybe slow down a competitor moderately. In very few cases does it truly block out someone who is really motivated to have a solution in that market. And so what I find is that the most successful businesses are those that are invariant to the IP landscape. As a forcing function in my business, we actually open-source our entire platform; we make it easy for competitors to potentially copy us. Despite that, no companies have been successful in replicating our platform, and we are the market leader in the areas that we serve, because instead we focus on innovation. By the time a competitor copies us, we’ve already come out with the next generation. And in a similar fashion, by focusing on collaboration, where you have thousands of applications being developed with thousands of research universities around the world, those are the types of things that are hard to replicate, versus discrete individual pieces of IP. So I really encourage companies and governments to think less about IP protection and more about how you drive your business model, or drive companies within your country, to focus on innovation, because that ultimately is more productive and more sustainable over the long term.


Michael Spence: So collaboration is a good thing. It actually helps our national security research to have open science that’s internationally competitive; it drives innovation; we probably shouldn’t limit it just for potential economic gain; and we should focus on innovation. But there remains a national security question. There’s always been a national security question in international collaboration. It’s just gotten more complicated because the landscape of research has changed as between governments and business, as between governments and private actors. It’s gotten more complicated because of the nature of the technologies that we’re developing. Governments are increasingly thinking about how you regulate international collaboration. Maria, some governments have thought about this through the lens of who you’re collaborating with: we’ll collaborate with some people, but we won’t collaborate with others. Others have thought about it primarily in relation to the nature of the technology. Is one or other of those approaches better, or do you have to balance both?


Maria Leptin: We certainly have an example not to follow, and that was the China Initiative in the US, which was fortunately recognised as being quite silly and having all those unintended consequences of cutting yourself off, of damaging research, of creating fear that is unnecessary. Scientists, researchers, the good ones, the ones who do push the frontiers of science, are by nature risk-takers. They’re also by nature competitive, so they want to win. So there is an element of not letting your potential collaborators, who are also competitors, know everything you’re doing. Now governments are not necessarily risk-takers. Governments work by consensus, and consensus doesn’t allow risk-taking. So we have a dichotomy there. Anyway, the ERC funds fundamental research, basic research, and there, as you said, we often don’t know ahead of time what will be dangerous. We also have to distinguish, of course, as has been said, between economic security and actual dual-use research. And in some fields, the fundamental research is immediately potentially exploitable for bad purposes: AI, of course, mathematics, any algorithms, anything; we don’t know what use it could be put to. And AI was developed completely openly over 50, 60, 70 years, it’s not new, and nobody even knew that they might want to restrict it, fortunately, because otherwise we wouldn’t be where we are. So what I guess I’m saying is that there is some self-restriction from the scientists themselves, and that’s a good thing. Scientists are citizens, so they themselves care about security and safety, which is a distinction we haven’t made here, well, you made it. Take the safety of genetic engineering: there will be rogue cases, like the designer baby in China, which led to an outcry. So there is self-regulation at play anyway. The best guidelines for stem cell research and human embryonic research come from the International Society for Stem Cell Research: guidelines formulated by scientists themselves. So I think, and that’s what I always say, and not everybody agrees, scientists are pretty good at self-regulation, and governments and bureaucrats are not necessarily the best placed to know where the risks are and where they’re hurting their own national interests by putting in walls.


Michael Spence: But that’s an issue in the current situation, isn’t it, Teruo? Because scientists understand the technology and understand the risks, but often the decisions about what limitations there might be are not being made by scientists; they’re being made by cautious civil servants of one kind or another. Civil servants are marvelous people, this is not a general slur, but by cautious civil servants who don’t want to lose their job, who don’t want to be the person who let the… Does that have a chilling effect on scientific collaboration, do you think? Where should the decisions be made?


Teruo Fujii: Right, and that’s why we are having this kind of forum to discuss it. First of all, as already discussed, this type of danger should be self-regulated by scientists, that’s for sure. Then, on the economic or business-interest side, in Europe for example there is the RRI type of principle, where even for innovation you need to think about the responsibility of the science or technology itself. But now national security, or economic security, has come in, so the question is how we can manage all these new interests coming into the scientific and technological world. My viewpoint is that we need to manage all these things together. It’s not a matter of having a threshold and cutting off everything below it. And, as you said, there is also interaction with civil servants, with administrative people in the government or in funding agencies. But we still need to discuss, to share the same interests altogether, and to manage all of these issues.


Michael Spence: Yeah, Maria.


Maria Leptin: So, like I say, the ERC funds exclusively excellence-driven research, curiosity-driven research with no strings attached. Nevertheless, 40% of that research is actually cited in patents, so it generates IP. If that work were regulated at the start, by saying it must not involve international collaboration, and I’m sure most of it does, because most fundamental research does, it wouldn’t get done; that IP would not be generated. So the EU regulates at the point of saying: if that is shared specifically for economic gain with countries outside, then it needs to be looked at. It’s still not ideal, but it can be done at a very late stage.


Michael Spence: So we’ve had a strain in this conversation about scientific self-regulation: that scientists understand the technology, scientists understand the risk, so there ought to be scientific self-regulation. But of course there’s this whole other narrative in the security community: scientists might be great at their science, but they’re kind of boffins who don’t understand the deep risks, and “if only you could see the things that I’ve seen behind the wall”, everybody says…


Maria Leptin: Not true. We’ve heard it with Asilomar, we’ve heard it with the stem cells, with AI. It’s from the AI community that the moratoriums and so on have come.


Michael Spence: Kimberly, you look skeptical?


Kimberly Budil: I’m not skeptical, but I think it is different for researchers in my institution, because they reside on both sides of that boundary. They are deeply immersed in the threat space and in understanding the implications of technology, what’s happening around the world, the way technologies are being used and in some cases weaponized in ways that are relatively new. Think about what conflict looks like today: the way conflicts start today is in the cyber realm, not in the kinetic realm, so the tools and instruments of conflict are very different. We live in that world, and we take a view of any new research area through that lens. I’ve been in this business a very long time; that’s how I think about the world. So there are going to be places where we understand that a technology has an application or an implication that may not make it classified but will shape who we want to work with. As an example, one of the strong themes in the US has been research partnerships with allies and partners in areas where there is concern about the technology: really trying, again, to garner the benefits of the international collaborative environment, working with the best researchers, but having some focus on which groups we choose to work with, to ensure that we share the same values, we share the same norms, we understand the implications of the technology, and we can have the kind of productive research partnership you need to…


Michael Spence: Let me take you back to a couple of basic points, the first one about the nature of what goes on in my institution. When we do international collaboration, it leads to open-source publication. It always leads to open-source publication; we don’t do the research unless it leads to open-source publication. So the bad guys, whoever they might be, only have to wait six months to read about it in Nature. So what’s the issue? When gunpowder was invented, it wasn’t that only one country had gunpowder; it was that we all then spent hundreds of years trying to build a better gun.


Kimberly Budil: So the point was made, I’m sorry, speed and innovation are always the edge. It’s true in any system of control of information. No system is perfect and information invariably moves across those boundaries over time and there are smart people all around the world and there’s no barrier to people thinking along the same lines that we thought along five years ago, 10 years ago, 20 years ago, 30 years ago. That applies in every field of national security research. But that still means that that barrier is valuable. Again, it’s not an attempt to stop research. We do a huge amount of publishing from our research because that’s really important. That is how the scientific community works. But how we publish, what we publish, what areas we work in, who we work with, which reside in the open, which don’t reside in the open, is a different question today than it might have been 10 years ago.


Michael Spence: But isn’t that lead time getting shorter in any case, and therefore isn’t it a fool’s errand to try and take advantage of that speed?


Kimberly Budil: I think we still have to try. I think the pace of research progress is changing in very fundamental ways and speeding up.


Michael Spence: And what about areas where we’re assuming that our technology is better than whoever the bad guys might be at these technologies, when in lots of areas it’s just not? We don’t collaborate internationally as a charity; we collaborate internationally because the people with whom we collaborate are doing fabulous work.


Kimberly Budil: Absolutely, so do we. But I just think you have to acknowledge, and we can’t be naive about this, that there are better and worse ways to manage these emerging technologies, and we have to be thoughtful about that process. Again, I’m not a big fan of saying don’t do it because there might be a bad outcome. That’s always the case with any new technology. But having a much more thoughtful discussion up front is, I think, a useful tool. And this is a message to the public sector that our usual sort of slow-moving, deliberative, bureaucratic processes aren’t fit for purpose in this environment. We do need to move faster.


Michael Spence: Yeah, that’s true. John, you’ve been trying to come in on that.


Jonathan Brennan-Badal: I want to highlight, first, that for a lot of these concerns, companies have a significant responsibility, not just governments or scientists, to ensure that their products are used appropriately. I want to give one recent example with our platform. We have a robotics platform, and one of our customers, a lab at Carnegie Mellon, built an AI model using all publicly available resources, things like ChatGPT, publicly available data sets, access to the internet, and used that to enable our robot to synthesize a very wide range of chemical compounds. All you need to do is provide the prompt, I want to make X, and this system will iteratively figure out with our robot how to do that. Turns out that you can also ask it to make mustard gas, and it will try to do that as well. That project was public in nature, so certainly Opentrons as a company and Carnegie Mellon were very thoughtful to ensure that certain restrictions were put in place, and I think a company like OpenAI or others have similar responsibilities. What I think this also highlights is that something like mustard gas is not cutting-edge technology by any stretch of the imagination. But what is different, and potentially scary, about this example is that it’s much easier now to create harm with it. In this example, you don’t need to have a chemistry background; all you need to be able to do is write a sentence asking a system to make mustard gas, and it will tell you how. And if you have access to a certain robotics tool and certain reagents, you can manufacture that. So I think it’s extremely important, particularly in very fast-changing industries like AI, where people haven’t quite figured out how to put the right restrictions in place. I think we’ll find that there are many more security risks, not just at the cutting edge, but in less highly dangerous things that are now made more accessible. And that’s something I really want to bring attention to.


Michael Spence: So we’ve moved from a world where governments did a particular kind of research in the desert, and everybody had a security clearance, and everybody else could more or less do what they wanted, to a world where the boundaries are much more porous. So the traditional security maxim of small yard, high fences doesn’t really seem to be as helpful as it once was. A way in which the yard has been growing is through the concept of dual use. Is there any scientific technology or any scientific knowledge that cannot be used for harm? I mean, isn’t that…


Jonathan Brennan-Badal: I mean, everything can be used. I mean, a pencil can be used for harm, right?


Michael Spence: Isn’t that the point of the tree of the knowledge of good and evil? That almost any technology can be used for harm. And sure, at one end there are obvious military applications. But some of the current conversations are about things like understanding the fundamentals of biology. And yes, that could lead to developments in biological weapons, but it’s not intrinsically harmful, is it, Teruo?


Teruo Fujii: Yeah, my point is that, whatever the interests we need to consider, this is a matter of how to avoid, or how to regulate, the unintended disclosure of information. That could be in the form of publication, but also, maybe, at the level of intellectual property. We are all basically embracing open access to the knowledge that we are creating. But at the same time, there can be a business interest, for example, in avoiding this unintended disclosure in the first round: if you want to realize some machine or system for the sake of your business, then you will need some time to keep it secret, for example. And in the same way, it is also a matter of economic security or national security that we need to respect such interests. That is part of the problem that we have now.


Michael Spence: So the boundaries are getting much more blurry, and somebody has to draw them. So far we’ve had, sort of, scientists, civil servants and spooks. Maria suggested earlier that the China Initiative was not a good way of doing it. Is there any system in the world that is currently doing it well? Because of course, on the one hand, you have the real potential for the chilling of open science, which is bad for scientific progress. On the other, you do, as we’ve identified, have real risks. So the real question is who draws the boundary and how, and how do you bring together scientific knowledge and, as it were, broader intelligence? And is there any political community that you think is doing that particularly well? I’m sure you can’t talk about that, Kimberly, but can anybody else?


Teruo Fujii: I can raise one example. In the frame of the G7, we have this special interest group on research integrity and security, and there we discuss how we can share good practice amongst the G7 countries. In this way, with potential international partners, we share the good practices we have, and that helps a lot with, for example, partnering with institutions outside the country. That is why these kinds of places where we can discuss this issue are so important.


Michael Spence: Maria.


Maria Leptin: I think the question, are there good systems, is too general, because it’s not clear: good systems for what? There is a difference. I mean, there’s this horrible concept of technological readiness levels, and of course I hate the term, but it is in a way useful. Because, you know, yes, a pencil can be used as a weapon, and I never understand the rules for what you are allowed to take onto airplanes and whatnot. Anyway, I don’t think there is a general answer. What we do have is the knowledge that it’s bloody difficult, and that in the democratic process we have to involve citizens. And there are areas where it’s not clear, where there isn’t a good answer; abortion is probably the best-known, and there you constantly reassess. I think that kind of sensitivity, constantly reassessing, is necessary. I’m not sure the principle has changed. As you say, gunpowder, same thing. That’s an old problem. It’s just faster now.


Michael Spence: And what about country neutrality? Because I suppose there are two issues here, aren’t there? One is the chilling effect on international collaboration in science. The other is the repeated public narrative that the same guys are always the bad guys, even though we also want to collaborate with them in one way or another. Some countries have adopted a position of country neutrality to try and avoid that second political dilemma, but of course it then places greater burdens on the system, because you have to get approval to do anything with anybody. How do we avoid that cycle?


Kimberly Budil: I think we’re conflating a number of things that are important to disaggregate. First of all, a pencil is a lot less dangerous than mustard gas, so how you think about where you should draw boundaries really does depend on the scale and the implications of the technology you’re talking about, and I think that is important. I think the research community needs to be much more thoughtful about that, and I think many of the efforts in the U.S. have been about raising awareness in the research community about both integrity and security. Research integrity was a big part of the conversation in the U.S.: there were clearly conflicts of interest and conflicts of commitment that were unearthed, and researchers should not have been operating in those ways. That’s one thing to think of. We also don’t really have, in many areas, a shared set of international norms and frameworks that every country has signed up to. So I think it’s fair to say that in certain areas we will work with some countries and we will not work with other countries, because they don’t have the right kind of IP protections in place or they don’t respect the same ethical boundaries we do for the research. Again, it’s not a statement that they don’t have great researchers in that environment, but we have an obligation to be thoughtful about how we pursue some of these technologies.


Michael Spence: But John says that from a business point of view, innovation is more important than IP. And those conversations about dangerous technologies, surely they’re also internal conversations. I’m just as frightened about what the Californian tech companies might do with AI as I am about what anybody else might do with AI. Ought I not to be?


Kimberly Budil: So this is my personal feeling about the AI situation. I don’t think that the tech companies are seeking to do evil, but the likelihood that that could be one of the potential outcomes is real. And so having the public sector engaged with the private sector is important, not to slow their research, but to be involved in the ecosystem, to understand where the technology is going, and to be able to raise flags when areas of concern arise. That’s not saying stop. That’s not even saying that when a flag is raised there is a clear and present danger from the technology, just that awareness needs to be raised. And while I agree that innovation is better, I think we have IP controls for a reason, right? There’s no reason to believe that patent protections and other things are not a good tool to use, and that countries should not try to use those protections for economic advantage. We do have national interests; each country has its own national interests. And these are systems that have been in place for a long time to allow people to balance them. In our case, we’re a taxpayer-funded organization; we spend taxpayer money, and we have an obligation to try to bring the fruits of that research to the benefit of the citizens of the United States. That’s not to say those technologies won’t benefit the world and won’t be internationally promulgated. The most striking example for me is that research at our lab, and at other laboratories in the U.S., led to the technology that’s today commercialized as extreme ultraviolet lithography, which is licensed by a Dutch company, right? So the IP moves around the world over time, but that doesn’t mean we didn’t put IP protections and licensing opportunities in place when the technology was brought forward.


Michael Spence: So I have the impossible task, as I’m about to open for a couple of questions: I was told I needed to have some practical outcomes from this, and I have one, which I think is a remarkable achievement. The practical outcome for me is this. We all admit that there are difficult questions of judgment here. Difficult because the risks are increasing, because of the number of actors involved in high-risk technologies, because the speed of technological innovation is a strategic advantage for a country, and because many of the technologies we are developing have high-risk applications of one kind or another and are developing at a speed at which we don’t have the ability to determine exactly where they might lead. And therefore it’s appropriate that there be some limitation on international scientific collaboration. But the question is, who makes the decision? And I think the really interesting thing that has come from this conversation is precisely the balance of scientific expertise and intelligence. As I look around the systems I know best, the current balance of power in that decision-making process is not with scientific expertise. And I think that’s where there is a practical take-out for governments. How do you give equal weight to the self-regulating expertise of the scientific community and to the perfectly legitimate risk-balancing role of democratically elected leaders and the civil servants who serve them? That’s not a bad litmus test to take to the different systems and ask: is there enough science in here? So there’s my practical take-out. I think we have time for two questions, and I’m sorry for having let it run on. Yes.


Audience: Thank you so much for the really interesting panel. I have a question that might be looking at a smaller scale but has larger ripple effects for the questions and discussion that we’re having today. I’m a bit of a younger scientist; I’m starting off with my PhD. My question is, with young scientists who will eventually be moving up and making these decisions and having these conversations and these larger impacts, how can these thoughtful and ethical practices be taught to them or discussed with them, so that eventually, when they do come to these larger decisions, they know how to approach them and how to collaborate with others?


Michael Spence: Great question. Teruo, how do you do it at Tokyo?


Teruo Fujii: Right. We are now discussing national security and so on, but much before that, young scientists need to know about all these responsibilities as scientists. The science itself, and the use of scientific knowledge, are things we need to be responsible for, right? That is the first, very basic ground. Then, at the same time, if you are, for example, doing research work that will be of some commercial interest, you also need to be aware of the importance of handling intellectual property and all these commercial interests. On top of that, or not really on top of it, we now share a different scope as well, the viewpoint of national security and those interests. In any case, as scientists we need to be aware of the importance of what we are doing and of our responsibility for the knowledge that we are creating.


Michael Spence: Maria, how do we do it in Europe?


Maria Leptin: Well, I think this was exactly the right response, because if I look at my own lab and people going to conferences, they would come to me and say: Maria, do you think I can talk about these latest results? Are we going to get scooped? And yes, of course, that’s a risk. But if you don’t go to the conference, if you don’t present your new data, you’re not going to get feedback. So automatically, if you go into any of the careers of the people sitting up here, you’ll be facing, in this microcosm, what the world has to face. So I think experience tells you. And you see, none of us has a clear answer; there is no clear answer. It really is a case-by-case, difficult decision every single time. And I have told them: don’t talk to that person, they have screwed us over in the past; talk to everyone else.


Michael Spence: Which is, in microcosm, a national question at large. One more question. Yes, the man here with the stripy shirt. Oh, sorry.


Audience: Hi, I’m Dana, AI strategy at Doctolib, which is one of the largest healthcare platforms in Europe. I was wondering, because the deployment of state-of-the-art technologies goes faster and faster in AI, we are super interested to work with researchers. But we are wondering what kind of research projects we should keep internal, and on what kind of research projects we can collaborate with external partners. So, in your opinion, would you have some recommendations, especially in AI research, on what kinds of projects we could collaborate on with external partners?


Michael Spence: So you’d like to respond to that?


Jonathan Brennan-Badal: Since we’re at time, I’ll be really quick. I think the number one thing is just to talk to other people. Talk to other practitioners and really press them on: what would be a way you could misuse this in your context? That happens far too little, and it can catch a significant majority of risks. And make sure that your organization really cares about that and is willing to delay a release to address some of those issues. That can be really, really hard if you’re worried about getting scooped or trying to hit some quarterly results. But you have to be a responsible member of humanity. So that would be my main encouragement.


Michael Spence: So we have reached time. I thought that I would end on a lighter note. These are important questions, and they need scientific input, and they need intelligence input. I have to say, I’m a little more skeptical about the value of stopping international collaboration for economic advantage, and intellectual property is my field. Two fun facts. The modern patent system arguably stems from the Statute of Monopolies, although there are arguments about earlier Venetian statutes, and it was a way for the English to steal inventions from the French. And of course, for the whole of the 19th century, the United States did not protect any foreign inventors or authors, because it said it was a developing country and couldn’t afford to. So national security is important; economic competition, I’m a whole lot less sure about. But thank you very much for the panel. I was asked to produce a spicy conversation, and I think we certainly got that, but one, too, that enriched my thinking about this area. So I hope people have found it helpful. Thank you. Thank you.



Michael Spence

Speech speed: 131 words per minute

Speech length: 2203 words

Speech time: 1001 seconds

International collaboration advances science and increases impact

Explanation

Michael Spence argues that international collaboration in science is beneficial. It allows finding the best people to work on interesting problems and leads to more impactful scientific work.


Evidence

Scientific work involving international collaborations is six or seven times more likely to be cited.


Major Discussion Point

Benefits and Challenges of International Scientific Collaboration


Agreed with

– Jonathan Brennan-Badal
– Kimberly Budil
– Maria Leptin

Agreed on

International collaboration is beneficial for scientific progress


Scientific expertise should have more weight in decision-making

Explanation

Michael Spence argues that scientific expertise should have more weight in decision-making processes regarding research collaboration and security. He suggests that the current balance of power in decision-making does not sufficiently include scientific expertise.


Major Discussion Point

Regulating International Research Collaboration



Jonathan Brennan-Badal

Speech speed: 138 words per minute

Speech length: 1073 words

Speech time: 464 seconds

Open source collaboration drives innovation in business

Explanation

Jonathan Brennan-Badal emphasizes that open source collaboration is essential for business innovation. He argues that limiting collaboration constrains the possibilities for both business and scientific research.


Evidence

Opentrons’ open source robotics platform is used in over 60 countries, leading to robust scientific collaborations.


Major Discussion Point

Benefits and Challenges of International Scientific Collaboration


Agreed with

– Michael Spence
– Kimberly Budil
– Maria Leptin

Agreed on

International collaboration is beneficial for scientific progress


Differed with

– Kimberly Budil

Differed on

Role of IP protections in international collaboration


Consumer tech companies have responsibility to ensure appropriate use

Explanation

Jonathan Brennan-Badal argues that companies have a significant responsibility to ensure their products are used appropriately. This is particularly important in fast-changing industries like AI where proper restrictions may not be in place yet.


Evidence

Example of a Carnegie Mellon lab using Opentrons’ platform to create an AI model capable of synthesizing chemical compounds, including potentially harmful ones like mustard gas.


Major Discussion Point

Balancing Open Science and National Security


Discussing potential misuse with other practitioners is important

Explanation

Jonathan Brennan-Badal suggests that discussing potential misuse of technology with other practitioners is crucial. This approach can help catch a significant majority of risks and ensure responsible development.


Major Discussion Point

Educating Young Scientists on Research Ethics and Security



Kimberly Budil

Speech speed: 173 words per minute

Speech length: 1688 words

Speech time: 582 seconds

Collaboration with allies helps balance openness and security concerns

Explanation

Kimberly Budil argues that collaborating with allies and partners in research areas of concern can help balance the benefits of international collaboration with security considerations. This approach ensures shared values and norms in research partnerships.


Major Discussion Point

Benefits and Challenges of International Scientific Collaboration


Agreed with

– Maria Leptin
– Teruo Fujii

Agreed on

Need for balanced approach to research security and openness


Open science ecosystem important for national security work

Explanation

Kimberly Budil emphasizes the importance of the open science ecosystem for national security work. She argues that participating in open science is crucial for advancing capabilities and ensuring access to the best ideas and technologies.


Evidence

Example of teams at Lawrence Livermore National Laboratory working on both classified research and open science projects like adding elements to the periodic table.


Major Discussion Point

Balancing Open Science and National Security


Agreed with

– Michael Spence
– Jonathan Brennan-Badal
– Maria Leptin

Agreed on

International collaboration is beneficial for scientific progress


Speed of innovation is key advantage, but some restrictions still valuable

Explanation

Kimberly Budil argues that while speed and innovation are crucial advantages, some restrictions on information sharing are still valuable. She emphasizes the need for thoughtful management of emerging technologies.


Major Discussion Point

Balancing Open Science and National Security


Public sector engagement with private sector on emerging tech is important

Explanation

Kimberly Budil stresses the importance of public sector engagement with the private sector on emerging technologies. This involvement allows for understanding where the technology is going and raising flags when areas of concern arise.


Evidence

Discussion of AI development in the private sector and the need for public sector involvement.


Major Discussion Point

Balancing Open Science and National Security


IP protections serve a purpose for national interests

Explanation

Kimberly Budil argues that intellectual property protections serve a purpose for national interests. She emphasizes the obligation to bring the fruits of taxpayer-funded research to the benefit of citizens.


Evidence

Example of extreme ultraviolet lithography technology developed at Lawrence Livermore National Laboratory and later licensed by a Dutch company.


Major Discussion Point

Balancing Open Science and National Security


Differed with

– Jonathan Brennan-Badal

Differed on

Role of IP protections in international collaboration


Need thoughtful discussion on managing emerging technologies

Explanation

Kimberly Budil emphasizes the need for thoughtful discussion on managing emerging technologies. She argues that while not stopping research, it’s important to be aware of potential implications and raise flags when concerns arise.


Major Discussion Point

Regulating International Research Collaboration


Differed with

– Maria Leptin

Differed on

Effectiveness of self-regulation by scientists



Maria Leptin

Speech speed: 130 words per minute

Speech length: 839 words

Speech time: 385 seconds

Self-regulation by scientists is effective for managing risks

Explanation

Maria Leptin argues that scientists are generally good at self-regulation and understanding the risks associated with their work. She suggests that scientists, as citizens, care about security and safety.


Evidence

Examples of self-regulation in stem cell research and human embryonic research through guidelines formulated by the International Society for Stem Cell Research.


Major Discussion Point

Benefits and Challenges of International Scientific Collaboration


Agreed with

– Michael Spence
– Jonathan Brennan-Badal
– Kimberly Budil

Agreed on

International collaboration is beneficial for scientific progress


Differed with

– Kimberly Budil

Differed on

Effectiveness of self-regulation by scientists


Country-specific restrictions can be counterproductive

Explanation

Maria Leptin argues that country-specific restrictions on scientific collaboration can be counterproductive. She cites the China Initiative in the US as an example of a policy that was recognized as harmful to research and creating unnecessary fear.


Evidence

The China Initiative in the US, which was later recognized as problematic.


Major Discussion Point

Regulating International Research Collaboration


Constant reassessment of policies is necessary

Explanation

Maria Leptin emphasizes the need for constant reassessment of policies regarding research security and collaboration. She argues that openness to reassessment is necessary given the complex and evolving nature of these issues.


Major Discussion Point

Regulating International Research Collaboration


Agreed with

– Kimberly Budil
– Teruo Fujii

Agreed on

Need for balanced approach to research security and openness


Practical experience helps develop judgment on sharing research

Explanation

Maria Leptin suggests that practical experience in research helps scientists develop judgment on sharing their work. She argues that facing these decisions in a microcosm prepares researchers for larger-scale considerations.


Evidence

Example of researchers deciding whether to present new results at conferences, balancing the risk of being scooped against the benefits of feedback.


Major Discussion Point

Educating Young Scientists on Research Ethics and Security


Teruo Fujii

Speech speed

135 words per minute

Speech length

733 words

Speech time

325 seconds

Need to manage multiple interests in scientific collaboration

Explanation

Teruo Fujii argues for the need to manage multiple interests in scientific collaboration. He emphasizes the importance of balancing scientific, economic, and security interests in research partnerships.


Major Discussion Point

Benefits and Challenges of International Scientific Collaboration


Agreed with

– Kimberly Budil
– Maria Leptin

Agreed on

Need for balanced approach to research security and openness


Sharing best practices among partner countries is helpful

Explanation

Teruo Fujii suggests that sharing best practices among partner countries can help address research integrity and security concerns. He argues that this approach can improve international research partnerships.


Evidence

Example of the G7 special interest group on research integrity and security, which discusses sharing good practices among member countries.


Major Discussion Point

Regulating International Research Collaboration


Teaching responsibility and ethics to young scientists is crucial

Explanation

Teruo Fujii emphasizes the importance of teaching responsibility and ethics to young scientists. He argues that understanding the implications of scientific work and the responsibility that comes with it is fundamental.


Major Discussion Point

Educating Young Scientists on Research Ethics and Security


Agreements

Agreement Points

International collaboration is beneficial for scientific progress

speakers

– Michael Spence
– Jonathan Brennan-Badal
– Kimberly Budil
– Maria Leptin

arguments

International collaboration advances science and increases impact


Open source collaboration drives innovation in business


Open science ecosystem important for national security work


Self-regulation by scientists is effective for managing risks


summary

The speakers agree that international collaboration in science and research is crucial for advancing knowledge, driving innovation, and increasing impact. They emphasize the importance of open science and collaboration, even in sensitive areas like national security.


Need for balanced approach to research security and openness

speakers

– Kimberly Budil
– Maria Leptin
– Teruo Fujii

arguments

Collaboration with allies helps balance openness and security concerns


Constant reassessment of policies is necessary


Need to manage multiple interests in scientific collaboration


summary

The speakers agree on the need for a balanced approach to research security and openness. They emphasize the importance of managing multiple interests, collaborating with trusted partners, and constantly reassessing policies to address evolving challenges.


Similar Viewpoints

Both speakers emphasize the importance of responsible development and use of emerging technologies, highlighting the need for collaboration between the private and public sectors to address potential risks and ensure appropriate use.

speakers

– Jonathan Brennan-Badal
– Kimberly Budil

arguments

Consumer tech companies have responsibility to ensure appropriate use


Public sector engagement with private sector on emerging tech is important


Both speakers stress the importance of educating and preparing young scientists to make ethical decisions about research sharing and collaboration, emphasizing the role of practical experience and formal education in developing these skills.

speakers

– Maria Leptin
– Teruo Fujii

arguments

Practical experience helps develop judgment on sharing research


Teaching responsibility and ethics to young scientists is crucial


Unexpected Consensus

Importance of self-regulation in scientific community

speakers

– Maria Leptin
– Kimberly Budil

arguments

Self-regulation by scientists is effective for managing risks


Need thoughtful discussion on managing emerging technologies


explanation

Despite coming from different perspectives (European Research Council and a national security laboratory), both speakers agree on the importance of self-regulation and thoughtful discussion within the scientific community to manage risks associated with emerging technologies. This consensus is unexpected given the potential tension between open science and national security concerns.


Overall Assessment

Summary

The speakers generally agree on the importance of international scientific collaboration, the need for a balanced approach to research security and openness, and the significance of responsible development and use of emerging technologies. There is also consensus on the importance of educating young scientists about research ethics and security.


Consensus level

The level of consensus among the speakers is moderately high, with agreement on fundamental principles but some differences in emphasis and approach. This consensus suggests that there is a shared understanding of the challenges and opportunities in international scientific collaboration, which could facilitate the development of effective policies and practices to balance openness and security in research.


Differences

Different Viewpoints

Role of IP protections in international collaboration

speakers

– Jonathan Brennan-Badal
– Kimberly Budil

arguments

Open source collaboration drives innovation in business


IP protections serve a purpose for national interests


summary

Jonathan Brennan-Badal emphasizes the importance of open source collaboration and innovation over IP protection, while Kimberly Budil argues that IP protections are valuable for national interests and bringing benefits to citizens.


Effectiveness of self-regulation by scientists

speakers

– Maria Leptin
– Kimberly Budil

arguments

Self-regulation by scientists is effective for managing risks


Need thoughtful discussion on managing emerging technologies


summary

Maria Leptin argues that scientists are generally good at self-regulation, while Kimberly Budil emphasizes the need for more thoughtful discussion and management of emerging technologies, implying that self-regulation alone may not be sufficient.


Unexpected Differences

Importance of IP protection for businesses

speakers

– Jonathan Brennan-Badal
– Kimberly Budil

arguments

Open source collaboration drives innovation in business


IP protections serve a purpose for national interests


explanation

It’s unexpected to see a business representative (Brennan-Badal) arguing for less emphasis on IP protection, while a government lab representative (Budil) advocates for stronger IP protections. This reversal of expected positions highlights the complexity of the issue in the modern research landscape.


Overall Assessment

summary

The main areas of disagreement revolve around the balance between open collaboration and security concerns, the effectiveness of self-regulation versus government oversight, and the role of IP protections in fostering innovation and protecting national interests.


difference_level

The level of disagreement among the speakers is moderate. While there are clear differences in perspectives, particularly regarding IP protection and the extent of necessary regulation, there is also a shared recognition of the importance of international collaboration and the need to balance openness with security concerns. These differences reflect the complex nature of managing international scientific collaboration in an era of rapid technological advancement and heightened security concerns. The implications of these disagreements suggest that finding a universally accepted approach to regulating international research collaboration will be challenging and may require ongoing dialogue and flexible policies that can adapt to evolving circumstances.


Partial Agreements

Both speakers agree on the importance of open science, but differ on the extent to which it should be regulated. Budil emphasizes the need for some restrictions and thoughtful management, while Leptin leans more towards self-regulation by scientists.

speakers

– Kimberly Budil
– Maria Leptin

arguments

Open science ecosystem important for national security work


Self-regulation by scientists is effective for managing risks



Takeaways

Key Takeaways

International scientific collaboration is valuable for advancing science and innovation, but poses challenges for national security and economic competitiveness


The landscape of high-risk research has changed, with more actors involved and faster technological progress, complicating regulation


There’s a need to balance open science with security concerns, but overly restrictive policies can hinder innovation


Scientists and the research community play an important role in self-regulation and ethical considerations


Decision-making on research security should involve both scientific expertise and broader intelligence/security perspectives


Resolutions and Action Items

Governments should reconsider the balance of power in decision-making processes to give more weight to scientific expertise alongside security considerations


Unresolved Issues

How to effectively regulate emerging technologies like AI without stifling innovation


Where exactly to draw boundaries between open and restricted research


How to determine which countries to collaborate with on sensitive research topics


How to manage dual-use technologies that have both beneficial and potentially harmful applications


Suggested Compromises

Focusing on innovation rather than IP protection as a business strategy


Collaborating primarily with allies and partners that share similar values and norms


Implementing thoughtful discussions and awareness-raising about potential risks, rather than outright bans on research


Balancing public sector engagement with private sector innovation in emerging technologies


Thought Provoking Comments

Many areas of science that were historically the purview of government research labs have now moved smartly into the private sector. I would say the commercialization of space is one example, an area where governments typically dominated. That's changed the landscape there very dramatically in what's possible. And similarly, AI, where you have a very powerful technology that's being developed in the private sector.

speaker

Kimberly Budil


reason

This comment highlights a fundamental shift in how high-risk research is being conducted, moving from government labs to the private sector. It introduces complexity to the discussion by pointing out that the landscape of research and innovation has changed dramatically.


impact

This comment shifted the conversation to focus on the changing dynamics between government, academia, and private industry in conducting high-risk research. It led to further discussion about the challenges of regulating and overseeing such research when it’s no longer confined to government labs.


From a business perspective, there's a fundamental kind of self-policing when collaborating with another business or partner. You always have the assumption that the other partner could work in a way that's to your disadvantage, because at the end of the day that partnership might not last forever and they might cut you out of the ecosystem.

speaker

Jonathan Brennan-Badal


reason

This comment provides insight into how businesses approach collaboration and risk management, introducing a perspective that hadn’t been considered in the discussion up to that point.


impact

It broadened the conversation beyond just scientific and governmental concerns to include business considerations. This led to further discussion about the role of intellectual property and how businesses balance collaboration with protecting their interests.


Scientists are pretty good at self-regulation, and governments and bureaucrats are… not necessarily the best to know where the risks are and where they’re hurting their own national interests by putting in walls.

speaker

Maria Leptin


reason

This comment challenges the assumption that government regulation is always necessary or effective in managing scientific risks. It introduces the idea that scientists themselves might be better equipped to regulate their work.


impact

This comment sparked a debate about the balance between scientific self-regulation and government oversight. It led to further discussion about how to incorporate scientific expertise into policy decisions about research security.


One of our customers, a lab at Carnegie Mellon, built an AI model using all publicly available resources (things like ChatGPT, publicly available data sets, and access to the internet) and used that to enable our robot to synthesize a very wide range of chemical compounds. All you need to do is provide the prompt 'I want to make X', and this system will iteratively figure out with our robot how to do that. Turns out that you can also ask it to make mustard gas, and it will try and do that as well.

speaker

Jonathan Brennan-Badal


reason

This comment provides a concrete and alarming example of how AI and robotics can be used to create dangerous substances, highlighting the dual-use nature of many technologies.


impact

This comment shifted the conversation to focus more specifically on the risks associated with AI and other emerging technologies. It led to a discussion about the responsibilities of companies in ensuring their technologies are not misused.


I think we're conflating a number of things that are important to disaggregate. First of all, a pencil is a lot less dangerous than mustard gas, so how you think about where you should draw boundaries really does depend on the scale and the implications of the technology you're talking about, and I think that is important.

speaker

Kimberly Budil


reason

This comment brings nuance to the discussion about regulating technology, pointing out that not all technologies pose equal risks and that regulation should be proportional to the potential harm.


impact

This comment helped to refine the discussion about technology regulation, moving it away from all-or-nothing approaches and towards more nuanced, risk-based considerations.


Overall Assessment

These key comments shaped the discussion by broadening its scope from purely scientific considerations to include business, policy, and ethical dimensions. They highlighted the complexity of managing research security in a world where high-risk research is increasingly conducted in the private sector and where emerging technologies like AI pose new and unpredictable risks. The comments also sparked debate about the appropriate balance between scientific self-regulation and government oversight, and emphasized the need for nuanced, risk-based approaches to technology regulation. Overall, these comments deepened the conversation and led to a more comprehensive exploration of the challenges surrounding international scientific collaboration and research security.


Follow-up Questions

How can we balance scientific expertise and intelligence/security concerns in decision-making about international scientific collaboration?

speaker

Michael Spence


explanation

This was identified as a key practical outcome of the discussion, highlighting the need for a more balanced approach in determining limitations on international collaboration.


How can thoughtful and ethical practices be taught to young scientists to prepare them for making decisions about research security and collaboration?

speaker

Audience member (PhD student)


explanation

This question addresses the need to educate the next generation of scientists on the complex issues surrounding research security and international collaboration.


In AI research, what kind of projects can be safely collaborated on with external partners versus kept internal?

speaker

Audience member (Dana from Dr. Liban)


explanation

This question reflects the growing concern about balancing innovation and security in rapidly advancing fields like AI, particularly in sensitive sectors like healthcare.


How can we develop shared international norms and frameworks for research integrity and security?

speaker

Kimberly Budil


explanation

This was suggested as an important area for further development to guide international collaborations and ensure consistent ethical standards across countries.


How can we improve the speed of bureaucratic processes in the public sector to better keep pace with rapid technological advancements?

speaker

Kimberly Budil


explanation

This was identified as a necessary area for improvement to ensure effective oversight and regulation of emerging technologies.


How can businesses and researchers better anticipate and mitigate potential misuses of their technologies?

speaker

Jonathan Brennan-Badal


explanation

This was suggested as an important area for ongoing consideration, particularly in light of the example given about AI-assisted chemical synthesis.


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.