The Purpose of Science / DAVOS 2025
23 Jan 2025 08:00h - 08:45h
Session at a Glance
Summary
This panel discussion at Davos 2025 explored the purpose of science and the impact of AI on scientific research and society. The panelists, including experts in biology, epidemiology, AI, and physics, agreed that the fundamental purpose of science is to satisfy human curiosity and understand the world around us. They discussed how AI is accelerating scientific discoveries across various fields, from protein folding to mathematical proofs.
The conversation highlighted the potential of AI to solve complex problems in areas like energy, climate change, and medicine. Panelists emphasized the importance of developing AI as a tool that humans can control, rather than as a replacement for human intelligence. They addressed concerns about job displacement due to AI, with most expressing optimism that humans will adapt and continue to play crucial roles in scientific endeavors.
The discussion touched on the balance between academic and corporate research in AI development, noting successful collaborations between universities and companies. Panelists stressed the need for safety standards and ethical considerations in AI development, drawing parallels to regulations in other industries.
Looking to the future, the panel envisioned a world where AI augments human capabilities in science rather than replacing scientists entirely. They expressed hope that by 2035, AI tools would be highly advanced but still under human control, enabling faster progress in addressing global challenges while maintaining human agency in scientific pursuits.
Overall, the discussion painted an optimistic picture of AI’s role in advancing science, while acknowledging the need for careful management and ethical considerations as the technology continues to evolve.
Key points
Major discussion points:
– The purpose of science is to satisfy human curiosity and understand how the world works
– AI is accelerating scientific discovery in fields like protein folding, drug development, and materials science
– There are both benefits and potential risks to AI research being done increasingly by private companies rather than academia
– AI tools could dramatically enhance scientists’ capabilities, but also raise questions about human control and job displacement
– Experts are generally optimistic about AI’s potential to solve major challenges, while emphasizing the need for safety standards
Overall purpose:
The goal of this discussion was to explore how AI is influencing and will continue to impact the future of scientific research and the role of human scientists.
Tone:
The tone was largely optimistic and excited about AI’s potential to accelerate scientific progress. Speakers emphasized the opportunities while acknowledging some risks. There was a consistent tone of scientific curiosity and forward-looking optimism throughout the conversation.
Speakers
– Michael Hengartner: Moderator
– Ramin Hasani: Co-founder and CEO of Liquid AI, MIT startup
– Maria Leptin: President of the European Research Council, developmental biologist
– Max Tegmark: Professor at MIT, researcher on cosmology and AI, author
– Richard Hatchett: Chief Executive Officer of the Coalition for Epidemic Preparedness Innovations (CEPI)
Additional speakers:
– None identified
Full session report
The Purpose and Future of Science in the Age of AI
This panel discussion at Davos brought together experts in biology, epidemiology, AI, and physics to explore the purpose of science and the impact of artificial intelligence on scientific research and society. The conversation, moderated by Michael Hengartner, featured insights from Ramin Hasani, Maria Leptin, Max Tegmark, and Richard Hatchett.
Fundamental Purpose of Science
The panelists reached a consensus on the fundamental purpose of science. Maria Leptin eloquently articulated that science exists “to fulfill human curiosity, to satisfy human curiosity, and to let us understand how the world around us works, whether it is the natural world, the physical world, the world of life, or the world of the mind, of the humanities and the arts.” This definition set the tone for the entire discussion, providing a framework for considering the role of AI in scientific research.
Max Tegmark echoed this sentiment, emphasizing that science empowers humanity and expands our understanding of the universe. He offered a concise “4.5-second summary” of what it means to be a scientist: “We’d rather have questions we can’t answer than answers we can’t question.” This encapsulates the spirit of scientific inquiry and the ongoing pursuit of knowledge.
Richard Hatchett, CEO of the Coalition for Epidemic Preparedness Innovations (CEPI), added a practical dimension, highlighting how scientific discoveries lead to technological advances that benefit humanity. He provided a poignant example from the COVID-19 pandemic, demonstrating the direct link between basic scientific research and real-world applications that save lives.
AI’s Role in Accelerating Scientific Discovery
The panelists agreed that AI has the potential to significantly accelerate scientific discovery across various fields. Richard Hatchett pointed out AI’s capabilities in areas such as protein folding and drug development, citing the example of an AI-designed COVID vaccine developed by SK Bioscience in collaboration with the Institute for Protein Design. Max Tegmark highlighted AI’s revolutionary potential in proving mathematical theorems and verifying code.
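The jump in theorem-proving performance rests on the fact that formal proofs are machine-checkable artifacts. As a toy illustration of our own (nothing like this was shown in the session), in a proof assistant such as Lean 4 a theorem is accepted only once the kernel mechanically verifies the proof, so a proof found by an AI can be trusted without trusting the AI that produced it:

```lean
-- Toy Lean 4 example (editorial illustration, not from the session).
-- Finding a proof is the hard, creative step; once it exists, the
-- Lean kernel re-checks it mechanically.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b

-- Simple linear-arithmetic goals can be discharged by the built-in
-- `omega` decision procedure; the kernel still certifies the result.
theorem double_eq (n : Nat) : n + n = 2 * n := by
  omega
```

This cheap, automatic checking is the property Tegmark returns to later in the session when he discusses verification of AI-generated proofs and code.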
Ramin Hasani provided insights into his work on Liquid AI and a worm-inspired AI system. He explained how these innovations aim to create more interpretable and efficient AI models, potentially leading to breakthroughs in solving complex mathematical and physical problems.
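For readers curious about the mechanics behind this, the sketch below illustrates the published liquid time-constant formulation that liquid neural networks build on: each neuron’s effective time constant depends on its input, which is what makes the model “liquid.” The layer sizes, nonlinearity, and integration step here are illustrative assumptions, not details given in the session.

```python
import numpy as np

# Minimal sketch of a liquid time-constant (LTC) cell, the continuous-time
# model behind liquid neural networks. Dimensions, the tanh nonlinearity,
# and the step size are illustrative assumptions.

rng = np.random.default_rng(0)
n_in, n_hidden = 4, 8                                # illustrative sizes
W_in = 0.5 * rng.normal(size=(n_hidden, n_in))       # input weights
W_rec = 0.3 * rng.normal(size=(n_hidden, n_hidden))  # recurrent weights
b = np.zeros(n_hidden)                               # bias
tau = np.ones(n_hidden)                              # base time constants
A = np.ones(n_hidden)                                # bias targets

def ltc_step(x, u, dt=0.1):
    """One fused semi-implicit Euler step of the LTC dynamics
        dx/dt = -(1/tau + f(x, u)) * x + f(x, u) * A,
    where f is a learned nonlinearity. The effective time constant
    varies with the input -- the 'liquid' part of the model."""
    f = np.tanh(W_rec @ x + W_in @ u + b)  # input-dependent gating term
    return (x + dt * f * A) / (1.0 + dt * (1.0 / tau + f))

# Drive the cell with a short random input sequence.
x = np.zeros(n_hidden)
for _ in range(20):
    x = ltc_step(x, rng.normal(size=n_in))
print(x.round(3))
```

The input-dependent time constant lets the state adapt its own speed to incoming data, which is the adaptability Hasani refers to in the transcript when he says the models are “liquid for adaptability.”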
Balancing Public and Private Research
The conversation addressed the balance between academic and corporate research in AI development. Maria Leptin suggested that competition between public and private research can be beneficial, while also discussing the European Research Council’s approach to funding research projects. Richard Hatchett noted successful collaborations between universities and companies, highlighting the potential for synergy between academic institutions and industry in driving innovation and efficiency in scientific research.
The panel also explored the advantages and disadvantages of fundamental research being conducted in company settings. They acknowledged the resources and efficiency that private companies can bring to research but also emphasized the importance of maintaining academic freedom and open access to scientific knowledge.
Future Impact of AI on Science and Society
A significant portion of the discussion focused on whether AI might replace scientists or make their jobs obsolete. The panelists generally agreed that AI would augment human capabilities in science rather than replacing scientists entirely. Max Tegmark emphasized that AI tools should be controllable and serve human needs rather than replace humans.
Ramin Hasani expressed optimism about the gradual and manageable transition to AI-augmented science, stating, “I feel like along the way, we’re going to figure out how to get there. I think this change, this process has always been like that with any technology. I feel like we’re going to be able to get to that point where we will be comfortable living with superintelligence also amongst us.”
Richard Hatchett highlighted AI’s potential to help solve major global challenges like pandemics more quickly. Maria Leptin expressed confidence that human emotional needs and curiosity will ensure scientists remain relevant in the future.
Safety and Ethical Considerations
The panelists stressed the need for safety standards and ethical considerations in AI development. Max Tegmark proposed treating AI “like any other industry and have basic safety standards,” drawing parallels to regulations in other sectors. This suggestion offered a practical approach to addressing concerns about AI safety and control, prompting consideration of how to implement effective AI governance.
Looking to the Future
As the discussion concluded, the panelists shared their views on the future of science and AI in the next decade. They expressed hope that AI tools would be highly advanced but still under human control, enabling faster progress in addressing global challenges while maintaining human agency in scientific pursuits.
The conversation highlighted the potential of AI to solve complex problems in areas like energy, climate change, and medicine. However, it also emphasized the importance of developing AI as a tool that humans can control, rather than as a replacement for human intelligence. The panelists’ shared optimism about the continued relevance and importance of human scientists in the future was particularly noteworthy, given the rapid advancements in AI technology.
Conclusion
The panel discussion painted an optimistic picture of AI’s role in advancing science while acknowledging the need for careful management and ethical considerations. It emphasized that while AI presents tremendous opportunities for scientific advancement, its integration into research and society must be approached thoughtfully, with a focus on maintaining human control, fostering collaboration between public and private sectors, and ensuring that the fundamental purpose of science – to satisfy human curiosity and understand our world – remains at the forefront of technological progress.
Session Transcript
Michael Hengartner: Good morning, dear ladies and gentlemen, dear online participants. Welcome to day four of Davos 2025, the World Economic Forum. This morning’s session will focus on a question that some of you might almost find trivial, others perhaps find fundamental to who we are as humans. The question is, what is the purpose of science? We live in an era of fantastic technological developments. Developments which can dramatically improve the way we live, the way we interact with each other, the way we interact with our environment. Now, most of these innovations are usually based on some solid understanding of science, be it the physical laws of nature that regulate how things work around us, or an understanding of the biological processes in our bodies, or the ways chemicals and materials react and interact with each other. These are the types of things that scientists try to understand. Now, we won’t be able to answer a broad question like, what is the purpose of science, in 45 minutes, obviously. And so we instead focus on a very specific question, a question that pretty much every other industry is also asking itself these days, namely, how will AI influence the future of science? And the future of the scientists performing the scientific inquiry? To explore this issue, I have with me four illustrious experts. To my left, we have Maria Leptin. She’s the president of the European Research Council, a world-famous developmental biologist, and has long experience in supporting fundamental research in a broad range of areas, from the natural sciences to the social sciences to the humanities. Next to her sits Richard Hatchett. He’s the chief executive officer of the Coalition for Epidemic Preparedness Innovations, CEPI. We’ll hear more about CEPI in a few minutes. Third, we have Ramin Hasani. He’s the co-founder and the chief executive officer of Liquid AI. Liquid AI is an MIT startup that uses new types of algorithms to generate highly efficient foundational models for AI. And finally, last but not least, Max Tegmark. He’s a professor at MIT doing research on cosmology and AI. He is also an acclaimed author of best-selling books such as Life 3.0. Let me start with a question to each one of my panelists, and then we’ll jump into a lively discussion. Maria, what AI can do today is impressive. We all agree. But obviously, it’s been a long time coming. Scientists have been working on AI research for decades before we had the breakthroughs that we see today. And it really took perseverance and belief that this would lead somewhere to be able to come to where we are today. I would like to know from you, being president of the ERC, how does the ERC choose which research projects it funds? And how do you manage to convince governments to invest billions every year into projects without knowing whether there will ever be a useful application of these research projects in the short, in the medium, or even in the long term?
Maria Leptin: Thank you, Mike. First of all, I want to comment on a remark you just made for which you didn’t prepare me, but which is very relevant to what I will say. And that is that you said we cannot answer what is the purpose of science in 45 minutes. I disagree profoundly. I think we can answer that question in 45 seconds.
Michael Hengartner: Go ahead.
Maria Leptin: The purpose of science, and I mean here science in the Germanic sense of Wissenschaft, of knowledge generation. The purpose of science is to fulfill human curiosity, to satisfy human curiosity, and to let us understand how the world around us works, whether it is the natural world, the physical world, the world of life, or the world of the mind, of the humanities and the arts. So that is the purpose of science. Of course, science has many other uses, and that’s perhaps what you meant, but I think we have to keep in mind that that’s what the purpose of science is. That’s where it comes from. So now, and this matters for the ERC, because the European Research Council funds research across the entire breadth of academic scholarship. And how do we choose? First of all, nothing is chosen top-down. Everything is chosen bottom-up, because what the ERC tries to do is fund research at the very frontier of knowledge, and there’s only one group of people who know where the frontier of knowledge is, and that’s the scientists themselves. So we trust the scientists to come up with the ideas that will break through that frontier of knowledge, and that’s what we look for. And the way we do that is to use panels of experts from throughout the world who are told only look for the most exciting project. What was the second part of the question that was directed at me?
Michael Hengartner: How come governments trust you, or the scientists, to actually do well with these billions? I know that not all government officials, elected officials, think that way.
Maria Leptin: No, exactly.
Michael Hengartner: We heard in the news this morning that a certain new president on the other side of the Atlantic has issued orders that might seriously impair the ability of NIH to function.
Maria Leptin: Yeah, so he’s just… Cancelled all trips, cancelled all study sections, it’s really terrible news. Yes, so we do need to convince, we have to understand that tax dollars or euros, that the population is entitled to want their tax to be put to useful purposes. But of course research has useful purposes, and we see that everywhere. For instance, AI, which we’ll be talking about a lot, was not discovered when ChatGPT came out a couple of years ago. The ERC has been funding research on AI ever since its inception 17 years ago, and before that other public funders had been funding these weird ivory-tower mathematicians, informatics profs, physicists, etc. to do research on this stuff that was really way out there and that nobody understood. So that is the lesson. The scientists on the ground know what the big questions are; their curiosity drives them to dedicate their lives, their hours, their time, to figuring out new knowledge that may or may not be useful later on. And so that’s what we have to tell the politicians, and we do, and they understand it. It was very much, very well understood during COVID, when Operation Warp Speed didn’t lead to the COVID vaccine, but research that had been funded previously for other reasons did, and the researchers themselves, they said, I want to dedicate my research to finding a vaccine, and they did within nine months. So I think we have enough examples for that now. And what’s clear is that we don’t know what’s going to hit us next year, in five years’ time, in ten years’ time. And in fact, a very good AI researcher said, if Europe is behind now, let’s not focus on catching up. Let’s focus on discovering now what will allow us to be ahead again in five or ten years’ time.
Michael Hengartner: Thank you very much Maria. You mentioned COVID, I think that’s a perfect segue into Richard. Richard runs CEPI. One couldn’t claim CEPI is a child of Davos. Richard, why don’t you please tell us how CEPI was founded, what it aims to do, and how it benefits from foundational, fundamental research, and how do you put that research to good use for direct purposes?
Richard Hatchett: Sure. No, and Maria, thank you for the setup. That was a great segue. CEPI, the Coalition for Epidemic Preparedness Innovations, is an organization that was actually established at Davos in 2017. Its mission effectively is to develop vaccines against epidemic and pandemic diseases through public-private partnerships. And importantly, and this is coupling a moral mission with the scientific mission, to ensure access to those vaccines for the populations that need them. A great way to show the direct through line from basic science to application to distribution of that for the benefit of humanity, I think, is the development of the COVID vaccines. The fundamental insight that enabled the rapid development of a number of successful COVID vaccines was the understanding of a fusion protein on the surface of the respiratory syncytial virus, and how that protein binds with human cells as the virus enters the cells to cause disease. And that was elaborated in 2013, actually at NIH, in an intramural laboratory at NIH. And it was realized that the understanding of how to stabilize that protein on the virus so that the immune system could see it and respond to it, that actually became the basis of the currently licensed RSV vaccines. But it also unlocked an understanding of many different families of virus and offered essentially a blueprint to developing effective vaccines against those viruses. The Vaccine Research Center at NIH was working with Moderna on a MERS vaccine. MERS is another coronavirus, I think all of you will remember it, closely related to the SARS-CoV-2 virus, and they had essentially solved the problem of how to adapt the understanding from the RSV fusion protein to the MERS spike protein. And when the sequences for SARS-CoV-2 were released, which I think was on January 10th or 11th, 2020, as I’m remembering, Moderna, working with the NIH scientists, was able to adapt the new sequence information to what they knew about the MERS spike protein, and effectively they designed the first-generation Moderna vaccine in about 36 hours. And so when we were at Davos in 2020, we actually announced our first contracts to support vaccine development. This was only 12 days after the release of the sequences of the SARS-CoV-2 virus. And Moderna was one of our partners, and what we were funding with Moderna 12 days after the sequences were released was the production of clinical trial material to move that vaccine rapidly into clinical development. And so that discovery about the stabilization of the protein back in 2013 directly translated into the rapid response. And let me maybe just extend the story just a little bit. What was critical for the rapid development of multiple vaccines was that the U.S. government managed the intellectual property around that fundamental design as a global public good. They non-exclusively licensed it to Moderna, and then they non-exclusively licensed it to others who were then able to develop successful vaccines. So it’s a great story of the immediate translation of direct insights deriving from basic science into products and applications that have served humanity and helped us speed the ending of the acute period of the pandemic.
Michael Hengartner: Thank you very much, Richard, for this fantastic story. Again, a fantastic segue into Ramin. Ramin, you’re also involved in the translation of new insights into products and services for humanity. You’re the co-founder and the CEO of the MIT startup Liquid AI. Perhaps you can tell us what Liquid AI does and why you think that your company has any chance to succeed against competitors who invest billions every year into large language models.
Ramin Hasani: Yeah, great. So we are a foundation model company. We’re building on a completely different substrate for AI. So we thought, let’s get back to the scientific roots of what we’re trying to solve, and see where the challenges of these foundation models are, like systems that are very general and are not specialized for a certain task, but can do general kinds of functionalities. You’ve seen examples of that, like the ChatGPTs of the world and stuff. So when I was doing my PhD, I started studying a little worm. Okay, this worm is called C. elegans. This worm is one of the most popular biological organisms in the world. And I started studying the nervous system of the worm to build mathematical models of the nervous system, to see if we could get to a new type of AI system, a brain-inspired AI system. There are a lot of reasons why we wanted to do that: more control over the math, over the type of AI system that we are building, and we have a lot of understanding. The worm itself gave us a lot of biological insights into how its brain functions, and it has won four Nobel Prizes for us so far, you know, this worm. And its body is transparent. It has 302 neurons in its nervous system, so it’s a very simple system that we can understand. The reason why this worm is important is because it shares 75% similarity with the human genome. And I thought, okay, if you discover foundational properties of nervous systems and how neurons exchange information with each other there, then we could evolve these systems into much more sophisticated systems. So the journey started there. We built a worm-inspired AI, you know, and we made them drive cars, fly drones, and, you know, interact in the environment. And then we decided to actually take the journey one step ahead and build Liquid AI. I called the mathematical model that came out of this thing liquid neural networks. They are liquid for adaptability: some of the mathematical operations that we learned from there describe how the system can stay adaptable. And the type of math that enables us to build these types of intelligent systems, first of all, allows us to build a very powerful system with a fraction of the cost and energy of today’s AI systems. We managed to do a couple of breakthroughs with my colleagues, you know, from MIT and Stanford and Vienna University of Technology, actually. And today we are building these very sustainable versions of ChatGPT that you can use on a device, because they use a fraction of the cost that those systems need to be hosted. Like, let me just show you one of these things.
Michael Hengartner: No sales pitch on the panel.
Ramin Hasani: No, no, it’s not going to be sales. So this is basically one of the products of the company. So it’s basically a two-gigabyte interface, so you can connect this. There’s a three-billion-parameter neural network on top of it. You connect it to a laptop and you can immediately start using intelligence. So it can give you generative AI capabilities. The reason I believe we can stay relevant is because we are taking an orthogonal approach: these types of systems can now be deployed not on the cloud, and they don’t need data centers to be hosted. They can be used with the mediums that we already have in place. So that’s our go-to-market strategy as well. So I feel like we have a chance.
Michael Hengartner: Thank you very much, Ramin. As a worm biologist myself, I find this very inspiring. All right. We move on to our last speaker, Max. You’re not only a world-acclaimed cosmologist, you also have been working on AI, and you thought a lot about AI and how it might influence humanity in the future. Tell us a little bit about your book, Life 3.0, and what our future might look like in a world that has pervasive and powerful AI.
Max Tegmark: Yeah. So first of all, I got so excited by your words about science there that I couldn’t help myself applauding, even though we weren’t supposed to. The 45-second summary. I completely agree that science is all about curiosity. That’s what drives it. My 4.5-second summary of what it means to be a scientist would be that we’d rather have questions we can’t answer than answers we can’t question. I think that the human journey in science has been the most empowering, inspiring story I know. We gradually, by giving in to our curiosity, realized that we were the masters of underestimation. We’ve been on this planet for hundreds of thousands of years. For most of that time, you know, there were very intelligent cavemen and cavewomen who looked up at the skies at night and came up with all sorts of myths about what those shiny white little dots were, but the more intellectual ones of them probably felt a bit of melancholy that they were never really going to figure out what this was, and things would never really change. You always had to worry about starving to death that winter if you didn’t luck out in your hunt, and so on. And then it turned out that we had underestimated not only the size of the physical world, realizing that everything we thought existed was just a small part of a much grander structure, a planet, a solar system, a galaxy, a galaxy cluster, this amazing universe we’re in, but we had also, even more importantly, underestimated our own ability to understand it. Now we have, with our curiosity, figured out not only what those little shiny dots in the sky are, but also these beautiful connections, such that we are actually made of stardust. The oxygen in your body was made in the core of a hot star, and for those of you wearing jewelry, the gold was made in a supernova explosion. You know, these are beautiful, beautiful things that we’ve started to understand. And then the real shocker was that it wasn’t just empowering in that we could understand more, but this understanding transformed into technology that enabled us to actually become more like the captains of our own ship. Our life expectancy rose from 29 up to what it is now, and I find this incredibly inspiring. Where does that go in the future? Our curiosity has no limits. We started by figuring out a lot of these things about the outside world, and then gradually started wondering, okay, well, what are we?
Max Tegmark: How do we work? We first figured out how muscles work and promptly built things which were much stronger and much faster, and we got the Industrial Revolution. That caused some job displacement and social turmoil, but ended up, I think, in a much better place, because we shifted from working with our muscles to working with our brains. But the curiosity, of course, didn’t stop there. And then we started wondering, what is this intelligence thing? How does that work? And the single most powerful idea that gave us the AI revolution, of course, was the insight that intelligence is all about information processing. And our brains are biological computers, but it doesn’t matter whether the intelligence is processed by carbon atoms in neurons in brains or by silicon atoms in tech we build. Even though our brains are limited by the size of our mother’s birth canal, there are no such limits in stuff we build out there in the labs, and we can easily build machines that are vastly smarter than us. We haven’t yet succeeded in making machines which can do everything better than us. That’s what’s typically called AGI, let alone superintelligence, which might be as much smarter than us as we are than C. elegans. But, you know, Sam Altman from OpenAI thinks it’s gonna happen this year. Even Yann LeCun, who used to think it was decades away, is now saying it’s not centuries or decades, it’s years. So I think we’re probably gonna be able to build AGI during the Trump presidency at some point, if we choose to. And this raises this very important question that you asked me: where do we go from here? And do we just wanna repeat what we did in the Industrial Revolution, and every time we figure out how something works, we replace it? Do we really just wanna build human replacements so that we are no longer needed for anything, so that we no longer are in charge of… I think that would be very uninspiring. We are an ambitious species. I think when I go talk to people here at Davos, it’s so obvious that what pretty much everybody really wants is AI tools that we can control, tools that help us accomplish more of what we want, prevent the pandemics, figure out all these other wonderful things. So I think we’re very rapidly heading towards a fork in the road. It’s a super important challenge, because as Alan Turing said even in 1951, the default outcome if we just build a new AI species, which can do everything better than us, is that the smarter species tends to control the weaker ones. That’s why the tigers are in cages in the zoos, and we are outside. As for this new AI species, I know very few people who are actually excited about building a replacement species. Tools are what we want. A lot of people lose sleep over this and say, oh my gosh, how are we going to figure out how to control this stuff before it inevitably comes? I’m an optimist. I think there’s a very easy fix for this. Very, very easy fix. If we simply treat AI like any other industry and have basic safety standards, the problem solves itself. And I’ll finish by just very briefly saying how that works. Suppose Maria walks into the FDA or the EMA and says, hey, I have this new super cool cancer drug. It’s going to cure everything. It’s inevitable that I’m going to release it next year. I hope you guys figure out how to make it safe in the meantime. Now, you would be laughed out of the office. They’d be like, Maria, where’s your clinical trial? Oh, you don’t have one yet? Okay, come back when you do.
Next customer, please. Bing! If we make it like that for AI as well, then what will automatically happen is that entry number one on the safety standard list will be that the company has to demonstrate that this can be controlled. Which means it’s not the new species, it means it’s a useful tool. My definition of a tool is something you can control. You probably like to drive a powerful car; you don’t like to drive an uncontrollable car. So then, just as happens in biotech, there’ll be massive incentives for companies to innovate. Top biotech companies put a lot of their smartest people and a lot of their money into exactly meeting the safety standards of society. There’s a race to the top; we’ll see a race to the top in the AI industry, where we’ll get a golden age of AI tools across the board. And then, just to end on an optimistic note, a fantastic example of an AI tool is what got the Nobel Prize now for protein folding. You can talk to Demis Hassabis yourself, because he’s here this week. This is my inspiring future, where we use AI basically to help us solve all the problems that have stumped us so far, and create a future that is our future still, where we’re still in charge.
Michael Hengartner: Thank you very much, Max. For the audience: scientists are by nature optimists. If we weren’t optimists, we’d be out of the job of being a scientist very, very rapidly, because most of our experiments don’t work, and so you need to be a hopeless optimist to try and try again. All right, let’s dive deeper now into AI and scientific research. I have a few questions. I was going to encourage you to interject and jump in, but you’ve proven you can do that very well already. So let’s go. Maria, one of the interesting things about AI research, I find, is how much of it is done in company settings. Historically, at least in the 20th century, much research was funded by government and done in academic institutions. But Max already mentioned that the Nobel chemistry prize last year was given in part to scientists working at DeepMind, a company owned by Google. What are, in your opinion, the advantages and perhaps the disadvantages of fundamental research that has a high impact on society being done within the context of companies?
Maria Leptin: Well, I mean, the advantage is, of course, that the taxpayer doesn’t directly have to pay for it, but that the companies do it. And there is competition. Companies are actually far more efficient at knowing where to put their money. So I’m not so much worried about that. And like I said earlier, I believe, or maybe I said it on the train, I can’t remember what conversation one has where and when, and that was here: the competition is fine. And other countries are now ahead of Europe. So why not let the big companies do what they do with the money they have and the infrastructure they have, and let the scientists figure out the next big question. You know, scientists work at the forefront of research. I’ve said that again and again. That’s where we like to be. We like to use the tools. The tools are affordable. And play around with what’s there, and let industry do the things that industry can do with their money. So I’m not actually concerned with it.
Michael Hengartner: Thank you.
Richard Hatchett: Michael, could I just come in directly on that?
Michael Hengartner: CEPI, you’re highly dependent, in fact, on companies to be able to do what you want to be doing.
Richard Hatchett: Well, we are. And I wanted to, Max, you mentioned the DeepMind half of the Nobel Chemistry Prize. I’d like to mention the other half of the Nobel Chemistry Prize, which actually came out of an academic center, the Institute for Protein Design. The Nobel laureate was David Baker, who established the Institute for Protein Design. The example that I’m going to talk about illustrates both the promise and the potential for academic partners to work very productively with industry partners. The tool that David Baker won the prize for was a tool that allowed molecular design, basically using AI for molecular design to create proteins of certain shapes and structures, which is closely related to the AlphaFold work that DeepMind won for, which allowed you to predict protein structure. But during the pandemic we had been working with the Institute for Protein Design. We were aware of their RFdiffusion tool, and the Institute for Protein Design is very committed to open access and to sharing those tools and to democratizing access to the tools, which is terrific because that’s going to speed up innovation. But we were able to work with them and a company in Korea, actually, SK Bioscience, to use the Institute for Protein Design tools to design a different approach to a COVID vaccine. And then SK took the information from IPD and basically developed what I’m told by the folks at IPD was effectively the first AI-designed medical product to receive approval for any indication ever, and it was a COVID vaccine. And so what you had in that example was a company bringing the kinds of capabilities that reside in companies, the operational, logistical, financial capabilities to do something, and working with a partner that was committed to equitable access and to using the fundamental insights that AI provided to develop something to address an immediate problem, the pandemic. So it’s a great story, a great example, and David maybe gets a little bit less press. He’s maybe not quite as charismatic as Demis, but he’s done just fantastic work and he’s leading a great organization that understands how basic science can serve the world.
Michael Hengartner: Thank you, Richard. I think that’s a nice example of how AI can be used as a tool. to accelerate science. Any other areas where you think that AI will dramatically accelerate? Max mentioned all these things we couldn’t answer yet. Where do you see the big questions that we’ll be able to answer thanks to AI? Perhaps starting with Ramin and then going on to Max.
Ramin Hasani: Yeah, definitely. So as AI systems are becoming more general in purpose and more sophisticated, they can solve more complex problems for us. My aspiration for building foundation models has always been for them to serve science, for us to understand, to address our curiosities. This has been my motto in life, you know, why we want to do this. And I feel like, as Demis actually mentioned yesterday, I’m a cautious optimist, you know, about how things are going forward. But what I want to say is that these systems are becoming really good. And I’m extremely excited to solve the most complex mathematical and physical problems, because if you solve that, we can solve the energy problem. We can solve the societal problems that we have, and drive a lot of our advances in biotech, like protein folding and other areas. So I think AI definitely has that value we’ve always wished for, to arrive at a point where we can actually do this in specialized fields. But a lot of collaboration, a lot of interdisciplinary things are also getting enabled by the fact that now we have access to general intelligence. Education is another area that is going to be massively changed and accelerated. I think these are basically some of the areas that I feel.
Max Tegmark: Great, so we’ve heard wonderful things about what AI can do in the life sciences and also in education. So I would be remiss if I didn’t put in the plug for what AI can do in the physical sciences and engineering and math. This is something we work a lot on in my MIT AI research group; so much exciting stuff is happening in chemistry with AI, and in materials science. If one day AI can discover a room-temperature superconductor, it’ll be incredible for climate and energy. If AI can discover way better batteries, it’ll totally turbocharge what you can do with solar and wind, which are free but variable in time. And we’re seeing a lot of things happening. We had a paper out recently where we used AI to study some systems and see if it could discover patterns, and then convert them into actual physics equations. And we discovered a new conservation law in ozone chemistry. We published this, and we’re like, we have no idea what this is. And then some chemists picked this up and came up with a thorough explanation for what was going on and published it in a chemistry journal. It’s just so much fun, this. And in math, there are about 100,000 theorems that people have put in big databases, and they have annual competitions to see if computers can prove some of them. And that’s gone from AI being able to prove about 2% of them to over 70% of them in just a few years. So just as AI has revolutionized the ability to output text and to produce images and videos, and now increasingly computer code, I am very confident that we’re gonna see AI revolutionizing the ability to prove stuff. And that is incredibly useful if you actually want AI tools that you can control, or in general, AI that just does what you want it to do. It’s a huge headache that even traditional computer software has bugs in it, right? Who remembers last summer when this CrowdStrike update bricked thousands of Microsoft computers and canceled thousands of flights? Yeah. Your flight got canceled? Stuff like that. I think it’s quite possible we can completely eliminate things like that in just a few years. Like Amazon Web Services just completed a massive project where they didn’t prove stuff about math, but they proved stuff about code, specifically about the whole login system to Amazon Web Services. Now they can brag about having the most secure cloud in the world. They’ll never have to do a security update again, because all the bugs that were there have been found. This is also super relevant to the long-term future of humanity with AI, because I outlined my very optimistic vision there of us building AI tools. But sadly, that’s not the path we’re on. There are no safety standards at all right now for AI. If you have a sandwich shop in San Francisco, before you can sell your first sandwich, you have to get the health inspector in there to check that you don’t have too many rats in the kitchen and stuff. But if you have an AI company in San Francisco, and you’ve just developed a new species, smarter-than-human robots or smarter-than-human superintelligence, you can just release it to the world completely legally. There’s like nothing. Part of the reason for that is that we today don’t know how to prove that something is controllable. So here’s the optimism. If AI is so good at proving stuff, we can completely turbocharge the ability to prove things about code.
What we can do now is, if someone develops a powerful neural network, some sort of tool that does something you really want, instead of deploying that neural network, which we really don’t understand how it works, you say to the neural network: okay, if you’re so smart, this algorithm you just figured out, you just machine-learned the knowledge, just code it up in Python or C++ for us, please. You can do that with a human: when you figure out how something works, you can write a program that does it. Then you say, hey, AI, also please come up with a proof that this meets my specs, that I can control you, that you’re not going to be hacked, et cetera. You might say, wait, how can I possibly trust this tool I got now? Because I don’t trust the AI that built it, I don’t understand how it works, and the proof is way too long for me to read. I just want to leave you with a very optimistic fact, actually: it’s much harder to find a needle in a haystack than it is to prove that it is a needle after you’ve found it. It’s much harder to figure out how to make an AI tool and the proof that it meets your spec than it is to verify that the proof is correct once you have it. The verification is so automatic that you can do it with 300 lines of Python code. If some mathematician you don’t trust as a human being says they proved the Riemann hypothesis, you just show it to some other mathematicians and they can easily check it. So what I’m saying, in summary, is that progress in AI for proving things about programs, about algorithms, can actually give us the ability to make super-powerful AI that we can completely trust and control. That makes me excited.
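A concrete way to see the asymmetry Tegmark describes, sketched here as an editorial illustration rather than anything presented in the session: finding the factors of a number takes a search, but verifying a claimed factorization is a single multiplication, just as checking a finished proof certificate is cheap compared with finding it.

```python
# Editorial illustration of the find-vs-verify asymmetry: the search is
# expensive, but checking the answer ('proving it is a needle') is trivial.

def find_factor(n: int) -> int:
    """Search step: naive trial division, cost grows with sqrt(n)."""
    i = 2
    while i * i <= n:
        if n % i == 0:
            return i
        i += 1
    return n  # n is prime

def verify_factorization(n: int, p: int, q: int) -> bool:
    """Verification step: one multiplication and two sanity checks."""
    return p > 1 and q > 1 and p * q == n

n = 1_000_003 * 1_000_033             # a semiprime with two large factors
p = find_factor(n)                    # slow: the needle in the haystack
q = n // p
assert verify_factorization(n, p, q)  # fast: proving it is a needle
print(p, q)
```

The same shape holds for machine-checked proofs about programs: a small, trusted checker can certify a proof that was arbitrarily hard to find.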
Michael Hengartner: Thank you, Max. I’m looking at the time. I still have two questions. So short answers, and Max is not allowed to answer the next one. Max already mentioned the last industrial revolutions: we always got disruption, job displacement. With AI, the disruption has started in the creative industries; in Hollywood, people are really worried about their jobs. Is AI going to take away my job as a writer, as an artist, and so forth? Should scientists perhaps also be worried? We are able to make AIs now that generate hypotheses. We have AIs that can analyze data, can summarize the data. What’s left for us to do? I mean, are we going to end up simply being technicians that do the experiments that AI told us to do, and that AI will afterwards analyze and publish for us? Or should we be more optimistic? I know that Max is optimistic. Maria, are you optimistic?
Maria Leptin: As you said, we’re all optimists as scientists, otherwise we wouldn’t be doing it. I’m actually, oh, thank you. I’m also optimistic, as I said. As scientists, we have to be. And I’m even more optimistic now. So I don’t think I’m going to answer that question. I’m totally reassured that in 10 years’ time, we will be doing our science as we are now, only better and with even more fun.
Michael Hengartner: Richard, what do you think?
Richard Hatchett: Fundamentally, I’m optimistic. I mean, our ability to answer questions and to follow our curiosity to solutions that can serve us is, I mean, we’ve been doing this for hundreds of years. We’re pretty good at it. But I also think that we have to be careful. And we have to address these questions around control early, so that we don’t find ourselves in a position where something’s out of control. It’s a very different problem, but I do take a little bit of comfort from the management of nuclear technology. I mean, immediately after World War II, you had optimists who were predicting futures of limitless energy, free energy, a paradisiacal society based on nuclear energy. And you had pessimists who were predicting that we were all heading straight to nuclear annihilation. And we found the control mechanisms. We didn’t reach the paradise, but we also didn’t go to hell. So I think that we will end up somewhere in the middle, but moving much faster than we’ve ever moved before.
Michael Hengartner: Once again, humanity will muddle through. Yeah. Yeah, Ramin?
Ramin Hasani: And I think the one thing that I would say is that I feel like our mind’s experience of reality is very continuous. Change is a continuous kind of process for us. You know, we are experiencing time in a continuous manner. So when you think about it, you become more uncertain about the future when you see discrete events. So for example, if you all of a sudden say, we are going to have general-purpose AI, or AGI, the commercial term, and all of us are going to lose our jobs. When you think about it like that, as an abrupt change, that increases uncertainty and worry for you. But if you actually think about how we are going to get there, it’s going to be a process for us to get there. I feel like along the way, we’re going to figure out how to get there. I think this change, this process has always been like that with any technology. I feel like we’re going to be able to get to that point where we will be comfortable living with superintelligence also amongst us. The spirit of Davos, together we can make it happen.
Michael Hengartner: Fantastic. We have three minutes left. Last question, very brief answer. Max can answer first this time. So Niels Bohr apparently said, it’s difficult to make predictions, particularly about the future. Let’s try it anyway. It’s Davos 2035, and the WEF is happening again. What’s the role of science? What’s the role of scientists at the WEF in 10 years? Are we still here? Are we replaced by clever machines? Or are we perhaps even more important, because politicians realized how important sound fact-based policymaking is? Max.
Max Tegmark: We’re still here and we’re excited. We have not built AGI or superintelligence, even though we could have, because we remembered the American saying, curiosity killed the cat, and we decided not to go that route. Instead, we figured out how to build AI tools that are just as smart as superintelligence would have been, but more limited, in the sense that we can control them. So we feel that this is our future. We are still the captains of our own ship.
Michael Hengartner: Thank you.
Ramin Hasani: We’re going to be there for sure, but probably we are augmented as well a little bit.
Michael Hengartner: Interesting. Richard.
Richard Hatchett: Based on what I’ve seen over the last couple of years, I think there is a possibility that I won’t be here in 2035, because we will have completed the mission that my organization was set up to accomplish, which was to protect the world from pandemics and epidemics. I would have predicted that we’d put ourselves out of business in 20 years. AI may help us speed that up.
Michael Hengartner: Maria.
Maria Leptin: I don’t think we’ll put ourselves out of business. Our own emotional needs won’t allow that. We want to be what we are. That hasn’t changed throughout evolution. That’s what got us here and that’s what will keep us here. We’ll use everything we have to help us, and I think we’ll be enjoying our science as much then as we are now. Sadly, not as personally.
Michael Hengartner: All right, so we not only answered what AI is gonna do for science and for society in the coming years, we even answered what is the purpose of science. It’s great. It’s to figure out stuff. And with this, I close with my last request to the WEF: next time, please give me a professional pessimist, so that we have a little bit more balance. Thank you very much for having been with us. I wish you a great closing. That was so fun. Thank you.
Maria Leptin
Speech speed
148 words per minute
Speech length
949 words
Speech time
384 seconds
Science fulfills human curiosity and helps us understand the world
Explanation
Maria Leptin argues that the primary purpose of science is to satisfy human curiosity and gain understanding of the world around us. This includes the natural world, physical world, life, and the humanities and arts.
Evidence
She provides a 45-second definition of the purpose of science, emphasizing its role in fulfilling curiosity and understanding.
Major Discussion Point
The Purpose and Value of Science
Agreed with
– Max Tegmark
Agreed on
The purpose of science is to fulfill human curiosity and understand the world
Human emotional needs and curiosity will ensure scientists remain relevant
Explanation
Maria Leptin argues that human emotional needs and innate curiosity will keep scientists relevant in the future. She believes that these fundamental human traits have driven scientific progress throughout history and will continue to do so.
Major Discussion Point
The Future Impact of AI on Science and Society
Competition between public and private research can be beneficial
Explanation
Maria Leptin suggests that competition between public and private research sectors can be advantageous. She argues that companies are often more efficient at allocating resources and that this competition can drive innovation.
Evidence
She mentions that the ERC has been funding AI research for 17 years, long before recent breakthroughs, highlighting the role of public funding in long-term research.
Major Discussion Point
Balancing Public and Private Research
Richard Hatchett
Speech speed
147 words per minute
Speech length
1167 words
Speech time
474 seconds
Scientific discoveries lead to technological advances that benefit humanity
Explanation
Richard Hatchett illustrates how fundamental scientific insights can directly translate into practical applications that benefit humanity. He emphasizes the importance of basic research in enabling rapid responses to global challenges.
Evidence
He provides the example of COVID-19 vaccine development, which was based on fundamental research on protein stabilization from 2013.
Major Discussion Point
The Purpose and Value of Science
AI can accelerate scientific discovery in fields like protein folding and drug development
Explanation
Hatchett highlights the potential of AI to speed up scientific discoveries, particularly in life sciences. He suggests that AI tools can significantly enhance our ability to solve complex problems and develop new medical treatments.
Evidence
He mentions the Nobel Prize-winning work on protein folding by DeepMind and the Institute for Protein Design’s AI-designed COVID vaccine.
Major Discussion Point
The Role of AI in Scientific Research
Agreed with
– Max Tegmark
– Ramin Hasani
Agreed on
AI can accelerate scientific discovery and problem-solving
AI may help solve major global challenges like pandemics more quickly
Explanation
Hatchett expresses optimism that AI could accelerate the process of addressing global health challenges. He suggests that AI might help organizations like his complete their missions faster than previously anticipated.
Major Discussion Point
The Future Impact of AI on Science and Society
Collaboration between academic institutions and industry can lead to innovative solutions
Explanation
Richard Hatchett emphasizes the importance of collaboration between academic institutions and industry in driving innovation. He argues that combining the strengths of both sectors can lead to rapid development of solutions to global challenges.
Evidence
He provides the example of collaboration between the Institute for Protein Design, CEPI, and SK Bioscience in developing an AI-designed COVID vaccine.
Major Discussion Point
Balancing Public and Private Research
Max Tegmark
Speech speed
168 words per minute
Speech length
1901 words
Speech time
675 seconds
Science empowers humanity and expands our understanding of the universe
Explanation
Max Tegmark argues that science has been an empowering and inspiring journey for humanity. He emphasizes how scientific discoveries have expanded our understanding of the universe and our place in it, leading to technological advancements that have improved human life.
Evidence
He provides examples of how scientific understanding has grown from myths about stars to knowledge of the universe’s structure and our own origins as ‘stardust’.
Major Discussion Point
The Purpose and Value of Science
Agreed with
– Maria Leptin
Agreed on
The purpose of science is to fulfill human curiosity and understand the world
AI can revolutionize the ability to prove mathematical theorems and verify code
Explanation
Tegmark highlights the potential of AI to dramatically improve our ability to prove mathematical theorems and verify code. He suggests this could lead to more secure and reliable software and AI systems.
Evidence
He mentions AI’s progress in proving mathematical theorems, increasing from 2% to over 70% in a few years, and Amazon Web Services’ project to prove the security of their login system.
Major Discussion Point
The Role of AI in Scientific Research
Agreed with
– Richard Hatchett
– Ramin Hasani
Agreed on
AI can accelerate scientific discovery and problem-solving
Differed with
– Ramin Hasani
Differed on
The role of AI in scientific research
AI tools should be controllable and serve human needs rather than replace humans
Explanation
Tegmark advocates for developing AI as tools that humans can control, rather than as replacements for humans. He emphasizes the importance of safety standards and the need to prove that AI systems are controllable before deployment.
Evidence
He contrasts the lack of AI safety standards with strict regulations in other industries, such as food safety and drug development.
Major Discussion Point
The Future Impact of AI on Science and Society
Ramin Hasani
Speech speed
165 words per minute
Speech length
1577 words
Speech time
570 seconds
AI tools can help solve complex mathematical and physical problems
Explanation
Ramin Hasani argues that as AI systems become more sophisticated and general-purpose, they can tackle increasingly complex problems in mathematics and physics. He sees this as a way for AI to serve science and address human curiosity.
Evidence
He mentions the potential of AI to solve energy problems and societal issues through advances in mathematics and physics.
Major Discussion Point
The Role of AI in Scientific Research
Agreed with
– Richard Hatchett
– Max Tegmark
Agreed on
AI can accelerate scientific discovery and problem-solving
Differed with
– Max Tegmark
Differed on
The role of AI in scientific research
The transition to AI-augmented science will be gradual and manageable
Explanation
Hasani suggests that the integration of AI into scientific research and society will be a continuous process rather than an abrupt change. He believes this gradual transition will allow humans to adapt and become comfortable with increasingly intelligent AI systems.
Evidence
He draws a parallel with how humans have adapted to other technological changes in the past.
Major Discussion Point
The Future Impact of AI on Science and Society
Agreements
Agreement Points
The purpose of science is to fulfill human curiosity and understand the world
speakers
– Maria Leptin
– Max Tegmark
arguments
Science fulfills human curiosity and helps us understand the world
Science empowers humanity and expands our understanding of the universe
summary
Both speakers emphasize that science is driven by human curiosity and serves to expand our understanding of the world and universe around us.
AI can accelerate scientific discovery and problem-solving
speakers
– Richard Hatchett
– Max Tegmark
– Ramin Hasani
arguments
AI can accelerate scientific discovery in fields like protein folding and drug development
AI can revolutionize the ability to prove mathematical theorems and verify code
AI tools can help solve complex mathematical and physical problems
summary
The speakers agree that AI has the potential to significantly accelerate scientific discovery and problem-solving across various fields, from life sciences to mathematics and physics.
Similar Viewpoints
Both speakers see value in the interaction between public and private sectors in scientific research, emphasizing how this can drive innovation and efficiency.
speakers
– Richard Hatchett
– Maria Leptin
arguments
Collaboration between academic institutions and industry can lead to innovative solutions
Competition between public and private research can be beneficial
Both speakers express optimism about the integration of AI into science and society, emphasizing that it should be a controlled and gradual process that serves human needs rather than replacing humans.
speakers
– Max Tegmark
– Ramin Hasani
arguments
AI tools should be controllable and serve human needs rather than replace humans
The transition to AI-augmented science will be gradual and manageable
Unexpected Consensus
Optimism about the future role of scientists
speakers
– Maria Leptin
– Max Tegmark
– Ramin Hasani
– Richard Hatchett
arguments
Human emotional needs and curiosity will ensure scientists remain relevant
We are still the captains of our own ship
We’re going to be there for sure, but probably we are augmented as well a little bit
AI may help us speed that up
explanation
Despite concerns about AI potentially replacing human roles, all speakers expressed optimism about the continued relevance and importance of human scientists in the future, which is somewhat unexpected given the rapid advancements in AI.
Overall Assessment
Summary
The speakers generally agreed on the fundamental purpose of science, the potential of AI to accelerate scientific discovery, the importance of balancing public and private research, and the continued relevance of human scientists in the future.
Consensus level
There was a high level of consensus among the speakers, with shared optimism about the role of AI in science and the future of human scientists. This consensus suggests a positive outlook for the integration of AI in scientific research while maintaining the central role of human curiosity and creativity.
Differences
Different Viewpoints
The role of AI in scientific research
speakers
– Max Tegmark
– Ramin Hasani
arguments
AI can revolutionize the ability to prove mathematical theorems and verify code
AI tools can help solve complex mathematical and physical problems
summary
While both speakers agree on AI’s potential in scientific research, they emphasize different aspects. Tegmark focuses on AI’s ability to prove theorems and verify code, while Hasani highlights AI’s potential to solve complex problems in mathematics and physics.
Unexpected Differences
None identified
Overall Assessment
summary
The main areas of disagreement revolve around the specific roles and impacts of AI in scientific research and society.
Difference level
The level of disagreement among the speakers is relatively low. Most speakers share optimistic views about the potential of AI in science and society, with slight variations in their focus areas. This general alignment suggests a positive outlook for the future of AI in scientific research and its potential benefits for humanity.
Partial Agreements
Both speakers agree on the potential of AI to benefit humanity, but Tegmark emphasizes the need for controllable AI tools, while Hatchett focuses more on the potential speed of problem-solving without explicitly mentioning control measures.
speakers
– Max Tegmark
– Richard Hatchett
arguments
AI tools should be controllable and serve human needs rather than replace humans
AI may help solve major global challenges like pandemics more quickly
Takeaways
Key Takeaways
The purpose of science is to fulfill human curiosity and understand the world around us
AI has the potential to greatly accelerate scientific discovery across many fields
There are both benefits and challenges to scientific research being conducted in private companies vs public institutions
AI tools should augment and empower human scientists rather than replace them
Responsible development of AI with proper controls and safety standards is crucial
Resolutions and Action Items
None identified
Unresolved Issues
How to implement effective safety standards and controls for AI development
The long-term impact of AI on scientific jobs and the role of human scientists
How to balance public and private funding/control of transformative AI research
Suggested Compromises
Develop AI as controllable tools to augment human capabilities rather than as a replacement species
Implement safety standards for AI similar to those in other industries like pharmaceuticals
Foster collaboration between academic institutions and private companies on AI research
Thought Provoking Comments
The purpose of science is to fulfill human curiosity, to satisfy human curiosity, and to let us understand how the world around us works, whether it is the natural world, the physical world, the world of life, or the world of the mind, of the humanities and the arts.
speaker
Maria Leptin
reason
This concise definition of the purpose of science set the tone for the entire discussion and provided a framework for considering the role of AI in scientific research.
impact
It shifted the conversation from a narrow focus on AI applications to a broader consideration of how AI can serve humanity’s fundamental drive for knowledge and understanding.
The fundamental insight that enabled the rapid development of a number of successful COVID vaccines was the understanding of a fusion protein on the surface of the respiratory syncytial virus, and how that protein binds with human cells as the virus enters the cells to cause disease.
speaker
Richard Hatchett
reason
This example vividly illustrated the direct link between basic scientific research and real-world applications that save lives.
impact
It deepened the discussion by showing how fundamental research, often conducted years before a crisis, can be crucial in responding to urgent global challenges.
We started by figuring out a lot of these things about the outside world, and then gradually started wondering, okay, well, what are we? How do we work?
speaker
Max Tegmark
reason
This comment framed the development of AI as part of humanity’s broader journey of scientific discovery, connecting it to our historical quest for self-understanding.
impact
It broadened the perspective on AI development, encouraging participants to consider its place in the longer arc of scientific progress and human curiosity.
If we simply treat AI like any other industry and have basic safety standards, the problem solves itself, and I’ll finish by just very briefly saying how that works.
speaker
Max Tegmark
reason
This suggestion offered a practical approach to addressing concerns about AI safety and control, drawing parallels with existing regulatory frameworks.
impact
It shifted the discussion from abstract concerns about AI risks to concrete policy proposals, prompting consideration of how to implement effective AI governance.
I feel like along the way, we’re going to figure out how to get there. I think this change, this process has always been like that with any technology. I feel like we’re going to be able to get to that point where we will be comfortable living with superintelligence also amongst us.
speaker
Ramin Hasani
reason
This perspective offered a nuanced view of technological progress, emphasizing gradual adaptation rather than abrupt change.
impact
It introduced a more optimistic tone to the discussion about potential AI disruption, encouraging participants to consider how society might evolve alongside AI advancements.
Overall Assessment
These key comments shaped the discussion by broadening its scope from specific AI applications to the fundamental purpose of science and humanity’s relationship with technology. They encouraged a balanced consideration of AI’s potential benefits and risks, while also emphasizing the importance of thoughtful governance and gradual adaptation. The discussion moved from theoretical concepts to practical examples and policy considerations, ultimately fostering a nuanced and optimistic view of AI’s role in scientific progress and human society.
Follow-up Questions
How can we ensure AI remains a tool that humans control rather than becoming an uncontrollable new species?
speaker
Max Tegmark
explanation
This is crucial for maintaining human agency and avoiding potential existential risks from advanced AI.
How can we implement safety standards for AI development similar to those in other industries?
speaker
Max Tegmark
explanation
Establishing proper safety protocols is essential for responsible AI development and deployment.
How can AI accelerate progress in solving complex mathematical and physical problems?
speaker
Ramin Hasani
explanation
Leveraging AI for advanced problem-solving could lead to breakthroughs in energy, societal issues, and other critical areas.
How can AI enhance interdisciplinary collaboration and accelerate education?
speaker
Ramin Hasani
explanation
Exploring AI’s potential in these areas could significantly advance scientific progress and knowledge dissemination.
How can AI be used to prove the correctness and controllability of AI systems?
speaker
Max Tegmark
explanation
This is important for developing trustworthy and controllable AI tools, especially as systems become more advanced.
How will the scientific process and the role of scientists evolve as AI becomes more capable of generating hypotheses and analyzing data?
speaker
Michael Hengartner
explanation
Understanding this evolution is crucial for preparing the scientific community for future changes and challenges.
How can we balance the benefits of AI research conducted by private companies with the need for open access and public good?
speaker
Michael Hengartner and Maria Leptin
explanation
This balance is important for ensuring both rapid progress and equitable access to AI advancements.
Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.