The State of the model: What frontier AI means for AI Governance
10 Jul 2025 15:30h - 16:00h
Session at a glance
Summary
Daniela Rus, Director of MIT’s Computer Science and Artificial Intelligence Laboratory, delivered a comprehensive presentation on the transformative potential of artificial intelligence and the need for responsible AI stewardship. She began by expressing optimism about AI’s future impact, emphasizing that realizing its benefits depends on collaboration between technologists, business leaders, and policymakers to ensure AI serves the greater good. Rus illustrated AI’s current capabilities through examples from popular culture, showing how technologies like holographic generation, AI-designed robotic objects, and wearable assistants that translate sign language are already becoming reality.
She explained that today’s frontier AI models like GPT, Claude, and Gemini contain trillions of parameters and undergo energy-intensive training phases called pre-training and fine-tuning. These models are providing “superpowers” in areas such as speed, knowledge, insight, creativity, and empathy, with examples including productivity improvements of up to 300%, drug discovery accelerated from years to roughly 30 days, and AI-assisted artistic creation. However, Rus acknowledged significant technical challenges including data limitations, fragile robustness, and lack of interpretability in current AI systems.
To address these issues, she presented her team’s innovative solution: liquid networks inspired by the C. elegans worm’s 302-neuron brain structure. These networks use only 19 neurons compared to tens of thousands in traditional models, making them more energy-efficient, explainable, and adaptable while delivering superior performance. Rus warned about AI’s potential for misuse by “super villains” who could exploit these tools for scams, deepfakes, and manipulation, emphasizing that the greatest challenges ahead are social rather than purely technical. She concluded by advocating for AI stewardship that balances innovation with responsibility, noting that AI automates tasks rather than entire professions and will likely create new job categories while transforming existing roles.
Key points
**Major Discussion Points:**
– **AI’s transformative potential and current capabilities**: Rus discusses how AI is already enabling remarkable advances, from holographic generation to wearable assistants that can translate sign language in real time, demonstrating that science fiction concepts are becoming reality much sooner than expected.
– **Technical challenges and innovative solutions**: She addresses current AI limitations including massive energy consumption, lack of interpretability, and the need for server farms, then presents her research on “liquid networks” inspired by C. elegans worms as a solution for creating smaller, more efficient, and explainable AI models.
– **AI risks and the potential for misuse**: Rus explores how the same AI superpowers that benefit society can be exploited by “super villains” for harmful purposes, including deepfake scams, automated phishing, and sophisticated fraud schemes that manipulate trust and reality.
– **The future of work and AI’s impact on employment**: She argues that AI automates tasks rather than entire professions, drawing parallels to how nurse practitioners were created to address doctor shortages, and suggests AI will create new job categories while transforming existing roles.
– **The need for responsible AI stewardship**: Rus emphasizes the importance of balancing innovation with careful governance, proposing AI stewardship as a framework for ensuring AI development serves humanity while addressing short-term, medium-term, and long-term risks.
**Overall Purpose:**
The discussion aims to present a balanced perspective on AI’s future impact, advocating for optimistic yet responsible development. Rus seeks to demonstrate both the extraordinary potential of AI technology and the critical need for thoughtful stewardship to ensure these powerful tools benefit society while minimizing potential harms.
**Overall Tone:**
The tone begins highly optimistic and enthusiastic, with Rus using engaging pop culture references and exciting examples of AI capabilities. It gradually becomes more measured and cautionary as she discusses risks and challenges, but maintains a fundamentally hopeful outlook. The tone emphasizes empowerment and responsibility rather than fear, positioning the audience as active participants in shaping AI’s future rather than passive recipients of its effects.
Speakers
– **Daniela Rus**: Director of the Computer Science and Artificial Intelligence Laboratory at MIT, AI researcher and expert in artificial intelligence, robotics, and machine learning
– **Moderator**: Role as event moderator/host (no additional details provided about expertise or background)
**Additional speakers:**
– **His Excellency, Mr. Alar Karis**: President of the Republic of Estonia (mentioned at the end but did not speak during the transcript)
Full session report
# Event Report: Professor Daniela Rus on AI’s Transformative Potential and Responsible Development
## Executive Summary
This report summarizes a presentation by Professor Daniela Rus, Director of MIT’s Computer Science and Artificial Intelligence Laboratory, on artificial intelligence’s current capabilities, technical challenges, and the need for responsible AI development. The presentation was interrupted before completion, with His Excellency Mr. Alar Karis, President of the Republic of Estonia, introduced at the end but not speaking.
Professor Rus structured her presentation around three main themes: demonstrating AI’s current transformative capabilities, addressing significant technical challenges including energy consumption, and introducing her concept of AI stewardship. She presented both optimistic examples of AI’s potential and realistic assessments of risks, concluding with early thoughts on employment impacts before the presentation was interrupted.
## Current AI Capabilities and Applications
### Demonstrating Present Reality
Professor Rus began by showcasing AI capabilities that are already operational rather than speculative. She demonstrated holographic meeting technology, robotic animation from static images, and wearable AI assistants capable of real-time sign language detection and translation. These examples illustrated what she termed AI’s ability to provide “superpowers” for situation awareness and health applications.
She emphasized that AI foundation models currently excel in several key areas: dramatically increasing speed of various processes, providing curated access to vast knowledge libraries, detecting patterns that escape human observation, serving as collaborative tools for creativity, and developing capabilities to understand human emotional states.
### Quantifiable Impact Examples
Professor Rus provided specific examples of AI’s current impact:
**Productivity Improvements**: She cited productivity gains of up to 300% in certain applications, demonstrating measurable benefits already being realized.
**Drug Discovery Acceleration**: AI has compressed drug discovery timelines from years to approximately 30 days, representing a fundamental transformation in pharmaceutical research.
**Legal Research Enhancement**: AI systems can process and analyze legal documents at unprecedented speed and scale.
**Health Monitoring**: Wearable AI can detect diseases like Parkinson’s long before traditional medical approaches, enabling early intervention.
**Creative Collaboration**: Rather than replacing human creativity, AI serves as a collaborative tool that enables new forms of artistic expression while preserving human agency.
## Technical Challenges and Limitations
### Current System Problems
Despite impressive capabilities, Professor Rus identified three critical limitations in current AI systems:
**Data Requirements**: Current AI systems require vast amounts of training data—between 1 and 15 terabytes—creating bottlenecks in development and raising questions about data quality and availability.
**Fragile Robustness**: AI systems often perform excellently within their training parameters but fail unpredictably when encountering novel situations, limiting their reliability in critical applications.
**Lack of Interpretability**: Current AI systems operate as “black boxes,” making it difficult to understand how they reach conclusions, which creates challenges for accountability and trust.
### The Energy Consumption Crisis
Professor Rus highlighted a critical sustainability challenge: AI’s massive energy consumption. She presented projections indicating that AI models will consume 12% of total US power demands by 2030. This stems from the computational intensity of training and running large AI models with trillions of parameters, requiring extensive server farms.
The energy challenge represents more than a technical problem—it poses fundamental questions about the sustainability and scalability of current AI approaches, potentially offsetting many benefits these systems provide.
## Liquid Networks: A Novel Solution
### Biological Inspiration
In response to these challenges, Professor Rus presented her team’s innovative solution: liquid networks inspired by the C. elegans worm’s neural structure. This approach departs from the “bigger is better” paradigm dominating AI development.
The C. elegans worm, with only 302 neurons compared to billions in human brains, successfully navigates its environment, finds food, avoids dangers, and reproduces. Professor Rus’s team developed liquid networks that use only 19 neurons, compared to tens of thousands in traditional models, yet demonstrate superior performance.
### Advantages of Liquid Networks
These liquid networks offer several crucial benefits:
– **Energy Efficiency**: Dramatically reduced computational requirements make systems more sustainable
– **Explainability**: Simplified structure enables understanding of how systems reach conclusions
– **Local Deployment**: Unlike current systems requiring cloud infrastructure, liquid networks can run on phones and enterprise computers
– **Improved Performance**: Better results with significantly fewer computational resources
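The contrast behind these properties is between a traditional “on-off” artificial neuron and a liquid neuron whose dynamics adapt to its input. The snippet below is a minimal illustrative sketch, not Liquid AI’s actual implementation: it assumes a simplified liquid time-constant form, dx/dt = -x/tau + f(x, input) * (A - x), with an illustrative sigmoid gate f and arbitrary constants.

```python
import math

def step_neuron(weighted_input, threshold=0.0):
    """Traditional 'on-off' artificial neuron: a simple threshold unit."""
    return 1.0 if weighted_input > threshold else 0.0

def liquid_neuron_step(x, inp, dt=0.01, tau=1.0, A=1.0):
    """One Euler step of a simplified liquid time-constant neuron.

    dx/dt = -x/tau + f(x, inp) * (A - x)

    The input-dependent gate f changes the neuron's effective time
    constant, so its response speed adapts to the signal it sees.
    All constants here are illustrative.
    """
    f = 1.0 / (1.0 + math.exp(-(inp + x)))  # illustrative sigmoid gate
    dxdt = -x / tau + f * (A - x)
    return x + dt * dxdt

# Integrate the liquid neuron against a constant input: unlike the
# binary step neuron, its state settles smoothly to an input-dependent
# equilibrium strictly between 0 and A.
x = 0.0
for _ in range(2000):
    x = liquid_neuron_step(x, inp=2.0)
```

The point of the sketch is that each liquid neuron is a small dynamical system rather than a switch, which is why far fewer of them can carry a task and why their behavior is easier to inspect.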
Professor Rus mentioned that her team has spun out a company called Liquid AI to commercialize this technology.
## AI Risks and Potential Misuse
### The “Super Villains” Problem
Professor Rus introduced AI risks by emphasizing the dual-use nature of AI technology: “the same tools that help doctors diagnose diseases better and scientists accelerate discovery can also empower bad actors, I like to call them super villains, to cause real harm.”
### Specific Misuse Scenarios
She outlined several concerning applications by malicious actors:
**Automated Phishing**: AI can generate sophisticated, personalized phishing attempts more likely to succeed than traditional approaches.
**Deepfake Generation**: AI enables creation of convincing fake audio and video content for fraud, manipulation, or misinformation.
**CEO Impersonation Fraud**: Sophisticated scams using AI-generated voices to manipulate employees into transferring funds or revealing sensitive information.
**Manipulation of Trust**: AI’s ability to generate convincing content across multiple media types creates possibilities for large-scale manipulation and erosion of shared truth.
### Risk Categories
Professor Rus categorized AI risks across timeframes:
**Short-term**: Privacy breaches, immediate security vulnerabilities, and current misuse applications already manifesting.
**Medium-term**: Job displacement effects, erosion of professional expertise, and broader economic disruptions.
**Long-term**: Climate impacts from energy consumption, potential loss of human agency, and fundamental changes to society.
## AI Stewardship Framework
### Defining Stewardship
Professor Rus introduced AI stewardship as “our commitment to ensuring that AI evolves in ways that truly serve humanity and the planet.” She emphasized this framework rejects the false dichotomy between innovation and safety: “it’s not about slowing innovation, because innovation is very important. It’s about guiding it wisely.”
### Implementation Approach
The stewardship framework involves:
– Anticipating challenges proactively rather than merely reacting
– Minimizing harm through safeguards and oversight mechanisms
– Building resilient systems that are robust, accountable, and aligned with human values
– Maintaining human agency to ensure AI augments rather than replaces human decision-making
## Employment and Future of Work
### Task Automation vs. Job Replacement
Professor Rus reframed the AI employment debate by distinguishing between task-level and profession-level automation: “AI does not automate professions. AI automates tasks. In many professions, people do multiple tasks… And the tasks that can be automated are the data and the predictable physical work tasks.”
### The Nurse Practitioner Analogy
She illustrated how new roles can emerge from technological change by referencing nurse practitioner positions. When doctor shortages became apparent, the healthcare system created new roles combining nursing expertise with expanded responsibilities, rather than simply training more doctors.
### Demographic Considerations
Professor Rus introduced a counternarrative to job displacement fears by highlighting demographic trends: “We will run out of workers before we run out of jobs… the world population growth is slowest since the Industrial Revolution.” This suggests labor scarcity, rather than job scarcity, may be the primary challenge ahead.
## Presentation Interruption and Conclusion
The presentation was interrupted while Professor Rus was discussing employment implications of AI. At the conclusion, His Excellency Mr. Alar Karis, President of the Republic of Estonia, was introduced to the audience but did not speak.
Professor Rus’s presentation provided a balanced examination of AI’s current state and future trajectory, combining technical expertise with social awareness. Her emphasis on stewardship rather than control, task automation rather than wholesale job replacement, and guided innovation rather than unrestricted development offered a constructive framework for navigating AI’s transformative impact on society.
The discussion’s strength lay in its integration of concrete examples with realistic assessment of challenges, positioning stakeholders to make informed decisions about AI development while working toward outcomes that benefit humanity.
Session transcript
Daniela Rus: Good afternoon, everyone. In my role as the Director of the Computer Science and Artificial Intelligence Laboratory at MIT, I am often asked to talk about the impact of artificial intelligence. What will rapid advances in this technology mean for our lives, for our jobs, for our futures? Well, I believe that AI will make tremendous differences in our lives. AI will change the world in many ways, some of which we have only begun to imagine. And I’m very optimistic about what the future holds. But that depends on all of us. That depends on the technologists, the business leaders, the policymakers. It depends on all of us to come up with the right frameworks, the right ideas to make sure that the development of these ideas and the deployment of AI products serve the greater good. And so that’s what I want to talk to you about today. Why I’m optimistic, how AI can help, and what kind of perils we need to account for. So let me begin with the good news, because there is a lot to talk about. So when I think about AI, I think about my favorite Star Trek characters. And remember when Spock shows up as a hologram and the year is around 2300? Well, so here’s the thing. You don’t have to wait for 2300, because we already have AI tools that support rapid holographic generation. Just imagine hybrid meetings where your colleagues from remote locations appear as active holograms and they can interact with you more naturally. And do you remember when Mickey Mouse summons the broomstick in the Sorcerer’s Apprentice? Well, you don’t need magic to make that happen, because we have AI. In fact, research in AI for design allows us to turn pictures into animated robotic objects like the one you see here, which was generated automatically from an image. And these physical AI systems can now be guided intuitively using AI and on-body sensors that monitor muscle activity, just like Mickey did.
And research in human-machine interaction is enabling machines to adapt to people rather than the other way around, which is what you see here where our robot is enabling a worker to install a cable. So the wearables that we’re beginning to develop are beginning to look like the clothes that we wear. And so, for instance, this is a glove that is running an AI algorithm to detect and translate gestures associated with sign language, and it’s doing this in real time. So just think about this technology as not just part of our environment, but as something that we carry with us. So what if we could all have our own wearable assistants, like Iron Man? But the superpowers that come with these assistants would be focused on improving our situation awareness, our health, and our everyday lives. And what if these technologies and these machines could help us transform into a safer society living on a healthier planet? Well, this is just a snapshot of the frontier AI-enabled future I imagine. And today’s frontier models, like GPT, Claude, and Gemini, are huge models that consist of on the order of trillions of parameters. And these models are built in two very energy-intensive phases called pre-training and fine-tuning. And the goal of pre-training is to have the model learn the knowledge that is embedded in the data. And the more knowledge we provide to the model, the better the model. And so large language models are typically trained on between 1 and 15 terabytes of data, and they get trained to learn the weights that go in an architecture called the transformer architecture so that they can generate sentences. For example, after you train the model on all the internet data, you might ask it to complete the sentence, the UN was founded in, and expect the model to say 1945. So that’s the first stage. The second phase, fine-tuning, uses specialized data to adapt the model to be useful for downstream applications.
So for example, you can fine-tune the model on all UN data, and then you can ask it to summarize the UN charter. So AI foundation models that are built in this very energy-intensive phase are transforming and reshaping our understanding of what is possible. They’re giving us superpowers. They are empowering us with speed, with knowledge, insight, creativity, foresight, mastery, empathy, and these superpowers are already being deployed in the world. So let me give you just a few examples. I know many of you have seen the movie Limitless, and I love this movie. It features a failing novelist who ingests a wonder drug, and this wonder drug amplifies his brain power and allows him to create a masterpiece in just four days. Well, today we don’t have these magic drugs, but with AI, we get the equivalent superpower, because in the real world, we have AI that gives us speed, and in this chart, you see some examples where using AI is improving productivity by up to 300%. AI is also accelerating drug discovery in powerful ways, and so, for instance, researchers used AI to identify protein targets for a deadly cancer by analyzing their structure with AlphaFold and by generating potential drug candidates, and this process took about 30 days, whereas prior to AI, it would take years to generate candidates for a protein associated with a particularly nasty disease. AI can also help people with knowledge by providing curated access to entire libraries of information, so, for example, lawyers can retrieve the most relevant cases from a library, most relevant to their current case, without having to read every single page in that library. Insight goes beyond knowledge.
It connects patterns to meaning and then meaning to action, and my MIT colleague, Dina Katabi, has been working on using AI to understand sleep, and by analyzing breathing patterns and movement via ambient radio signals, her system can detect diseases like Parkinson’s disease long before they are detected by traditional medicine, and this can allow us to turn health care from a reactive discipline into more proactive care. The last example I wanted to show you is on creativity, and I’m sharing here artist Refik Anadol’s Unsupervised exhibit at the Museum of Modern Art in New York. He uses AI as a collaborator, not as a replacement, and so he trained a model on 200,000 object exhibits from the MoMA collection, and then he connected the model also with live information that captured weather data and movement in the museum, and the result is this beautiful, mesmerizing piece of art that we can see here, in which data is really like the pigment and the algorithm used to generate it is like the paintbrush. And the way he describes it is that the future really belongs to these new tools that can empower creators to do more. OK, so a lot of great things we can do here, but AI systems are not perfect. And all of this is unfolding in the face of persistent technical challenges. From data limitations and escalating model sizes to very fragile robustness and a lack of interpretability, today’s AI systems are as powerful as they are imperfect. And so as we scale their impact, it’s very important that we continue to think about how the systems work and what we can do to make them better. And so here’s my main comment about where we go with AI governance and stewardship. It’s important to support innovation because it is key toward progress. Now, some of the technical challenges already have promising technical solutions. And we have to continue to innovate and to build on new ideas to develop solutions for foundational AI but also for the applications of AI.
And I want to give you an example of a solution designed to address a core challenge, the sheer size and computational demands of today’s models. So we’d like to have our AI solutions run on our phones, on our enterprise computers, on sensors, on small devices. And right now, we cannot. The AI that you run on your smartphone actually requires calls to the cloud. And those are very expensive. And you really need to be in a region that is well covered. So we cannot use today’s AI solutions because they require server farms. And these AI solutions today have a huge energy cost. By 2030, AI is projected to use 12% of the US total power demands due to scaling. And an interesting observation, recent observation, is that pure scaling is beginning to show diminishing returns in performance. So how can we innovate? Well, one idea is to use inspiration from nature. And in our group, we are using inspiration from a worm called C. elegans to answer, how do we make better AI models? Now, in stark contrast to the billions of neurons in the human brain, the worm lives very happily on only 302 neurons. And biologists understand exactly how these neurons work. And so the idea is to use the math of the worm instead of traditional math to build a new class of AI we’re going to call liquid networks, which are small, energy efficient, and explainable as compared to today’s solutions. And so let me show you an example of how to think about today’s solutions versus more efficient solutions. So here is our self-driving car. It was trained using a traditional model to drive by looking at how people steer and accelerate based on what the world looks like around them. And here’s the dashboard of the solution. In the lower right corner, you see the map. In the upper left corner, the camera input stream. And this big box with blinking lights is the AI decision-making engine that tells the car how to steer and how to accelerate.
And there are tens of thousands of artificial neurons in this decision-making engine. And it’s impossible to correlate how they fire with what the car does. And if you look at the lower left corner, you’ll see the attention map. This means where in the image this decision-making solution looks in order to tell the car what to do. And do you see how noisy it is? And also how this solution primarily looks at the bushes and the trees on the side of the road, which is not how people drive. So contrast that now to our worm-inspired liquid network for self-driving cars. Now we have only 19 neurons rather than tens of thousands. And that means we can explain how the solution works. And we also see that the attention map is focused on the road horizon and the sides of the road at the horizon, which is really how we drive. And so because these solutions are smaller, we can understand how they make decisions. And these solutions use much less energy. And so the question is, how is this performance possible? And so here’s the professor in me. I just want to take 15 seconds to explain to you that in most traditional AI solutions, the artificial neuron is a very simple on-off computation. And as compared to that, this is what happens in the liquid neuron. You don’t have to worry about the math. The point is that this math inspired by the brains in nature, by the brain of C. elegans, is more complex than on and off. And because of this math, we can get models that are compact, that are explainable, and that really understand the task, and moreover, that can adapt after training based on the inputs that they see. So this is an example of how investing in innovation can get us to better, safer AI. Here is an example of how these networks adapt. This is a video that shows training data for the task of finding things in the woods. And so we train all kinds of models to find things in the woods during the summer, and all the models learn the task. 
And now we try the models that were learned during the summer in the fall, and you see that the traditional solution gets confused by the background and cannot work. But the liquid solution is actually focused on the task rather than the context of the task, and it’s giving us reliable behavior. So the step forward is with investments in innovation. We have a new approach, a new idea for AI that gets machines to be adaptive to the tasks that they perform. And these solutions can actually be applied to any domain that has time series data, and they deliver huge improvements over traditional solutions. For example, 123% over transformers in physical real-world modeling. I know you can’t read the table, but I just wanted to show you that we tested it on a lot of different things. And based on these new ideas, we started a company called Liquid AI, which brings to the world these energy-efficient models. And these Liquid AI models really compare very favorably against existing models. They have much better performance. They continue to improve as we continue to develop them. They have lower memory needs, which means that they can be deployed on your phones and on your enterprise computers. And they are also much faster at both understanding the query and generating the response. These are complicated graphs, but basically, the liquid solution is the graph at the top. And as compared to many existing solutions, you can see that it performs much better. So with that, I want to show you that we can build more efficient and adaptable models without sacrificing performance. And this is an important step as AI scales across sectors. And the potential benefits are extraordinary. But with this power also comes the need for careful stewardship. And this is because AI’s potential to reshape industries and improve lives is also a capacity that can be misused. The same tools.
that help doctors diagnose diseases better and scientists accelerate discovery can also empower bad actors, I like to call them super villains, to cause real harm. This is an example where someone tricked a chatbot to sell an expensive car for $1. A super villain armed with generative AI no longer needs a team of hackers or spammers because with automated phishing tools, they can generate thousands of personalized scam messages per minute. A super villain wielding AI’s knowledge superpower can manipulate reality itself. And for example, in Operation Overload, free AI tools were used to generate deepfake videos, fake news slides, and synthetic personas, each crafted to exploit psychological triggers and amplify division, which is a bad thing. AI’s insight superpower allows super villains to exploit not just systems, but the very people and the trust that comes with the systems. And so in one case, scammers used AI-generated videos to impersonate a company’s CEO during a live call. And this impersonated CEO convinced an employee to transfer millions of dollars. And so this is really problematic. And it shows that the hardest problems ahead are not just technical, they’re also social. AI is reshaping our economies, it’s testing our democracies, it’s challenging our norms around trust, privacy, and the truth itself. But here’s the thing, unlike a sudden crisis like the pandemic, we know that these are issues. And so the researchers have a chance to think about them and respond with foresight and responsibility. But the challenge we face is unlike any other challenge before. AI is being rapidly integrated across sectors. It’s evolving continuously. And it’s reshaping multiple domains all at once. Its pace outstrips our ability to build effective governance. And this really demands new ideas for coordination and collective responsibility. And so solving the size problem, which liquid networks do, is an important step. It is one piece of the puzzle.
To truly understand AI risk, we need to look beyond technical performance and ask how these systems impact individuals. How they impact businesses, how they impact societies, and how do we grapple with issues like trust, fairness, misuse, and power, and recognize the critical role that AI literacy has to play in solving these problems. So in my view, to navigate AI responsibly, we must understand the risks, not just in the present, but also across time. And so short-term risks like privacy breaches, misinformation, and cyber attacks are already here. And they demand immediate attention. Medium-term risks have more uncertainty. And so there’s a question about what will happen to jobs. There’s a question around erosion of expertise. And so we need to think about that. Long-term risks include big climate impacts, as well as loss of human agency. And so what can we do about these things? Well, managing AI risk isn’t about eliminating all this uncertainty. It’s about anticipating challenges. It’s about minimizing harm. And it’s about building systems that are resilient, accountable, and aligned with human values. And so in this sense, I would like to propose AI stewardship as our commitment to ensuring that AI evolves in ways that truly serve humanity and the planet. And so it’s not about slowing innovation, because innovation is very important. It’s about guiding it wisely. And we can guide it by addressing the technical challenges, by maximizing positive impacts, by minimizing potential harm, by aligning development, and by ensuring ethical deployments. Effective AI stewardship requires us to consider multiple dimensions. We must protect privacy while supporting innovation. We must ensure efficiency and provide transparency. And we must ensure fairness while doing all that. And so each of these goals pulls in a different direction. And the challenge is to kind of balance everything together. Stewardship isn’t just about laws and standards. Stewardship is also about people.
And one of the most important questions, and one of the personal questions I always get when I talk about AI is, will AI steal my job? Will AI replace me? Or will AI empower me? So let's take a closer look. AI does not automate professions. AI automates tasks. In many professions, people do multiple tasks. They do data tasks, they do predictable work, but they also apply expertise, they manage others, they do unpredictable physical work. And the tasks that can be automated are the data tasks and the predictable physical work tasks. So we can begin to imagine AI as our personal assistant, to whom we can offload some of our routine tasks. Whether AI commodifies or complements expertise really depends on how we choose to use it. An interesting case I would like to highlight for you is the case of nurse practitioners. Nurse practitioners were created when it became clear in the United States that we didn't have enough doctors to support the population. With nurse practitioners, we created a new job category in which nurses were empowered to handle simple, routine healthcare issues, and they could do so without having to go through the rigor of an MD education. And so I like to imagine a future where AI could be deployed in a similar regime, in which AI takes care of clear, well-delineated aspects of our everyday lives. As you think about jobs, an important point I want to highlight is that every new technology creates new jobs. In the year 2000, at the time of the dot-com boom, nobody was imagining the new types of jobs that would be created once we introduced the smartphone, cloud computing, and social media to the world. So I believe that AI will create a wide range of jobs, and we are not even imagining what those jobs might be. The other point I want to make is that world population growth is the slowest it has been since the Industrial Revolution. And so we will run out of workers before we run out of jobs. 
This shows population growth in the United States. So the rise of AI isn't eliminating roles, it's transforming them, it's creating hybrid skill sets, it's enabling new roles, and the roles are evolving.
Moderator: I'm ever so sorry to interrupt, but I need to make a very important announcement. Ladies and gentlemen, that was Daniela Rus. And ladies and gentlemen and excellencies, we are delighted to welcome His Excellency, Mr. Alar Karis, President of the Republic of Estonia. Thank you.
Daniela Rus
Speech speed
133 words per minute
Speech length
3378 words
Speech time
1520 seconds
AI will make tremendous differences in our lives and change the world in ways we have only begun to imagine
Explanation
Daniela Rus expresses strong optimism about AI’s transformative potential, arguing that artificial intelligence will fundamentally reshape human existence. She emphasizes that while we can anticipate some changes, many of AI’s impacts are still beyond our current imagination.
Evidence
References to science fiction examples like Star Trek holograms and Mickey Mouse’s Sorcerer’s Apprentice that are now becoming reality through AI
Major discussion point
AI’s Transformative Potential and Current Capabilities
Topics
Economic | Development | Sociocultural
AI already enables holographic generation, robotic animation from images, and intuitive human-machine interaction
Explanation
Rus demonstrates that AI capabilities once relegated to science fiction are now reality. She shows how AI can create holograms for meetings, animate robotic objects from pictures, and enable natural human-machine collaboration.
Evidence
Examples include holographic meeting participants, automated robotic object generation from images, and robots that adapt to help workers install cables using on-body sensors
Major discussion point
AI’s Transformative Potential and Current Capabilities
Topics
Infrastructure | Economic | Development
Wearable AI assistants can detect and translate sign language in real time and could provide superpowers for situation awareness and health
Explanation
Rus envisions AI-powered wearables that enhance human capabilities, particularly for accessibility and health monitoring. She suggests these devices could function like personal assistants, providing enhanced awareness and health insights.
Evidence
Demonstration of a glove running AI algorithms to detect and translate sign language gestures in real time, comparison to Iron Man’s assistive technology
Major discussion point
AI’s Transformative Potential and Current Capabilities
Topics
Human rights | Development | Infrastructure
AI foundation models give us superpowers including speed, knowledge, insight, creativity, and empathy that are already being deployed
Explanation
Rus argues that large language models trained on massive datasets provide humans with enhanced capabilities across multiple domains. These models, built through energy-intensive pre-training and fine-tuning phases, are already transforming what humans can accomplish.
Evidence
Description of models like GPT, Claude, and Gemini with trillions of parameters trained on 1-15 terabytes of data using transformer architecture
Major discussion point
AI’s Transformative Potential and Current Capabilities
Topics
Economic | Development | Sociocultural
AI improves productivity by up to 300% and accelerates drug discovery from years to 30 days
Explanation
Rus presents concrete evidence of AI’s impact on work efficiency and scientific research. She demonstrates how AI is already delivering measurable improvements in productivity and dramatically reducing timeframes for critical research like drug development.
Evidence
Chart showing productivity improvements up to 300%, example of researchers using AI with AlphaFold to identify protein targets for deadly cancer in 30 days versus years without AI
Major discussion point
AI’s Impact on Productivity and Innovation
Topics
Economic | Development
AI provides curated access to entire libraries of information and can detect diseases like Parkinson’s long before traditional medicine
Explanation
Rus illustrates how AI transforms information access and medical diagnosis by providing intelligent filtering of vast data sets and detecting subtle patterns invisible to traditional methods. This represents a shift from reactive to proactive healthcare.
Evidence
Example of lawyers retrieving relevant cases without reading entire libraries, MIT colleague Dina Katabi’s work using ambient radio signals to analyze breathing patterns and detect Parkinson’s disease before traditional diagnosis
Major discussion point
AI’s Impact on Productivity and Innovation
Topics
Economic | Development | Human rights
AI serves as a collaborator for artists, not a replacement, enabling new forms of creative expression
Explanation
Rus emphasizes AI’s role as a creative partner rather than a substitute for human creativity. She presents AI as a new tool that expands artistic possibilities while maintaining human agency in the creative process.
Evidence
Artist Refik Anadol’s ‘Unsupervised’ exhibit at MoMA, where AI trained on 200,000 museum objects creates dynamic art responding to weather and movement data, with data as pigment and algorithms as paintbrush
Major discussion point
AI’s Impact on Productivity and Innovation
Topics
Sociocultural | Economic
Current AI systems face persistent challenges including data limitations, fragile robustness, and lack of interpretability
Explanation
Rus acknowledges significant technical limitations in current AI systems that must be addressed as AI scales. She emphasizes that despite AI’s power, these systems remain imperfect and require continued innovation to overcome fundamental weaknesses.
Evidence
Discussion of escalating model sizes, fragile robustness issues, and lack of interpretability in current AI systems
Major discussion point
Technical Challenges and Solutions in AI Development
Topics
Infrastructure | Legal and regulatory
AI models require massive computational resources and by 2030 will use 12% of US total power demands
Explanation
Rus highlights the unsustainable energy consumption of current AI systems and their dependence on server farms. She points out that this limits AI deployment to cloud-based solutions and creates significant environmental concerns.
Evidence
Projection that AI will consume 12% of US total power by 2030, observation that pure scaling shows diminishing returns, current AI requiring server farms rather than running on personal devices
Major discussion point
Technical Challenges and Solutions in AI Development
Topics
Infrastructure | Development | Economic
Liquid networks inspired by the C. elegans worm offer a solution with only 19 neurons versus tens of thousands, providing explainable and energy-efficient AI
Explanation
Rus presents her research team’s bio-inspired approach to AI that dramatically reduces computational requirements while improving explainability. By mimicking the neural structure of simple organisms, they achieve better performance with far fewer resources.
Evidence
Comparison of self-driving car systems: traditional model with tens of thousands of neurons showing noisy attention maps focused on bushes versus liquid network with 19 neurons focused on road horizon like human drivers
Major discussion point
Technical Challenges and Solutions in AI Development
Topics
Infrastructure | Development | Economic
Liquid AI models demonstrate superior performance with lower memory needs and faster processing while being deployable on phones and enterprise computers
Explanation
Rus showcases the practical advantages of her liquid network approach, demonstrating that more efficient AI doesn’t require performance sacrifices. These models can run on personal devices rather than requiring cloud infrastructure.
Evidence
Performance data showing 123% improvement over transformers in physical real-world modeling, graphs demonstrating superior performance across multiple metrics, ability to deploy on phones and enterprise computers
Major discussion point
Technical Challenges and Solutions in AI Development
Topics
Infrastructure | Economic | Development
AI’s power to reshape industries and improve lives also creates capacity for misuse by bad actors or ‘super villains’
Explanation
Rus warns that the same AI capabilities that benefit society can be weaponized by malicious actors. She emphasizes that AI’s transformative power is inherently dual-use, requiring careful consideration of potential misuse scenarios.
Evidence
Example of someone tricking a chatbot to sell an expensive car for $1, demonstrating how AI systems can be manipulated
Major discussion point
AI Risks and Misuse Potential
Topics
Cybersecurity | Legal and regulatory | Human rights
AI enables automated phishing, deepfake generation, and sophisticated scams including CEO impersonation for financial fraud
Explanation
Rus details specific ways AI amplifies criminal capabilities, allowing individual bad actors to achieve what previously required teams of specialists. She shows how AI’s generative capabilities can be used to create convincing deceptions at scale.
Evidence
Automated phishing tools generating thousands of personalized scam messages per minute, Operation Overload using free AI tools for deepfakes and synthetic personas, scammers using AI-generated videos to impersonate CEOs and steal millions
Major discussion point
AI Risks and Misuse Potential
Topics
Cybersecurity | Economic | Human rights
AI risks span short-term issues like privacy breaches to long-term concerns about climate impact and loss of human agency
Explanation
Rus presents a comprehensive risk framework that categorizes AI challenges across different time horizons. She emphasizes that AI risks are not just immediate technical problems but include fundamental questions about human autonomy and environmental sustainability.
Evidence
Categorization of risks: short-term (privacy breaches, misinformation, cyber attacks), medium-term (job displacement, erosion of expertise), long-term (climate impacts, loss of human agency)
Major discussion point
AI Risks and Misuse Potential
Topics
Human rights | Development | Cybersecurity
AI stewardship requires ensuring AI evolves to serve humanity while balancing innovation with responsible guidance
Explanation
Rus proposes a framework for responsible AI development that doesn’t slow innovation but guides it wisely. She emphasizes that stewardship involves proactive management of AI’s evolution to align with human values and planetary wellbeing.
Evidence
Definition of stewardship as addressing technical challenges, maximizing positive impacts, minimizing harm, aligning development, and ensuring ethical deployment
Major discussion point
AI Stewardship and Governance Framework
Topics
Legal and regulatory | Human rights | Development
Effective stewardship must protect privacy while supporting innovation, ensure efficiency with transparency, and maintain fairness
Explanation
Rus identifies the complex balancing act required in AI governance, where multiple important goals pull in different directions. She emphasizes that effective stewardship requires managing these tensions rather than choosing one priority over others.
Evidence
Discussion of multiple competing dimensions that must be balanced simultaneously in AI stewardship
Major discussion point
AI Stewardship and Governance Framework
Topics
Human rights | Legal and regulatory | Economic
Managing AI risk involves anticipating challenges, minimizing harm, and building resilient systems aligned with human values
Explanation
Rus argues that AI risk management is not about eliminating uncertainty but about building adaptive capacity to handle challenges. She emphasizes the importance of proactive rather than reactive approaches to AI governance.
Evidence
Framework emphasizing anticipation, harm reduction, and value alignment rather than uncertainty elimination
Major discussion point
AI Stewardship and Governance Framework
Topics
Legal and regulatory | Human rights | Infrastructure
AI automates tasks rather than entire professions, potentially serving as personal assistants for routine work
Explanation
Rus reframes the job displacement debate by distinguishing between task automation and profession elimination. She suggests that AI will primarily handle routine tasks while humans continue to apply expertise, manage others, and handle unpredictable work.
Evidence
Analysis of profession components: data tasks, predictable work, expertise application, management, unpredictable physical work; nurse practitioner example showing how new roles can emerge to address workforce shortages
Major discussion point
AI’s Impact on Employment and Job Market
Topics
Economic | Development | Sociocultural
Every new technology creates new jobs, and AI will likely generate roles we cannot yet imagine
Explanation
Rus draws on historical precedent to argue that technological disruption typically creates more jobs than it eliminates. She suggests that AI will follow this pattern, generating entirely new categories of work that don’t exist today.
Evidence
Historical example of jobs created after the dot-com boom that nobody imagined in 2000, including roles enabled by smartphones, cloud computing, and social media
Major discussion point
AI’s Impact on Employment and Job Market
Topics
Economic | Development | Sociocultural
World population growth is at its slowest since the Industrial Revolution, meaning we will run out of workers before we run out of jobs
Explanation
Rus presents demographic data to argue that labor scarcity, not job scarcity, will be the primary employment challenge. She suggests that AI-driven productivity gains will be necessary to address workforce shortages rather than creating unemployment.
Evidence
Population growth data for the United States showing slowest growth since the Industrial Revolution
Major discussion point
AI’s Impact on Employment and Job Market
Topics
Economic | Development | Sociocultural
Moderator
Speech speed
142 words per minute
Speech length
31 words
Speech time
13 seconds
Introduction of His Excellency, Mr. Alar Karis, President of the Republic of Estonia
Explanation
The moderator interrupts the presentation to make an important protocol announcement welcoming a high-ranking government official. This represents standard diplomatic protocol for introducing heads of state at international events.
Evidence
Formal announcement welcoming the President of Estonia with appropriate diplomatic titles and courtesies
Major discussion point
Event Management and Protocol
Topics
Legal and regulatory
Agreements
Agreement points
Similar viewpoints
Unexpected consensus
Overall assessment
Summary
The transcript contains only one substantive speaker (Daniela Rus) presenting her views on AI, with minimal input from a moderator who only made a protocol announcement. Therefore, there are no areas of agreement, disagreement, or consensus to analyze between multiple speakers.
Consensus level
Not applicable – insufficient speakers for consensus analysis. The presentation represents a single expert’s comprehensive perspective on AI’s potential, challenges, and governance needs rather than a multi-stakeholder discussion or debate.
Differences
Different viewpoints
Unexpected differences
Overall assessment
Summary
No disagreements identified as this transcript contains primarily a single speaker (Daniela Rus) presenting her comprehensive views on AI without substantive counterarguments or alternative perspectives from other speakers
Disagreement level
No disagreement present – this is a monologue presentation rather than a debate or discussion with multiple viewpoints. The only other speaker is a moderator making a brief protocol announcement that does not engage with the AI topic substantively
Partial agreements
Similar viewpoints
Takeaways
Key takeaways
AI will fundamentally transform society and has tremendous potential to improve lives through applications in healthcare, productivity, creativity, and human-machine interaction
Current AI systems face significant technical challenges including massive energy consumption (projected 12% of US power by 2030), lack of interpretability, and requirement for server farms rather than local deployment
Liquid networks inspired by the C. elegans worm offer a promising solution with dramatically reduced computational requirements (19 neurons vs. tens of thousands) while maintaining superior performance and explainability
AI poses dual-use risks where the same tools that benefit society can be weaponized by bad actors for fraud, misinformation, and manipulation
AI stewardship requires balancing innovation with responsible governance, addressing technical challenges while ensuring systems serve humanity’s best interests
AI will transform rather than eliminate jobs by automating specific tasks rather than entire professions, with new job categories likely to emerge
Demographic trends show slowing population growth means society will face worker shortages before job shortages, making AI augmentation beneficial rather than threatening
Resolutions and action items
Continue investing in innovation to develop better, safer AI solutions like liquid networks
Develop AI stewardship frameworks that protect privacy while supporting innovation, ensure efficiency with transparency, and maintain fairness
Focus on building AI systems that are resilient, accountable, and aligned with human values
Address AI literacy as a critical component for solving AI-related societal problems
Deploy AI as personal assistants to handle routine tasks while humans focus on expertise, management, and unpredictable work
Unresolved issues
How to effectively coordinate governance across rapidly evolving AI applications in multiple sectors simultaneously
Specific mechanisms for balancing the competing demands of privacy, innovation, efficiency, transparency, and fairness in AI development
Long-term risks including climate impacts and potential loss of human agency remain uncertain and unaddressed
The timeline and specific nature of new jobs that AI will create remains unclear
How to prevent AI misuse by bad actors while maintaining innovation and accessibility
Medium-term uncertainties around job displacement and erosion of expertise need further consideration
Suggested compromises
Use AI as a collaborative tool rather than replacement (similar to nurse practitioners model where AI handles routine tasks while humans manage complex issues)
Deploy liquid networks and similar efficient AI solutions that can run locally on devices rather than requiring energy-intensive server farms
Implement AI stewardship that guides rather than slows innovation, focusing on responsible development rather than restrictive regulation
Treat AI as personal assistants for routine work while preserving human roles in expertise, creativity, and complex decision-making
Thought provoking comments
AI does not automate professions. AI automates tasks. In many professions, people do multiple tasks… And the tasks that can be automated are the data and the predictable physical work tasks.
Speaker
Daniela Rus
Reason
This comment provides a crucial reframing of the AI job displacement debate by distinguishing between task-level and profession-level automation. It challenges the common binary thinking of ‘AI will replace jobs vs. AI won’t replace jobs’ and introduces a more nuanced understanding of how AI integration actually works in practice.
Impact
This insight shifted the discussion from fear-based concerns about job replacement to a more constructive framework for understanding AI as a complementary tool. It provided the foundation for her subsequent discussion about AI as personal assistants and the nurse practitioner analogy, fundamentally changing how the audience might think about their own job security.
The same tools that help doctors diagnose diseases better and scientists accelerate discovery can also empower bad actors, I like to call them super villains, to cause real harm.
Speaker
Daniela Rus
Reason
This comment introduces the concept of ‘dual-use’ technology in an accessible way, acknowledging that AI’s power is inherently neutral and can be wielded for both beneficial and harmful purposes. The use of ‘super villains’ makes complex cybersecurity and misuse concepts more relatable while maintaining the gravity of the threat.
Impact
This marked a critical turning point in the presentation, transitioning from the optimistic ‘superpowers’ narrative to a balanced discussion of risks. It established the framework for discussing AI governance and stewardship, showing that the same capabilities that make AI powerful for good also make it dangerous in the wrong hands.
In stark contrast to the billions of neurons in the human brain, the worm lives very happily on only 302 neurons… the idea is to use the math of the worm instead of traditional math to build a new class of AI we’re going to call liquid networks.
Speaker
Daniela Rus
Reason
This comment challenges the prevailing ‘bigger is better’ paradigm in AI development by proposing biomimetic solutions inspired by simple organisms. It’s counterintuitive because it suggests that studying one of the simplest nervous systems could solve some of AI’s most complex problems around efficiency and explainability.
Impact
This insight introduced a concrete technical solution to the scalability and energy problems she had outlined, demonstrating that innovation rather than just regulation could address AI’s challenges. It provided a tangible example of how research can lead to more sustainable AI, shifting the discussion from abstract problems to concrete solutions.
AI stewardship as our commitment to ensuring that AI evolves in ways that truly serve humanity and the planet… It’s not about slowing innovation, because innovation is very important. It’s about guiding it wisely.
Speaker
Daniela Rus
Reason
This comment reframes the AI governance debate by rejecting the false dichotomy between innovation and safety. It introduces ‘stewardship’ as a more nuanced approach than either unchecked development or restrictive regulation, emphasizing guidance rather than control.
Impact
This concept provided a unifying framework for the entire latter portion of her talk, showing how technical innovation (like liquid networks), risk management, and ethical deployment could work together. It moved the discussion beyond polarized positions to a more collaborative approach to AI development.
We will run out of workers before we run out of jobs… the world population growth is slowest since the Industrial Revolution.
Speaker
Daniela Rus
Reason
This demographic insight challenges the widespread anxiety about AI-driven unemployment by introducing a completely different perspective based on population trends. It’s thought-provoking because it suggests that labor scarcity, not job scarcity, may be the real challenge of the coming decades.
Impact
This comment provided a surprising counternarrative to job displacement fears, potentially reshaping how the audience thinks about AI’s role in the future economy. It suggested that AI might be necessary to address labor shortages rather than being a threat to employment, fundamentally altering the risk-benefit calculation around AI adoption.
Overall assessment
These key comments shaped the discussion by creating a sophisticated, multi-dimensional framework for understanding AI’s impact. Rus systematically moved the audience from wonder (AI superpowers) to concern (dual-use risks) to hope (technical solutions and stewardship). Her most impactful insights challenged binary thinking – showing that AI doesn’t simply replace jobs but transforms tasks, that governance isn’t about stopping innovation but guiding it, and that demographic trends may make AI adoption necessary rather than optional. The progression of these ideas created a narrative arc that acknowledged both AI’s transformative potential and its risks while providing concrete pathways forward through technical innovation and thoughtful stewardship.
Follow-up questions
How do we grapple with issues like trust, fairness, misuse, and power in AI systems?
Speaker
Daniela Rus
Explanation
This represents a fundamental challenge in AI governance that requires deeper investigation beyond technical performance to understand societal impacts
What will happen to jobs as AI continues to evolve?
Speaker
Daniela Rus
Explanation
This is identified as a medium-term risk with uncertainty that needs ongoing research as AI transforms rather than eliminates roles
How do we address the erosion of expertise as AI becomes more prevalent?
Speaker
Daniela Rus
Explanation
This is mentioned as a medium-term risk that requires further study to understand how AI impacts professional expertise and knowledge
What are the long-term climate impacts of AI scaling?
Speaker
Daniela Rus
Explanation
Given that AI is projected to use 12% of US total power demands by 2030, the long-term environmental consequences need further research
How do we prevent loss of human agency as AI systems become more powerful?
Speaker
Daniela Rus
Explanation
This is identified as a long-term risk that requires research into maintaining human control and decision-making in an AI-driven world
What new types of jobs will AI create that we haven’t yet imagined?
Speaker
Daniela Rus
Explanation
While acknowledging that new technologies create new jobs, the specific nature of AI-generated employment opportunities requires further exploration
How do we balance protecting privacy while supporting innovation in AI development?
Speaker
Daniela Rus
Explanation
This represents one of the multiple competing dimensions in AI stewardship that requires ongoing research to find optimal solutions
How do we ensure efficiency while providing transparency in AI systems?
Speaker
Daniela Rus
Explanation
This is another competing dimension in AI stewardship that needs further investigation to resolve the tension between these goals
Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.