What Is Sci-Fi, What Is High-Tech? / Davos 2025

24 Jan 2025 08:00h - 08:45h


Session at a Glance

Summary

This panel discussion at the World Economic Forum explored how emerging technologies like autonomous vehicles, brain-computer interfaces, and robotics are transitioning from science fiction to reality. Experts in these fields discussed recent technological advances and their potential impacts on society.


Raquel Urtasun explained how improvements in AI, computing power, and sensors are enabling the development of fully autonomous self-driving trucks. She highlighted the use of advanced simulation to test and validate safety. Tom Oxley described progress in brain-computer interfaces that allow paralyzed patients to control devices with their thoughts, emphasizing the medical applications. Anthony Jules discussed collaborative robotics that work alongside humans in warehouses and manufacturing.


The panelists addressed ethical concerns and public trust issues surrounding these technologies. They emphasized the importance of safety, transparency, and demonstrating clear benefits to gain public acceptance. Questions of equitable access, potential misuse, and privacy implications were raised. The experts acknowledged these challenges while expressing optimism about the technologies’ potential to enhance human capabilities and solve important problems.


The discussion highlighted how rapid advances in AI, computing, and sensors are enabling transformative technologies across multiple domains. While exciting possibilities were presented, the need to carefully manage risks and build public trust emerged as key themes. The panelists agreed that addressing ethical concerns and ensuring broad societal benefit will be crucial as these technologies continue to develop.


Key points

Major discussion points:


– Rapid advances in neurotechnology, autonomous vehicles, and robotics that were once considered science fiction are now becoming reality


– These technologies are being enabled by breakthroughs in AI, computing power, sensors, and deep learning


– There are important ethical considerations around safety, privacy, equity of access, and potential misuse as these technologies are developed and deployed


– Building public trust is critical for the successful adoption of these transformative technologies


– The technologies have potential to greatly benefit humanity, particularly in areas like medical treatment and increased productivity, but risks need to be carefully managed


The overall purpose of the discussion was to explore recent breakthroughs in neurotechnology, autonomous vehicles, and robotics, and to examine both the exciting potential and important ethical considerations as these technologies move from science fiction to reality.


The tone of the discussion was generally optimistic and excited about the technological advances, while also thoughtful and measured when addressing ethical concerns and potential risks. The panelists aimed to paint a realistic picture of where the technologies currently stand and their near-term potential, rather than indulging in far-fetched speculation. There was an emphasis on responsible development and deployment to ensure public trust and equitable access.


Speakers

– Nita Farahany: Moderator


– Anthony Jules: Co-founder and Chief Executive Officer of Robust.ai


– Tom Oxley: Founder and Chief Executive Officer of Synchron


– Raquel Urtasun: Founder and Chief Executive Officer of Waabi


– Yossi Vardi: Chairman of International Technologies


Additional speakers:


– Stefan Schneider: Audience member from Sea of Davos


– Andreas Schappewald: Medical doctor from Switzerland, audience member


– Artem: Programmer from AI Foundation, audience member


Full session report

Emerging Technologies: From Science Fiction to Reality


This panel discussion at the World Economic Forum Annual Meeting 2025 in Davos explored how emerging technologies like autonomous vehicles, brain-computer interfaces, and robotics are transitioning from science fiction to reality. Experts in these fields discussed recent technological advances and their potential impacts on society.


The session began with a poll asking whether public trust should be prioritized over rapid deployment of transformative technologies, setting the tone for the discussion on balancing innovation with ethical considerations.


Technological Convergence Enabling Innovation


The panelists agreed that recent breakthroughs in artificial intelligence (AI), computing power, sensors, and deep learning are enabling transformative technologies across multiple domains. Raquel Urtasun, founder and CEO of Waabi, explained how these advances are making fully autonomous self-driving trucks possible, with a timeline for deployment within the next few years. She highlighted three key developments:


1. AI taking a primary role in end-to-end approaches


2. Increased scalability


3. Enhanced computing power


Tom Oxley, founder and CEO of Synchron, described progress in brain-computer interfaces (BCIs) that allow paralyzed patients to control devices with their thoughts. He introduced Synchron’s implantable technology, which is delivered through a catheter into the brain, offering a less invasive method than traditional approaches. Oxley noted that their BCI has been used in clinical trials with 10 users who can now text message using the technology.


Anthony Jules, co-founder and CEO of Robust.ai, discussed collaborative robotics working alongside humans in warehouses and manufacturing. He described Robust.ai’s robots as looking like shelves that drive themselves around, emphasizing the difference between automation and collaboration in robotics. Jules provided a striking comparison to illustrate the rapid growth in computing power, noting that modern smartphones have the equivalent processing capability of the world’s largest supercomputer in 2000.


Building Trust and Addressing Ethical Concerns


A key theme throughout the discussion was the importance of building public trust and addressing ethical concerns surrounding these emerging technologies. The panelists emphasized the need for safety, transparency, and demonstrating clear benefits to gain public acceptance.


Urtasun highlighted the use of advanced AI simulations to test and validate the safety of autonomous vehicles. She stressed the importance of developing AI systems that can generalize to unforeseen situations rather than simply memorizing predefined scenarios.


Oxley focused on medical applications to build trust in brain-computer interfaces. He addressed common misconceptions, clarifying that detecting motor activity does not equate to understanding thoughts, thus allaying some privacy concerns.


Jules emphasized designing robots for human collaboration rather than replacement, positioning safety and trust as fundamental requirements in robotics development.


Societal Impact and Accessibility


The panelists discussed the potential for their technologies to reshape various aspects of society:


– Autonomous vehicles transforming transportation (Urtasun)


– Brain-computer interfaces aiding those with severe medical conditions (Oxley)


– Robotics changing the nature of work and human-machine interaction (Jules)


Yossi Vardi, Chairman of International Technologies, raised the importance of ensuring equitable access to these emerging technologies. This sparked a broader discussion on the potential for these innovations to either narrow or widen societal divides, depending on how they are developed and deployed. Vardi also emphasized the importance of supporting liberal arts education alongside technological advancements to maintain a balanced societal perspective.


Ethical Challenges and Unresolved Issues


The discussion highlighted several ethical challenges and unresolved issues:


1. Programming ethical decision-making for autonomous vehicles in unavoidable accident scenarios (Urtasun and Jules discussed the trolley problem)


2. Long-term privacy and data ownership implications of brain-computer interfaces (Oxley)


3. Potential job displacement from increased automation and robotics (Jules)


4. Ensuring equitable global access to emerging technologies (Vardi)


5. Balancing rapid technological deployment with building public trust and safety (All panelists)


Vardi offered a philosophical perspective, noting that while technology changes the magnitude of human capabilities, it doesn’t alter fundamental human virtues like justice, love, and empathy.


Future Directions and Follow-up Questions


The discussion concluded with several thought-provoking questions from the audience and panelists, highlighting areas for future exploration:


1. Ensuring equitable distribution of advanced technologies across society


2. Exploring potential misapplications of AI in arts or home applications


3. Assessing the future potential and risk-benefit ratio of brain-computer interfaces, including Oxley’s speculation about building personalized models of users


4. Developing more efficient and cost-effective AI systems for broader access, as emphasized by Urtasun


5. Creating ecosystems in different geographies to drive access to robotic technologies


6. Ensuring privacy, security, and appropriate data ownership as brain-computer interfaces advance


Overall, the panel painted an optimistic yet measured picture of the current state and near-term potential of these technologies. The experts emphasized responsible development and deployment to ensure public trust and equitable access. While exciting possibilities were presented, the need to carefully manage risks and address ethical concerns emerged as key themes for the continued development of these transformative technologies.


Session Transcript

Nita Farahany: Welcome. My name is Nita Farahany. I will be your moderator today and I am delighted to welcome all of you here and our online audience who’s following along. If you’re following along online, please use hashtag WEF25 and please do share some of the comments that are going to come out of this extraordinary conversation. To give you a little context of what we’re going to be talking about today, we have a lot of exciting advances that we’re going to be discussing. Neurotechnology is creating new possibilities of what our brains can do. Autonomous systems are taking us where we need to go and robots are becoming a part of our everyday life. These technologies are not just the backdrop of futuristic novels, but they’re actually creating a world that was previously unthinkable. What are some of the key technologies that once seemed truly unbelievable and how are they poised to reshape our everyday life today and in 2035? We, in this session, are going to explore these questions. We’re going to explore the rapid acceleration of these technologies in these fields that were once considered science fiction and we’re going to unpack some of the questions around public trust, public safety, questions around equity. This session will include three members of the forum’s innovator community. We have Raquel Urtasun, who’s joining us, who is the founder and chief executive officer of Waabi in Canada. We have Tom Oxley, who is the founder and chief executive officer of Synchron, and we have Anthony Jules, who is the co-founder and chief executive officer of Robust.ai. Right now, we’re going to start this conversation by launching a poll to try to gauge a little bit of public understanding and reactions. The question that we’re asking is, should we prioritize public trust over rapid deployment when introducing transformative technologies? We want to start here because trust has been such an important part of the conversation throughout this week in Davos.
It also helps frame this question of the conversations of these technologies that we’re approaching, which for many people can seem frightening, for many people can seem unfamiliar, and we’re going to get a sense of where the public is right now, where the audience here is right now, so that we can figure out the nature of our conversation. So let’s see. 100%, how often do you see that in a poll where you have 100% as a response? That tells us a lot, right? Which is the trust, oh, oh, wait, wait, there’s more. All right, I’ll give you all another second. We’ll see, we’ve got some other questions coming in. All right, for those of you who were slow this morning, I’m gonna give you five, four, three, two, one. I get it, it’s been a long week, right? Okay, so we have trust at about 50%: trust is foundational for long-term success. Then, only when the risks outweigh the benefits, and, delays hinder progress and save fewer lives. Okay, now this kind of let-her-rip approach, the 4%, this is clearly not where the public sentiment is, right? So as we have this conversation today, we really wanna understand, what are the risks and the benefits? How are we building trust with these different kinds of technologies? Should we get into it? All right, so I wanna start by trying to frame what these technologies are that we’re talking about. You’re the experts who are gonna explain that to us. And to try to help us understand, like it seems like we’re really on the verge of this technological renaissance, that technologies that felt like science fiction, like brain-computer interface, or AI-powered vehicles, or robots helping humans, these are now becoming a reality. But what changed? Was there some tipping point that made this happen? Was there some technological advance that suddenly made all of this convergence possible? And we’re gonna start with Raquel with a quick video, and then we’re gonna turn to Raquel to try to unpack that for us. Are we seeing the video here or just online?
All right, maybe just online. So I’m gonna turn to you, Raquel. All right, and unpack it for us. Tell us a little bit about the backdrop of what you do, and help us understand why there’s been such great advances in that space.


Raquel Urtasun: Yeah, so we bring generative AI to the physical world to transform that physical world. And in particular, we are starting with self-driving trucking, which is really ripe for this technology. This is where scaling of robots is going to happen, you know, first, in the context of self-driving. So there are really, like, three advances, I would say, that have made possible, you know, the fact that now we see deployment in the real world. And this is AI, obviously, you know, the first kind of, like, big advance in technology, where we went from AI having a secondary role in hand-engineered systems to now being at the forefront, really, of end-to-end approaches that are also provably safe. So there’s a big transformation that really brings scalability together with compute. It’s not by chance that NVIDIA is one of the most, you know, valuable companies in the world. And then advances in sensors, where, you know, right now you have really a path to scaling those sensors. I’d say you need economics that actually make sense as well. So together, these three things really have formed the foundation for what you will see over the next couple of years, which is, you know, self-driving everywhere.


Nita Farahany: And just to make sure we understand self-driving, right, there’s different kinds of self-driving. There’s self-driving where, like, I have to have my hands on the steering wheel, and then it kind of shakes if I’m not paying attention. It says, pay attention to the road again. And then there’s, I’m not even in the front seat of the car. Right, so what are we talking about for self-driving?


Raquel Urtasun: Yeah, so I’m talking about, you know, what we call level four self-driving, which is there is no need for a human at all to intervene; the vehicles drive themselves. And that’s where we’re going, very quickly. And, you know, you see today what is deployed in San Francisco, and we’re going to see self-driving trucks deployed by Waabi this year. And then we will see over the next two to three years, really, that technology scaling, you know, significantly.


Nita Farahany: And just, you know, I have young children. I have a 10-year-old and a 5-year-old. Is there any chance they’re going to ever learn how to drive a vehicle? Or is that, like, you know, by the time they reach the age of driving, is it just, this is a transition right now, but I don’t need to worry about teaching them how to drive?


Raquel Urtasun: You know, my hope is that it will be only if, you know, as a hobby, they actually want to drive for, you know, the pleasure of it.


Nita Farahany: All right, sci-fi coming true very quickly, but I’m relieved to know that my 10-year-old will never be behind the wheel of a car. All right, that’s exciting. Okay, Tom, brain-computer interface. Tell us what that is. There’s lots of media attention to this issue, lots of hype, but maybe overhype and mischaracterization of it. So tell us first what it is that you do at Synchron, what brain-computer interface is, and then is it moving really rapidly, and if so, why? What are the convergences that’s making that possible?


Tom Oxley: So Synchron is an implantable brain-computer interface. We have developed technology that’s delivered through a catheter up into the brain. I would say the why now, the field has been, so I’m a neurologist, and this field has been examined in the academic space since the, maybe since the 70s to the 90s in the preclinical space, and then in the 2000s to the 2020s in the early human clinical trial phase, and now we’re getting towards the more mature clinical trial phase. There’s some very famous, well-known entrepreneurs pursuing this field, and I think,


Nita Farahany: sort of think on- Is that like a Voldemort thing, like he who shall not be named or something?


Tom Oxley: Well, yeah, there are some different views on why BCI might be needed. What’s actually happening in medicine is that there is a large patient population of people who have conditions that can’t be treated, like paralysis, where BCI has emerged as a treatment modality. So with paralysis, your brain can be working, your body can fail, and you depend on other people to exist, to survive. And with BCIs, the first wave was around robotic device control. That didn’t commercialize because robotics, prosthetic limbs, weren’t ready to commercialize. And now, over the last 15 years, as smart devices have proliferated, the internet of things, BCI control of your phone becomes an incredibly powerful way to reconnect with technology. And for people who are paralyzed, the ability to send text messages is at the top of the list of the things that you want back. So there are six or seven companies now that are emerging out of the academic investment over the last 30 years. And it’s primarily around device control for people who are paralyzed, to demonstrate increased independence. I think it’s happening now for a couple of reasons. So access into the brain is probably, I’d say that has been the biggest challenge. So how do you deliver the sensors into the brain in an economic way, and in a safe way, and in a way that can withstand a lifelong biological system? The sensing modalities, the materials in the sensing modalities, and then the deep learning, they all had to converge. And then I think the use case, I think the emergence of widely available, ubiquitous digital technologies, was a really critical part of the commercialization environment.


Nita Farahany: Okay, so let me unpack that a little bit just to make it more accessible for everyone here. So you said access into the brain, sensors and deep learning. So by access into the brain, I’m imagining, you’re drilling a hole in my skull and then you’re putting something into it. I’m not signing up for that anytime soon. So I’m not really imagining you’re doing this, but.


Tom Oxley: Well, you say that now.


Nita Farahany: Yeah, maybe, maybe, yeah. Give me another week in Davos and maybe I’ll just be like, yeah, sign me up, but.


Tom Oxley: No, no, we all take for granted our health, so.


Nita Farahany: Right, fair, fair, right. As a healthy person sitting here right now, but you said access. So is it about drilling a hole into the skull or is there a different mechanism that you’ve introduced that makes it easier to get into the brain or?


Tom Oxley: Yeah, so, I think part of the reason we’ve been able to raise capital and move quite quickly, at this point we’re moving towards a late-stage clinical trial and potentially the first FDA approval, is because we have not taken a traditional pathway of delivery into the brain. So there are some examples of implantable neurotechnology that have existed for decades: deep brain stimulation for Parkinson’s disease, responsive neurostimulation implants for epilepsy, the cochlear device for hearing. All those therapies have existed in quite low volumes. Medtronic has the highest number of deep brain stimulation implants. For Parkinson’s disease, that’s been around for decades; there are a million people with Parkinson’s disease in the US, about 10% of whom DBS would be useful for, but fewer than 10,000 per year are having this surgery. It’s quite invasive, you have to go through the skull, it’s expensive, it’s hard to get into the operating room, and the economics are challenging for the hospital. So my view, our thesis, is that if this technology is going to scale, and scale is needed for effective model development, network effects, and so on, the only way to do that is through a non-operative procedure. And cardiology is an example of how the shift from open heart surgery to catheter-based therapies happened over the last 30 years; that shift is now coming over to the brain. Stroke treatment by pulling out a blood clot, really a revolution that happened around 2015, has caused this huge proliferation of neurologists doing procedures up into the brain through catheters. So we’ve taken a view that if you can deliver electronics into the brain through a catheter, without open brain surgery, that’s going to be the solution for scale for BCI.


Nita Farahany: And you’ve already done this with some patients? We’ve done two clinical trials of 10 users. Okay, and so they have brain implants that were delivered through a catheter and they are text messaging? Yes. And maybe they just voted on the poll? Maybe. Okay, great. So let’s come to Jules. Jules, tell us about our robotic future, but more importantly, tell us about Robust.ai and exactly what is the convergence that’s made this possible? We’ve watched these movies for a very long time, right, about robots coming and taking over the world. I don’t think that’s what is happening and what you’re creating. So unpack for us both what’s happening and the convergence. We have sensors, we have deep learning, we have AI, we have compute. I hear sensors and AI showing up a lot here. Is it the same thing that’s happening with you? Tell us a little bit about your world.


Anthony Jules: Sure, I’ll start off by sharing a little bit about what we do. So at Robust AI, we create physical AI. So that’s the software and machine learning to make robots work in the world. And our robots work in warehouses and in manufacturing plants. They look like a shelf that drives itself around. And the unique thing about them is they actually understand the world around them in a way that other robots typically don’t. And at any point, people can have agency over the robot. You can literally just grab onto it and move it. And sometimes that’s to get a task done, sometimes that’s to move it out of the way. And that’s really a physical manifestation of this idea that people should have agency over robots. So-


Nita Farahany: Going back to the trust poll, right?


Anthony Jules: This is a- This is something I’m gonna talk a lot about because I think it’s absolutely key to this future. So to your question about what has changed, I’m gonna ride on the coattails of my other wonderful panelists, especially Raquel. We have access to an amount of compute that’s really hard to get your head around. As an example, what most people have in their pocket is the amount of computing power that the largest supercomputer in the world in the year 2000 had. Or the entire computing capability of the planet in 1990. Wow. That’s what we all have in our pockets right now. Wow. So you take this level of compute, you take this latest generation of AI, both generative and other types, and now we have the tools to understand the world and put that on a device that’s working in and amongst people and smart enough to really get useful work done. So the big thing is big changes in AI. amount of compute, and I’m going to toss in one more thing, which is the cost of developing hardware has come down dramatically. So small teams using things like additive manufacturing, CAD CAM software, can develop very, very impactful products at a relatively low development cost and then a low unit cost.


Nita Farahany: Great. I want to envision these robots a little bit more. I’m grabbing one and walking around. I should not be thinking of a humanoid robot. I should be thinking of something that looks more like my vacuum cleaner that moves around my house. I’ve got one of those autonomous vacuum cleaners.


Anthony Jules: Imagine literally your shelf in your library.


Nita Farahany: Like this. And this moves around.


Anthony Jules: That when you want a book, can drive around and present it to you.


Nita Farahany: So is it just a delivery robot? It’s not, like it doesn’t have hands, it’s not going to be my plumber.


Anthony Jules: So what you’re starting with is something that can move material around, but you can also add different modules to it.


Nita Farahany: So it’s almost like we have different kinds of autonomous vehicles here, right? Which is the robots that you’re creating are autonomous vehicles. They have sensors as well that can sense the environment in much the same way that your sensors sense the world. Yours are driving people in them. Yours are driving inventory? What are they driving?


Anthony Jules: Inventory, components, consumables, and manufacturing plants.


Nita Farahany: Okay, so what’s the biggest use case? It’s in manufacturing plants?


Anthony Jules: The biggest use case is actually in fulfillment in warehouses. So it’s, you know, you order a tube of toothpaste, a robot drives up to the shelf where that toothpaste is, someone puts that on that robot, and then it drives to a packing area where it gets put in a box and sent to you.


Nita Farahany: Okay, so I was at a hotel in New York a few months ago, and I ordered room service, and I ordered coffee, and a robot delivered it to me, right? So it came up to my room and I opened the door and there was not a person there. There were just the instructions, here’s what you should do to interact with the robot to get your coffee. And is that what we should be envisioning?


Anthony Jules: Yeah, I mean, I think that’s one example. You know, what we’re doing is one morphology, you know, one type of robot. But what I think we’re gonna see more broadly is many, many different morphologies that are somewhat purpose-built to solve specific problems. You know, you’re gonna see robots that roll, you’re gonna see robots that fly. You know, we have already hundreds of millions of drones. You know, there are robots that sail, there are robots that dive, and of course, robots that walk. You know, you’re gonna see more and more humanoids and quadrupeds that actually get around by walking, but you’re gonna see, I think, even more robots that roll and fly.


Nita Farahany: Okay, so I’m just, I’m envisioning this world now which is my daughter isn’t driving, people have brain implants, robots have joined us in our everyday lives, and it does feel a little bit sci-fi, but we’re gonna unpack the risks and benefits of this because it sounds exciting but a little bit terrifying to think about what this future is and how we’re navigating it and how you’re thinking about navigating these trust issues that have come up. So I wanna come back to you, Raquel, and you have this incredible way of testing the AI, right? So, you know, I would imagine as you’re trying to test vehicles, you can’t put humans in cars and emulate every single scenario that could exist, like there’s some ethical concerns with having a big crash and seeing what happens, things like that, right? So it’s usually very physical, it’s very scenario-driven to do this testing, but how are you using new technologies to simulate the, you know, kind of real-world environments and to build the kind of public trust that we need, the validation that we need?


Raquel Urtasun: Yeah, and I will start by also talking a little bit about the form factor of our robots, which is, you know, we do Class 8 trucks. These are the largest trucks that you can see, you know, in the world today, right? So we have these massive robots that carry 80,000 pounds on their back, right? So you can imagine that building something that is safe is paramount before launching any product in the real world. So what is very, very important is also that the technology that we build has to be able to generalize to anything that it might see on the road. And it’s impossible to foresee by hand all those different situations, right? So it’s important that the AI system does much more than just memorization. If you look at, for example, large language models today, they do a lot of memorization, and you need to reprompt in order for them to be able to give you the answer. Things like this cannot work in the physical world, because the consequences of a mistake are tremendous. At the same time, they have to be super efficient. They have to make very complicated decisions in a fraction of a second, and driving is actually pretty complex in the real world. And then we need to ensure that they are safe. And just being safe is not enough. We need to be able to prove that they’re actually safe. So there are two things that we need to do. First, build an AI system that is interpretable, that you can validate and verify. And that’s for the brain of the self-driving vehicle. And on the other side is, well, how do we test, to your question, how do we test that indeed it’s actually safe? In cities, it’s a little bit easier to brute-force your testing, in the sense that the rate of events, of accidents, is actually higher than if you look at highways. In long-haul trucking, a lot of our driving is really on main interstates, for example, in the US.
And if you look at this rate of events, property damage happens every 100,000 miles. If you look at a small bodily injury, every million miles. If you have a severe accident, 10 million miles. Death on the road, and this is humans, right? 100 million miles. Now, if you want to prove this in a statistically significant way, you need to see those events several times, right? So you are in the billions of miles. And a human can only drive 80,000 miles per year. It’s like, how many trucks do I need driving for how many years in order to be able to show that this technology is indeed safe? So that doesn’t work. So you need to come up with a very different type of testing. And that’s where simulation is a must. And people have talked about simulation in the self-driving industry for a while, but what is missing is, it’s not about simulating many scenarios or billions of miles or whatever; it’s, is your simulator really realistic? Can you prove scientifically that, driving in simulation, you actually observe exactly the same things as driving in the real world? And that’s where we have a big breakthrough, which is Waabi World, which is the only simulator where you can actually indeed prove that. So now you have the answer to the data problem, right? Because it’s running on the cloud at scale, with no risk, you can test the system under any possible condition. And that has been a tremendous enabler for building the safety case, building the trust, and at the same time for training the system, because all that information just gets passed to the AI system, which actually gets better and better. So now you have these two AI systems interplaying, where one is the teacher, right, that plays adversary, and the other one is the brain of the self-driving vehicle that is learning as it experiences the simulator.
And it's really interesting that the answer to safety oftentimes is AI, for the safety of another AI system, which is a very provocative thought as well.
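Editor's note: as a back-of-the-envelope check of the statistics Urtasun cites, the sketch below (in Python) uses the event rates and the 80,000-miles-per-year figure from her remarks; the assumption that "several times" means roughly ten observations is the editor's, not hers.

```python
# Rough check: how many truck-years of real-world driving would be needed
# to observe rare safety events often enough for statistical significance,
# using the per-mile event rates quoted in the discussion.

EVENT_RATES_MILES = {            # one event per N miles, as quoted
    "property damage": 100_000,
    "minor bodily injury": 1_000_000,
    "severe accident": 10_000_000,
    "fatality": 100_000_000,
}
MILES_PER_TRUCK_YEAR = 80_000    # quoted: a human drives ~80,000 miles/year
OBSERVATIONS_NEEDED = 10         # assumption: "several times" ~ 10 events

for event, miles_per_event in EVENT_RATES_MILES.items():
    total_miles = miles_per_event * OBSERVATIONS_NEEDED
    truck_years = total_miles / MILES_PER_TRUCK_YEAR
    print(f"{event}: {total_miles:,} miles, about {truck_years:,.0f} truck-years")
```

For fatalities alone this works out to on the order of a billion miles, i.e. thousands of truck-years, which is the impracticality that motivates her argument for simulation.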


Nita Farahany: Now, just because the title of our session is around sci-fi, should I be envisioning the Matrix? It's all being tested in a matrix, but we are not in the matrix. So there's a matrix world that's testing everything, and then we're using the cars in the real world because they've gone through what feels exactly like the real world. One small question on this. I saw somebody post on social media the other day that they were recording their car, a Tesla, and they were stopped where a train was crossing, and they showed what the car was seeing. It wasn't seeing a train going by; it kept flashing up different trucks, trying to make sense of what it was seeing. Does the simulator help it then understand, when it gets to a train, that it's a train versus a truck, and take it through those simulations?


Raquel Urtasun: Yeah, absolutely. And I will also go back to your thought about whether you are in the matrix or not. What is very interesting is that with the simulator, what we do is actually clone the world automatically. So if we see you once on the street, you will be part of it; you will actually be in the matrix. So think about that for a second.


Nita Farahany: Okay, going back, I'm going to leave the matrix for a minute and come to Tom. Tom, tell me we're not in the matrix. But Tom, we met, actually, through... I published a book called The Battle for Your Brain, which looked at a lot of the advances in brain-computer interfaces. There's been a lot of public conversation around both the benefits and the potential risks, and this transformative world that may come forward. A lot of people are talking about data privacy issues across many different contexts, but also with brain-computer interfaces. Now, personally, I think in this space, implanted brain-computer technology is so highly regulated that it's a very different conversation than perhaps the consumer wearable world. But I'm hoping you can address this: there are so many sci-fi scenarios of "we're going to merge with AI, and we're going to use brain-computer interfaces to do it." Tell us the real story about how you're thinking about risks and benefits. Who is the technology really going to change lives for in the short term and in the long term? You mentioned some of this in your previous answer, but I want you to help us understand some of those risks as well.


Tom Oxley: Okay, so I think in the short term, for maybe two decades, implantable BCI will be for people who have major conditions. The FDA, the regulatory body in the US, defines an implantable BCI as a neuroprosthesis. A neuroprosthesis is a medical device that restores a brain function, really a cortical function. The brain sees, hears, moves, feels, thinks. The first wave of BCIs is motor BCIs; about a third of your brain is dedicated to motor control. My motor cortex is controlling my hands right now. So the motor system is the bottleneck of your brain engaging with the outside world: a lot of information coming in, not much coming out. The BCI augments that motor outflow, especially if you have injuries like stroke, spinal cord injury, or ALS. It's another mechanism to get data out of the brain. So, yes, there's an ethical question around what that means from a privacy perspective. And there are a lot of science fiction stories about where BCI goes. Lots of Netflix stories that don't end very well. Black Mirror, as an example, a program that is kind of intoxicating and terrifying, which helps in thinking about what the rollout of this technology will be. But the reality is that when I talk to my patients, the speculation of "what's it going to be like for me in a world where everyone has a brain implant" is not useful for the current conversation, when you've got people with severe medical problems that have no therapy. If you ask someone who is paralyzed, unable to engage with the outside world, and totally dependent on other people, "are you concerned about privacy?", they say: no, I'm concerned that I can't talk to anyone or engage with the world. So if you have to get into my brain, let's do it. So there's that element.
And then there's the other side: the belief that if you can detect motor activity, then you can understand my thoughts. That's not true. If you're worried about privacy or getting hacked, it would be the same as having your mouse hacked, because you control the mouse, and the mouse is a representation of your intention to do things on the screen. People don't seem to be that worried about having their mouse hacked, but it's the same level of security. Well, maybe you do worry, but it's the same level. That's not to undersell the potential ethical issue; we've talked about the fact that we're seeing signals that go beyond the motor cortex. We were telling you last night about a patient of ours who was watching YouTube while the system was running. Something weird happened in the YouTube clip, his brain went "bing," and the cursor went and did something he didn't intend. So there are other signals that we're detecting. And maybe just to speak to what you guys are working on: I think where BCI is going to move is into the kind of training environment you're describing. We also need to understand how the brain is reacting to its environment, so I think training is also going to exist in a simulated environment. You said there's the brain and then there's... well, we actually have a brain that we're trying to make inferences on, and then there's the environment as well. So I think we're moving into that domain; that's the precipice we're on. But you can't do that until you have real-world data from patients who are actively using the system. So then you start asking autonomy-type questions with the BCI.


Nita Farahany: All right, I'm going to come to Jules, and I want to tell all of you that we're going to bring you into the conversation in a minute, so start thinking about your questions. First, we're going to have another person join us in the conversation. But I want to come to Jules, because one interesting bridge between what Tom was just talking about and some of the work that you're doing is that there are some BCI companies trying to develop, for example, surgical robots, because of some of the difficulties of the brain surgery involved. As you start to imagine all the different use cases of robotics and what you're doing, should we be envisioning a future where most human activities and human work can be replaced by robots? You mentioned earlier that I can pick up my shelf and move it around. But is there a point at which that doesn't happen, where the robots are more skilled and start to replace us? Because it sounds like the robot cars are already more skilled than my daughter will ever be at driving. So help us understand a little bit the risks and benefits, and what this looks like for the future of humans and work.


Anthony Jules: Yeah, so it's a nuanced answer. I do think there are places where jobs or activities that people do now will be replaced. Mechanization has replaced labor for centuries: everything from moving out of agrarian societies into cities, to the Industrial Revolution, to moving from most people doing farming to ten percent of people doing farming now. That trend is going to continue, and there is definitely going to be some amount of dislocation because of it. But both what we're focused on, and a lot of what I see that I think is exciting, is the field of what's now being called collaborative robotics. That's where a person and a robot, or a person and a set of robots, are actually doing a task together. The big difference is that it's gone from automation, which is something that happens behind a safety cage, to automation plus coordination, which is automation and people passing stuff back and forth over the safety cage, to true collaboration, where you have things like turn-taking and workflows where a person does part, a robot does part, a person does part, a robot does part. I'm describing it very formally, but it's actually going to feel very natural, because it's how we already interact with the objects we use every day, how we interact with our phones, and how we interact with people. The idea is really building on top of the interaction mechanisms we already use, so that working with robots just feels natural for people getting done what they're trying to get done. I think that's the evolution of robotics and automation that I'm most excited about, and I think it will be a very large part of what robotics looks like in the future.


Nita Farahany: This is such a nice segue. I want to bring another person into the conversation. But before I do that, and recognizing that I want to bring the audience in, I want you to be thinking about this question: this sounds incredible in many different ways, but I'm wondering what the distribution of these technologies looks like across society. Who has access to them? How do they become more widely accessible? Is it only that I have lots of assistive devices that help me do my job, but they're not distributed evenly across society? So I want you to think about that for a minute. But I want to bring somebody into the conversation: Yossi Vardi, the Chairman of International Technologies. And I want to ask Mr. Vardi a question. Mr. Vardi, you have witnessed and contributed to the growth of multiple high-tech industries for nearly five decades. Given this description of how natural it would be, how it would just feel like we've integrated this into our everyday lives, what do you think will be the key values that really define humanity in the intelligent age?


Yossi Vardi: Okay, that's a very good question, which is a statement you usually make when you don't have any idea how to answer it. But before that, I would like your help with an ethical dilemma. Do you have to tip a robot that brings you a coffee, or not?


Nita Farahany: Well, I mean, that raises exactly this question. My daughter, my 10-year-old: one time I was asking ChatGPT to generate jokes, and I wrote back and said, "That's not funny." And she said, "Mommy, that is so rude, apologize." So it's this question of what our interactions are going to look like.


Yossi Vardi: Okay, so all of us are dealing with this issue, the relationship between high-tech and humanity. Every day, every time we operate any technology, when you write a talkback, at the back of your mind you should have the question: am I writing appropriate things or not? Technology, at the end of the day, is a very powerful tool, but it's a tool. At least for the time being, it doesn't think. Maybe it will think, but it definitely doesn't exercise judgment. And you have to exercise judgment. Technology changed the magnitude of things, but it didn't change the actual virtues of things. Justice, love, empathy, cruelty, all of them stay the same; technology just amplifies. What you could do on a small scale, now you can do on a large scale. Will it be good things or bad things? The dilemmas are all over the place. When you talk about autonomous driving, all of us know the common question: if a car has to crash into a crowd and you can't avoid it, is it better to crash into a young person or an old person? I don't know how you answer that. At one point I interviewed the Deputy Prime Minister of Slovakia, and he said that Slovakia is the country with the highest number of cars manufactured per capita. He gave this example, and then I asked him: suppose there are two people in the crowd and the car will hurt one of them. One is an ordinary taxpayer, and the other is a Deputy Prime Minister of a country, who lives on the tax payments. He suggested it should go over the Deputy Prime Minister, which I found very altruistic.


Nita Farahany: Well, let's bring some others into the conversation, but first I'm going to give you each literally 30 seconds, because I can see a lot of people have questions for us. 30 seconds each on this question.


Yossi Vardi: Will you really allow me, please, two seconds just to finish?


Nita Farahany: Oh, sorry. I didn’t realize I interrupted you.


Yossi Vardi: I would like to emphasize the importance of continuing to support and develop the faculties of liberal arts, because the faculties of liberal arts are now suffering from the budgets that go to high-tech, and at the end of the day they are part of this judgment mechanism of what is good, what is bad, and so on. So I think that reducing…


Nita Farahany: Now I have to bring in people. All right, so I'm going to ask you to incorporate it into your answers. I want to bring in the audience, and, having thought about this question of equity, I want you to answer it alongside whatever questions come from the audience right now. Otherwise I'm going to cut off the audience questions and come back to this question: how do we make sure this doesn't create a bigger divide in society, and that it isn't just benefiting wealthy nations or individuals? Okay, we have two microphones in the room. Let's see who has a question. Great, can we bring a microphone over here, please?


Audience: Thank you. Stefan Schneider from Sea of Davos. My question goes to the whole panel: what do you see as potential misapplications of AI? AI makes sense in the medical field, in logistics, in organization. But, for instance, in the arts or in home applications, where do you actually see misapplications of AI, where it's just a designer choice but doesn't actually bring any use?


Nita Farahany: Okay, I'm going to get a couple more questions on the table. I'm going to see if there's another question, and then we'll do this as a roundtable. So is there another question as well? Great, why don't we come here for a question. We do this in our faculty workshops sometimes: we stack a bunch of questions together so we get them all on the table, and then let people take them. Go ahead.


Audience: Andreas Schappewald, medical doctor from Switzerland. A question to Dr. Oxley. You talked about neuroscience and its benefits, with new devices for the treatment of severe neurological disorders, like treating deafness with a cochlear implant, by insulin; it's great. We all agree on the benefits. What about reading people's minds and influencing people's minds, like some Indian gurus could do? I remember Sadhguru Jaggi Vasudev, who was here at an earlier meeting; he's one of those gurus. So if gurus can do this, artificial intelligence will do this. Can you speculate on future and ongoing research, and what would then be the risk-benefit ratio?


Nita Farahany: Good, we're getting good questions on risk, both the risks of AI and the potential risks of reading thoughts. I'm going to allow one more question, right here. Great, then we'll return to the panel.


Audience: My name is Artem; I'm a programmer from AI Foundation. A question to Ms. Urtasun: how do you program the car for the classic ethical dilemma? For example, there is a collision and it's obvious you cannot avoid it. If the car does nothing, five people in that car die, or there is a chance to turn the car and kill only one person. You cannot avoid it; it's the classic dilemma. So how do you do it? A human being makes some kind of decision, but for the programming, you have to program the car somehow.


Nita Farahany: All right, and this builds on Yossi's comments about the ethics as well. You can hear, from the original poll question on trust, that there are these questions not just of how exciting the technology is, but a real desire for everyone to hear how each of you is thinking about these ethical questions. Given our time, I want you each to take just a minute and address the equity questions, the risk questions, and the thought-reading questions in whatever way makes sense for you. So why don't we start.


Raquel Urtasun: Yeah. With respect to the distribution of this across the entire population, I think it's very important that we move from extremely expensive-to-develop AI systems to very, very efficient systems. This is really a call to action, so that it doesn't become the case that whoever has access to power, to chips, et cetera, is the one that dominates AI and the impact of this technology. I think this is a must. As it relates to the ethical question: I am not equipped to decide whether your life is better than the life of another human being. If a regulator is able to make that decision and tell us what it is, then we can incorporate that into our systems. The systems right now are built so that, in the case that a collision is unavoidable (and these are physical systems; it might be because somebody else is doing something wrong, not necessarily us), they minimize the damage that the collision actually causes. To me, that is, right now, the most ethical answer to the question of: who am I to put a price on your life?
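Editor's note: the "minimize the damage" principle Urtasun describes can be illustrated with a toy sketch. This is not any real system's logic; the maneuvers, probabilities, and harm model below are invented for illustration, and the harm proxy (collision probability times impact speed squared, since kinetic energy scales with speed squared) is an editorial simplification.

```python
# Toy illustration of damage minimization: when a collision is unavoidable,
# pick the maneuver that minimizes expected physical harm, rather than
# ranking the value of individual lives.

from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    impact_speed_mph: float   # predicted speed at the moment of impact
    collision_prob: float     # probability this maneuver still collides

def expected_harm(m: Maneuver) -> float:
    # Kinetic energy scales with the square of speed, so halving impact
    # speed quarters the energy imparted; weight by collision probability.
    return m.collision_prob * m.impact_speed_mph ** 2

# Hypothetical candidate maneuvers in an unavoidable-collision scenario.
candidates = [
    Maneuver("brake hard in lane", impact_speed_mph=20, collision_prob=0.9),
    Maneuver("swerve to shoulder", impact_speed_mph=35, collision_prob=0.3),
]

best = min(candidates, key=expected_harm)
print(best.name)
```

The point of the sketch is that the objective is physical (energy, probability), so the system never needs to encode a judgment about whose life is worth more, which is also the position Jules takes on the trolley problem below.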


Nita Farahany: All right, I'm coming to you next, Anthony. You have lots of interesting questions here on ethics, so give us your thoughts.


Anthony Jules: So first I'll talk a little bit about access. One of the things I'm excited about is that, because our robots are smaller and cheaper, built on the low cost of compute and the lower cost of building hardware, these are technologies that are more accessible globally. That being said, how these technologies exist and how they scale has everything to do with the ecosystem available to the companies, entrepreneurs, or governments trying to make these investments. So it really becomes a question of which ecosystems get created in which geographies, and I think that will drive access to these technologies. In terms of the ethical questions: I think safety is absolutely critical. Raquel talked about it before, but it is just table stakes for any of these systems that get deployed. Trust, I think, is a slightly different thing; it's more nuanced. It's really about: do these things add value? Do they relate to people in a way that is acceptable? And do they do that consistently? If that relatedness happens and it's consistent, that builds trust over time. I think that's really the way to address it. I'm also going to jump in on the safety and misuse question.


Nita Farahany: Briefly, because we're short on time. Yep.


Anthony Jules: I'm just going to say, and I think this will cause a ripple: this philosophical question, the trolley problem of which thing you choose to hit, I think is a false question. Which of us, when we've been in a car accident, has made a choice about which thing we're going to hit? At the end of the day, these are systems that have real physical properties. They have mass; they have friction, in terms of how they are attached to the road; they have energy. And it really becomes a question of reducing the amount of energy that you impart, to the thing that you know is able to accept it. So, for example, yes, you'll choose to drive off the road rather than drive into a vehicle. But this idea that you can somehow make a judgment about which person might be more valuable than another person, I really think is the wrong question, and not something you have access to in situations where some type of accident is occurring.


Nita Farahany: All right, this is a good place to turn to Tom to close us out. Tom, he just said trust, and you have this question, one that comes up a lot: are you reading minds? You spoke earlier about the motor cortex, but maybe we get there; maybe increasingly brain signals can be decoded to actually read what people are thinking or seeing. How do you build those systems both equitably, given the number of people you said could be impacted by this, and also for trust with the public, about how that decoding happens and how it's used or misused?


Tom Oxley: Okay, so let's speculate about where things go. I think what will happen over decades is that the BCI will build a personalized model of yourself. It'll be able to make predictions, using your brain information, about what you wanted to do next, what you needed, how you felt. Probably more how you felt than what you were thinking; maybe what you were thinking a couple of decades beyond that. At that point, yes, you have a system that is probably able to make very personal and intimate predictions about how you're engaging with the environment. So the risk there, if you want to go straight to risk, is kind of obvious. If there's a major privacy concern, look at what happened with Facebook and how data was shared: third-party data sharing, insurance liability implications. On the other hand, you have a system that can augment your ability to engage with the world. And you might want that because you want to expand your horizons, perform better, perform longer; it's the next step in enabling you to be more productive. If that's the direction we're going, privacy becomes a major issue, security becomes a major issue, and so does data: who owns the model, who gets access to the model, how you can intervene in the model, what the third-party engagements are. I think these are all concerns that are going to emerge and have to be dealt with. I don't know what all the solutions are, but I think transparency really matters. Trust matters, but the product has to be useful for people, and then the risks have to be managed. In the medical device space, there's a huge amount of regulation, where you have to lay out exactly what the product does and how you test it for safety.
And I think you also have to believe that if it's augmenting human potential, then, if humans are generally good, more good things than bad are going to happen. But there's going to be a little bit of both.


Nita Farahany: This has been much more science than sci-fi, but it's extraordinary; the advances you're pioneering are exciting. It's helpful to hear the consistency across very different areas: tremendous advances in AI, in compute, and in sensors, but also very practical use cases that can be extraordinarily beneficial for humanity, are really what's driving the technological innovations across each of these spaces. Open questions remain about trust, about misuse, and about how we ethically deploy these technologies, but it seems this question of what the future looks like, what our lives look like, is unfolding pretty quickly in front of us. Thank you all for joining this morning, and thank you to the online audience for following along. Thanks.



Raquel Urtasun

Speech speed: 183 words per minute
Speech length: 1337 words
Speech time: 437 seconds

AI and compute power enabling self-driving vehicles

Explanation

Raquel Urtasun explains that advances in AI, compute power, and sensors have made self-driving vehicles possible. These technologies bring scalability and enable end-to-end approaches that are safe and economically viable.


Evidence

Urtasun mentions that her company, Waabi, is deploying self-driving trucks this year, with significant scaling expected over the next 2-3 years.


Major Discussion Point

Technological Advancements Enabling Sci-Fi-Like Innovations


Agreed with

– Tom Oxley
– Anthony Jules
– Nita Farahany

Agreed on

Technological convergence enabling sci-fi-like innovations


Using AI simulations to test and validate self-driving safety

Explanation

Urtasun discusses the use of AI-powered simulations to test and validate the safety of self-driving vehicles. This approach allows for testing billions of miles of driving scenarios without physical risk.


Evidence

She mentions their simulator, Waabi World, which can prove that simulated driving matches real-world conditions, enabling extensive safety testing and AI system improvement.


Major Discussion Point

Building Trust and Addressing Ethical Concerns


Agreed with

– Tom Oxley
– Anthony Jules
– Nita Farahany

Agreed on

Importance of building trust and addressing ethical concerns


Potential for self-driving technology to reshape transportation

Explanation

Urtasun suggests that self-driving technology will significantly transform transportation in the near future. She envisions a world where human driving might become optional or recreational.


Evidence

She mentions that her company is working on level four self-driving, where no human intervention is needed, and expects this technology to scale significantly in the next 2-3 years.


Major Discussion Point

Societal Impact and Accessibility


Difficulty of programming ethical decision-making for self-driving vehicles

Explanation

Urtasun addresses the ethical challenges in programming self-driving vehicles for unavoidable collision scenarios. She emphasizes that AI systems are not equipped to make value judgments about human lives.


Evidence

She states that their current approach is to minimize overall damage in unavoidable collisions, rather than making decisions about the value of individual lives.


Major Discussion Point

Ethical Challenges of AI and Robotics


Differed with

– Anthony Jules

Differed on

Approach to ethical decision-making in autonomous systems



Tom Oxley

Speech speed: 176 words per minute
Speech length: 1728 words
Speech time: 588 seconds

Brain-computer interfaces becoming viable medical treatments

Explanation

Tom Oxley explains that brain-computer interfaces (BCIs) are emerging as viable medical treatments for conditions like paralysis. These devices allow patients to interact with technology and regain some independence.


Evidence

Oxley mentions that his company, Synchron, has conducted clinical trials with 10 users who can now text message using their BCI implants.


Major Discussion Point

Technological Advancements Enabling Sci-Fi-Like Innovations


Agreed with

– Raquel Urtasun
– Anthony Jules
– Nita Farahany

Agreed on

Technological convergence enabling sci-fi-like innovations


Focusing on medical applications to build trust in brain-computer interfaces

Explanation

Oxley emphasizes that the current focus of BCI technology is on medical applications for patients with severe conditions. This approach helps build trust and addresses immediate needs rather than speculative future uses.


Evidence

He mentions that for paralyzed patients, the ability to text messages is a top priority, and privacy concerns are secondary to regaining the ability to communicate.


Major Discussion Point

Building Trust and Addressing Ethical Concerns


Agreed with

– Raquel Urtasun
– Anthony Jules
– Nita Farahany

Agreed on

Importance of building trust and addressing ethical concerns


Brain-computer interfaces helping those with severe medical conditions

Explanation

Oxley discusses how BCIs are primarily being developed to help people with severe medical conditions like paralysis. These devices offer a way for patients to regain some independence and interact with the world.


Evidence

He mentions that BCIs allow paralyzed patients to control digital devices, with texting being a top priority for many users.


Major Discussion Point

Societal Impact and Accessibility


Privacy and data ownership concerns with brain-computer interfaces

Explanation

Oxley acknowledges potential future privacy and data ownership concerns with BCIs. He speculates that as the technology advances, it may be able to make predictions about a user’s intentions or feelings.


Evidence

He mentions the possibility of BCIs building personalized models of users over time, which could raise issues of data sharing, insurance liability, and third-party access.


Major Discussion Point

Ethical Challenges of AI and Robotics



Anthony Jules

Speech speed: 156 words per minute
Speech length: 1289 words
Speech time: 494 seconds

Robotics and AI creating collaborative human-robot workplaces

Explanation

Anthony Jules discusses how robotics and AI are creating collaborative workplaces where humans and robots work together. He emphasizes that this collaboration feels natural and builds on existing interaction mechanisms.


Evidence

Jules describes their robots as shelves that can move autonomously in warehouses and manufacturing plants, working alongside humans in a collaborative manner.


Major Discussion Point

Technological Advancements Enabling Sci-Fi-Like Innovations


Agreed with

– Raquel Urtasun
– Tom Oxley
– Nita Farahany

Agreed on

Technological convergence enabling sci-fi-like innovations


Designing robots for human collaboration rather than replacement

Explanation

Jules emphasizes that their focus is on collaborative robotics, where robots and humans work together on tasks. This approach aims to enhance human capabilities rather than replace workers entirely.


Evidence

He describes workflows where tasks are shared between humans and robots, with turn-taking and natural interactions similar to how people interact with everyday objects.


Major Discussion Point

Building Trust and Addressing Ethical Concerns


Agreed with

– Raquel Urtasun
– Tom Oxley
– Nita Farahany

Agreed on

Importance of building trust and addressing ethical concerns


Robotics changing the nature of work and human-machine interaction

Explanation

Jules discusses how robotics is transforming the nature of work and human-machine interaction. He envisions a future where various robot morphologies are purpose-built to solve specific problems across different environments.


Evidence

He mentions examples of robots that roll, fly, sail, dive, and walk, each designed for specific applications in various industries.


Major Discussion Point

Societal Impact and Accessibility


Safety and trust as key considerations in robotics development

Explanation

Jules emphasizes the importance of safety and trust in robotics development. He argues that safety is a fundamental requirement, while trust is built through consistent value addition and acceptable interaction with humans.


Evidence

He mentions that their robots are designed to allow human agency, where people can physically move or interact with the robots as needed.


Major Discussion Point

Ethical Challenges of AI and Robotics


Differed with

– Raquel Urtasun

Differed on

Approach to ethical decision-making in autonomous systems


Nita Farahany

Speech speed

189 words per minute

Speech length

3215 words

Speech time

1020 seconds

Convergence of AI, sensors, and computing power driving innovation

Explanation

Nita Farahany highlights that the convergence of AI, sensors, and computing power is enabling previously unthinkable technologies. This technological renaissance is making science fiction-like innovations a reality.


Evidence

She mentions brain-computer interfaces, AI-powered vehicles, and robots helping humans as examples of technologies that are now becoming reality.


Major Discussion Point

Technological Advancements Enabling Sci-Fi-Like Innovations


Agreed with

– Raquel Urtasun
– Tom Oxley
– Anthony Jules

Agreed on

Technological convergence enabling sci-fi-like innovations


Balancing rapid deployment with public trust and safety

Explanation

Farahany emphasizes the importance of balancing rapid technological deployment with building public trust and ensuring safety. She highlights this as a key consideration in the development and implementation of emerging technologies.


Evidence

She references a poll showing that 50% of respondents believe trust is foundational for long-term success in introducing transformative technologies.


Major Discussion Point

Building Trust and Addressing Ethical Concerns


Agreed with

– Raquel Urtasun
– Tom Oxley
– Anthony Jules

Agreed on

Importance of building trust and addressing ethical concerns


Yossi Vardi

Speech speed

154 words per minute

Speech length

473 words

Speech time

183 seconds

Importance of equitable access to emerging technologies

Explanation

Yossi Vardi emphasizes the importance of ensuring equitable access to emerging technologies across society. He suggests that the distribution of these technologies could have significant societal implications.


Major Discussion Point

Societal Impact and Accessibility


Need to maintain human judgment alongside technological advances

Explanation

Vardi stresses the importance of maintaining human judgment and ethical considerations alongside technological advancements. He argues that while technology amplifies capabilities, it doesn’t change fundamental human virtues or ethical dilemmas.


Evidence

He emphasizes the continued importance of liberal arts education in developing judgment mechanisms for ethical decision-making in the context of technological advancements.


Major Discussion Point

Ethical Challenges of AI and Robotics


Agreements

Agreement Points

Technological convergence enabling sci-fi-like innovations

speakers

– Raquel Urtasun
– Tom Oxley
– Anthony Jules
– Nita Farahany

arguments

AI and compute power enabling self-driving vehicles


Brain-computer interfaces becoming viable medical treatments


Robotics and AI creating collaborative human-robot workplaces


Convergence of AI, sensors, and computing power driving innovation


summary

All speakers agree that recent advancements in AI, computing power, and sensors are enabling technologies that were previously considered science fiction, such as self-driving vehicles, brain-computer interfaces, and collaborative robotics.


Importance of building trust and addressing ethical concerns

speakers

– Raquel Urtasun
– Tom Oxley
– Anthony Jules
– Nita Farahany

arguments

Using AI simulations to test and validate self-driving safety


Focusing on medical applications to build trust in brain-computer interfaces


Designing robots for human collaboration rather than replacement


Balancing rapid deployment with public trust and safety


summary

The speakers emphasize the importance of building public trust and addressing ethical concerns in the development and deployment of their respective technologies.


Similar Viewpoints

Both speakers envision a future where their technologies significantly transform traditional industries and human-machine interactions.

speakers

– Raquel Urtasun
– Anthony Jules

arguments

Potential for self-driving technology to reshape transportation


Robotics changing the nature of work and human-machine interaction


Both speakers emphasize that their technologies are designed to enhance human capabilities rather than replace humans entirely.

speakers

– Tom Oxley
– Anthony Jules

arguments

Brain-computer interfaces helping those with severe medical conditions


Designing robots for human collaboration rather than replacement


Unexpected Consensus

Ethical challenges in decision-making for autonomous systems

speakers

– Raquel Urtasun
– Anthony Jules

arguments

Difficulty of programming ethical decision-making for self-driving vehicles


Safety and trust as key considerations in robotics development


explanation

Both speakers, despite working in different fields, acknowledge the complexity of programming ethical decision-making into autonomous systems and emphasize the importance of safety and minimizing harm rather than making value judgments about human lives.


Overall Assessment

Summary

The speakers generally agree on the transformative potential of their technologies, the importance of building trust and addressing ethical concerns, and the focus on enhancing human capabilities rather than replacing humans entirely.


Consensus level

There is a high level of consensus among the speakers, particularly on the technological advancements enabling their innovations and the need to address ethical and trust issues. This consensus suggests a shared understanding of the challenges and responsibilities in developing and deploying these transformative technologies, which could lead to more coordinated efforts in addressing public concerns and regulatory challenges.


Differences

Different Viewpoints

Approach to ethical decision-making in autonomous systems

speakers

– Raquel Urtasun
– Anthony Jules

arguments

Difficulty of programming ethical decision-making for self-driving vehicles


Safety and trust as key considerations in robotics development


summary

Urtasun emphasizes the difficulty of programming ethical decisions for unavoidable collisions in self-driving vehicles, while Jules focuses on safety and trust as fundamental requirements in robotics development, without directly addressing the ethical dilemmas.


Unexpected Differences

None identified

Overall Assessment

summary

The main areas of disagreement were subtle and primarily focused on different approaches to addressing ethical concerns and building trust in emerging technologies.


Difference level

The level of disagreement among the speakers was relatively low. Most speakers shared similar views on the potential benefits and challenges of their respective technologies. The minor differences in approach reflect the diverse applications of AI and robotics across different sectors, rather than fundamental disagreements about the technologies themselves. This low level of disagreement suggests a generally unified vision for the future of these technologies, which could facilitate their development and adoption. However, it also highlights the need for continued dialogue to address ethical and societal concerns as these technologies evolve.


Partial Agreements

All speakers agree on the importance of building trust in their respective technologies, but they propose different approaches: Urtasun focuses on extensive simulations, Oxley emphasizes medical applications, and Jules highlights collaborative design.

speakers

– Raquel Urtasun
– Tom Oxley
– Anthony Jules

arguments

Using AI simulations to test and validate self-driving safety


Focusing on medical applications to build trust in brain-computer interfaces


Designing robots for human collaboration rather than replacement


Takeaways

Key Takeaways

Technological convergence of AI, sensors, and computing power is enabling rapid advances in self-driving vehicles, brain-computer interfaces, and robotics


These technologies have potential to significantly impact transportation, medical treatment, and the nature of work


Building public trust and addressing ethical concerns is crucial for successful deployment of these technologies


Ensuring equitable access and distribution of benefits from these technologies remains an important challenge


Safety, privacy, and maintaining human agency are key considerations in developing AI and robotic systems


Resolutions and Action Items

None identified


Unresolved Issues

How to program ethical decision-making for autonomous vehicles in unavoidable accident scenarios


Long-term privacy and data ownership implications of brain-computer interfaces


Potential job displacement and societal impacts from increased automation and robotics


How to ensure equitable global access to emerging technologies


Balancing rapid technological deployment with building public trust and safety


Suggested Compromises

Focusing initial brain-computer interface applications on clear medical needs to build public trust


Designing collaborative human-robot systems rather than fully autonomous replacements


Using AI simulations to extensively test autonomous systems before real-world deployment


Maintaining transparency in technological development to foster public trust


Thought Provoking Comments

Trust is foundational for long-term success.

speaker

Poll respondents


reason

This poll result set the tone for the entire discussion by highlighting the critical importance of public trust in emerging technologies.


impact

It led the panelists to address trust and safety concerns throughout their remarks, shaping the overall framing of the conversation around responsible innovation.


There are really three advances, I would say, that have made possible the fact that now we see deployment in the real world. The first is AI, obviously: we went from AI having a secondary role in hand-engineered systems to now being at the forefront of end-to-end approaches that are also provably safe. So there’s a big transformation that really brings scalability together with compute.

speaker

Raquel Urtasun


reason

This comment provided a concise explanation of the key technological advances enabling real-world deployment of AI systems like self-driving vehicles.


impact

It set up a framework for understanding the convergence of AI, compute power, and sensor technology that was echoed by the other panelists in discussing their own fields.


Synchron is an implantable brain-computer interface. We have developed technology that’s delivered through a catheter up into the brain.

speaker

Tom Oxley


reason

This introduced a less invasive method for brain-computer interfaces, challenging assumptions about the technology.


impact

It shifted the discussion towards more practical, near-term applications of BCI technology rather than far-future speculation.


As an example, what most people have in their pocket has the computing power of the largest supercomputer in the world in the year 2000, or the entire computing capability of the planet in 1990.

speaker

Anthony Jules


reason

This vivid comparison highlighted the exponential growth in computing power in a way that was easy for the audience to grasp.


impact

It contextualized the rapid pace of technological change, setting up the discussion of how these advances are enabling new robotic applications.


What is also very important is that the technology we build has to be able to generalize to anything it might see on the road. It’s impossible to foresee all those different situations by hand, right? So it’s important that the AI system does much more than just memorization.

speaker

Raquel Urtasun


reason

This comment highlighted a key challenge and requirement for AI systems operating in the real world, distinguishing them from more limited AI applications.


impact

It deepened the discussion on AI safety and generalization, leading to exploration of simulation and testing approaches.


I think there’s the other side of the belief that if you can detect motor activity, then you can understand my thoughts. And that’s not true.

speaker

Tom Oxley


reason

This comment directly addressed common misconceptions about brain-computer interfaces and their capabilities.


impact

It shifted the conversation towards a more nuanced understanding of BCI technology and its near-term applications and limitations.


The technology changed the magnitude of things, but it didn’t change the actual virtues of things. You know, justice, love, empathy, cruelty, all of them stay the same; the technology just amplifies them.

speaker

Yossi Vardi


reason

This comment provided a philosophical perspective on the role of technology in society, emphasizing human values.


impact

It broadened the discussion beyond technical capabilities to consider the ethical implications and societal impact of these technologies.


Overall Assessment

These key comments shaped the discussion by grounding it in current technological realities while also exploring broader implications. They moved the conversation from initial excitement about sci-fi-like advances to a more nuanced exploration of practical applications, safety considerations, and ethical challenges. The panelists consistently emphasized the importance of trust, safety, and responsible development, reflecting the audience’s initial concerns. The discussion evolved from explaining basic technological advances to grappling with complex questions about AI generalization, privacy, and the role of human judgment in increasingly automated systems.


Follow-up Questions

How can we ensure equitable distribution of advanced technologies across society?

speaker

Nita Farahany


explanation

This question addresses concerns about access to new technologies and their potential to widen societal divides.


What are potential misapplications of AI, particularly in arts or home applications?

speaker

Audience member (Stefan Schneider)


explanation

This explores the boundaries of appropriate AI use and potential overreach in certain domains.


What is the future potential and risk-benefit ratio of brain-computer interfaces for reading and influencing people’s minds?

speaker

Audience member (Andreas Schappewald)


explanation

This addresses both the possibilities and ethical concerns of advanced neurotechnology.


How should autonomous vehicles be programmed to handle ethical dilemmas in unavoidable collision scenarios?

speaker

Audience member (Artem)


explanation

This explores the complex ethical decisions that need to be encoded into AI systems.


How can we develop more efficient and cost-effective AI systems to ensure broader access?

speaker

Raquel Urtasun


explanation

This addresses the need for technological advancements that enable wider adoption and use of AI.


What ecosystems need to be created in different geographies to drive access to robotic technologies?

speaker

Anthony Jules


explanation

This explores the broader infrastructure and support systems needed for technology adoption.


How can we ensure privacy, security, and appropriate data ownership as brain-computer interfaces become more advanced?

speaker

Tom Oxley


explanation

This addresses critical ethical and practical concerns as neurotechnology progresses.


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.